A team of AI researchers at Microsoft Research Asia has developed an AI application that converts a still image of a person and an audio track into an animation that accurately portrays the individual speaking or singing the audio track with appropriate facial expressions.

The team has published a paper describing how they created the app on the arXiv preprint server; video samples are available on the research project page.

The research team sought to animate still images so that they appear to talk or sing along with any provided audio track while displaying believable facial expressions. They clearly succeeded with the development of VASA-1, an AI system that turns static images, whether captured by a camera, drawn, or painted, into what they describe as “exquisitely synchronized” animations.

The rapid progress of humanoid robot development is nothing short of astounding. Less than 12 months after introducing its 6th-gen general-purpose humanoid, Canada’s Sanctuary AI has pulled back the curtains on the next iteration of Phoenix.

Sanctuary has been working on a general-purpose humanoid robot for a few years now, with much of the development focused on building and training the upper torso to perform an array of tasks so we don’t have to – including putting labels on boxes, bagging groceries, moving packages, scanning products and soldering. However, most of the videos in the “Robots Doing Stuff” series actually show the robots being teleoperated, which is how they are taught to perform tasks.

A couple of months later, Sanctuary introduced a bipedal version called Phoenix, together with an AI control system called Carbon designed to give the humanoid “human-like intelligence and enable it to do a wide range of work tasks.” Some 11 months later, the seventh generation is ready for its time in the spotlight.

Artificial intelligence (AI) has progressed at an astounding pace over the last few years. Some scientists are now looking toward the development of artificial superintelligence (ASI) – a form of AI that would not only surpass human intelligence but would also not be bound by the learning speeds of humans.

But what if this milestone isn’t just a remarkable achievement? What if it also represents a formidable bottleneck in the development of all civilizations, one so challenging that it thwarts their long-term survival?

This idea is at the heart of a research paper I recently published in Acta Astronautica. Could AI be the universe’s “great filter” – a threshold so hard to overcome that it prevents most life from evolving into space-faring civilizations?

Each CRISPR system has two parts: a strand of RNA that matches the target and a protein that makes the edit. The most commonly used protein for gene editing is called “Cas9,” but scientists have discovered CRISPRs with other proteins that give them unique capabilities — while CRISPR-Cas9 slices through DNA, for example, CRISPR-Cas13 targets RNA.

Our current CRISPR gene editors are far from perfect, though. They can make edits in the wrong places or edit too few cells to make a difference, so researchers are constantly on the hunt for new CRISPR systems.

AI-designed CRISPR: Until now, that hunt has been limited to CRISPRs discovered in nature. Profluent, however, has used the same type of AI model that allows ChatGPT to generate language to build an AI platform that can generate millions of CRISPR-like proteins.

Demis Hassabis, the CEO of Google DeepMind, expects AI systems in the near future to not only answer questions but also plan and act independently.

In an interview with Bloomberg, Hassabis says his company is working on such “agent-like” systems that could be ready for use in one to two years.

“I’m really excited about the next stage of these large general models. You know, I think the next things we’re going to see perhaps this year, maybe next year, is more agent like behavior,” says Hassabis.

Summary: Researchers created the largest 3D reconstruction of human brain tissue at synaptic resolution, capturing detailed images of a cubic millimeter of human temporal cortex. This tiny piece of brain contains 57,000 cells, 230 millimeters of blood vessels, and 150 million synapses, which amounts to 1,400 terabytes of data.

This research is part of a broader effort to map an entire mouse brain’s neural wiring, with hopes of advancing our understanding of brain function and disease. The technology combines high-resolution electron microscopy and AI-powered algorithms to meticulously color-code and map out the complex neural connections.