
Researchers reconstruct speech from brain activity, illuminating complex neural processes

Speech production is a complex neural phenomenon that has long left researchers trying to explain it tongue-tied. Disentangling the web of neural regions that control precise muscle movements in the mouth, jaw and tongue from the regions that process the auditory feedback of hearing your own voice is a hard problem, and one that must be solved for the next generation of speech-producing prostheses.

Now, a team of researchers from New York University has made key discoveries that help untangle that web, and is using them to build vocal reconstruction technology that recreates the voices of patients who have lost the ability to speak.

The team was co-led by Adeen Flinker, Associate Professor of Biomedical Engineering at NYU Tandon and of Neurology at NYU Grossman School of Medicine, and Yao Wang, Professor of Biomedical Engineering and Electrical and Computer Engineering at NYU Tandon and a member of NYU WIRELESS. The researchers built neural networks that recreate speech from brain recordings, then used those recreations to analyze the neural processes that drive speech production.

Exploring parameter shift for quantum Fisher information

In a recent publication in EPJ Quantum Technology, Le Bin Ho from Tohoku University’s Frontier Institute for Interdisciplinary Sciences has developed a technique called time-dependent stochastic parameter shift in the realm of quantum computing and quantum machine learning. This breakthrough method revolutionizes the estimation of gradients or derivatives of functions, a crucial step in many computational tasks.

Typically, computing a derivative requires discretizing the function and calculating its rate of change over a small interval. But even classical computers cannot keep subdividing indefinitely. In contrast, quantum computers can accomplish this task without having to discretize the function. This is possible because quantum computers operate in a realm characterized by periodicity, which removes the need for endless subdivisions.

One way to illustrate this concept is by comparing the sizes of two schools on a map. To do this, one might print out maps of the schools and then cut them into pieces. After cutting, these pieces can be arranged in a line and their total lengths compared (see Figure 1a). However, the pieces may not form a perfect rectangle, leading to inaccuracies. Minimizing these errors would require infinitely many subdivisions, an impractical solution even for classical computers.
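The periodicity idea can be made concrete with the standard (time-independent) parameter-shift rule, which Le Bin Ho's stochastic method generalizes. Below is a minimal classical simulation of the rule for a single-qubit RY rotation measured in the Z basis; the one-qubit setup and function names are illustrative, not taken from the paper. Because the expectation value is periodic in the rotation angle, two evaluations at shifted angles give the exact gradient, with no small-interval limit:

```python
import math

def expectation(theta):
    # f(theta) = <0| RY(theta)^dag Z RY(theta) |0>; analytically this is cos(theta)
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    # RY(theta)|0> = (cos(theta/2), sin(theta/2)); <psi| Z |psi> = c^2 - s^2
    return c * c - s * s

def parameter_shift_grad(theta, shift=math.pi / 2):
    # Two evaluations at theta +/- pi/2 yield the exact derivative of f,
    # exploiting the periodicity of the circuit output -- no discretization.
    return (expectation(theta + shift) - expectation(theta - shift)) / 2

theta = 0.7
grad = parameter_shift_grad(theta)  # equals the analytic derivative -sin(theta)
```

On real hardware each `expectation` call would be a circuit execution; the time-dependent stochastic variant described in the paper extends this two-point recipe to gates whose generators make the simple shift rule inapplicable.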

Tens of Thousands of People Can Now Order a Waymo Robotaxi Anywhere in San Francisco

On Monday, Waymo announced on X that it’s expanding its city-wide, fully autonomous robotaxi service to thousands more riders in San Francisco.

The company had been testing a service area of nearly the whole city (around 47 square miles) with employees and, later, a group of test riders. But most people using the service were precluded from riding in the city’s dense northeast corner, an area including Fisherman’s Wharf, the Embarcadero, and Chinatown.

Now, the full San Francisco service area will be available to all current Waymo One users—amounting to tens of thousands of people, according to TechCrunch. While it’s a significant increase, not just anyone can use Waymo in SF yet. The company has been growing the service by admitting new riders from a waitlist that numbered 100,000 in June.

The Cosmic Tapestry: Universal Consciousness and the Big Bang

From the vast expanse of galaxies that paint our night skies to the intricate neural networks within our brains, everything we know and see can trace its origins back to a singular moment: the Big Bang. It’s a concept that has not only reshaped our understanding of the universe but also offers profound insights into the interconnectedness of all existence.

Imagine, if you will, the entire universe compressed into an infinitesimally small point. This is not a realm of science fiction but the reality of our cosmic beginnings. Around 13.8 billion years ago, a singular explosion gave birth to time, space, matter, and energy. And in that magnificent burst of creation, the seeds for everything — galaxies, stars, planets, and even us — were sown.

But what if the Big Bang was not just a physical event? What if it also marked the birth of a universal consciousness? A consciousness that binds every particle, every star, and every living being in a cosmic tapestry of shared experience and memory.

Bioprinted Skin Heals Wounds in Pigs With Minimal Scarring—Humans Are Next

Given these perks, it’s no wonder scientists have tried recreating skin in the lab. Artificial skin could, for example, cover robots or prosthetics to give them the ability to “feel” temperature, touch, or even heal when damaged.

It could also be a lifesaver. The skin’s self-healing powers have limits. People who suffer from severe burns often need a skin transplant taken from another body part. While effective, the procedure is painful and increases the chances of infection. In some cases, there might not be enough undamaged skin left. A similar dilemma haunts soldiers wounded in battle or those with inherited skin disorders.

Recreating all the skin’s superpowers is tough, to say the least. But last week, a team from Wake Forest University took a major step toward artificial skin that heals large wounds when transplanted onto mice and pigs.

Are we ready to trust AI with our bodies?

Over the next few years, artificial intelligence is going to have a bigger and bigger effect on the way we live.

I hate going to the gym. Last year I hired a personal trainer for six months in the hope she would brainwash me into adopting healthy exercise habits longer-term. It was great, but personal trainers are prohibitively expensive, and I haven’t set foot in a gym once since those six months came to an end.

That’s why I was intrigued when I read my colleague Rhiannon Williams’s latest piece about AI gym trainers.

Yepic fail: This startup promised not to make deepfakes without consent, but did anyway

U.K.-based startup Yepic AI claims to use “deepfakes for good” and promises to “never reenact someone without their consent.” But the company did exactly what it claimed it never would.

In an unsolicited email pitch to a TechCrunch reporter, a representative for Yepic AI shared two “deepfaked” videos of the reporter, who had not given consent to having their likeness reproduced. Yepic AI said in the pitch email that it “used a publicly available photo” of the reporter to produce two deepfaked videos of them speaking in different languages.

The reporter requested that Yepic AI delete the deepfaked videos it created without permission.

Anysphere raises $8M from OpenAI to build an AI-powered IDE

Anysphere, a startup building what it describes as an “AI-native” software development environment, called Cursor, today announced that it raised $8 million in seed funding led by OpenAI’s Startup Fund with participation from former GitHub CEO Nat Friedman, Dropbox co-founder Arash Ferdowsi and other angel investors.

The new cash, which brings Anysphere’s total raised to $11 million, will be put toward hiring and supporting Anysphere’s AI and machine learning research, co-founder and CEO Michael Truell said.

“In the next several years, our mission is to make programming an order of magnitude faster, more fun and creative,” Truell told TechCrunch in an email interview. “Our platform enables all developers to build software faster.”
