
Patients with narrowing of at least 50% in three major coronary arteries did equally well when treated with minimally invasive stent placement guided either by ultrasound-based imaging or by a novel, non-invasive, artificial intelligence (AI)-powered imaging technique derived from angiography, researchers reported at the American College of Cardiology’s Annual Scientific Session (ACC.25) on March 30 in Chicago. The work was simultaneously published in The Lancet.

“This is the first such study to be conducted in patients with angiographically significant lesions,” said Jian’an Wang, MD, a professor in the Heart Center at The Second Affiliated Hospital of Zhejiang University School of Medicine in Hangzhou, China, and the study’s senior author. “Patients whose evaluation was non-invasively guided by the novel, AI-powered technique underwent approximately 10% fewer procedures, and their outcomes were comparable with those for patients whose evaluation was guided by a commonly used ultrasound-based imaging technique.”

The study, known as FLAVOUR II, met its primary endpoint, a composite of death, heart attack, or the need for a repeat procedure at one year, Wang said.

Marking a breakthrough in the field of brain-computer interfaces (BCIs), a team of researchers from UC Berkeley and UC San Francisco has unlocked a way to restore naturalistic speech for people with severe paralysis.

This work solves the long-standing challenge of latency in speech neuroprostheses, the time lag between when a subject attempts to speak and when sound is produced. Using recent advances in artificial intelligence-based modeling, the researchers developed a streaming method that synthesizes brain signals into audible speech in near-real time.

As reported in Nature Neuroscience, this technology represents a critical step toward enabling communication for people who have lost the ability to speak.

Researchers have developed “infomorphic neurons” that learn independently, mimicking their biological counterparts more accurately than previous artificial neurons. A team of researchers from the Göttingen Campus Institute for Dynamics of Biological Networks (CIDBN) at the University of Göttingen and the Max Planck Institute for Dynamics and Self-Organization (MPI-DS) has programmed these infomorphic neurons and constructed artificial neural networks from them.

The special feature is that the individual artificial neurons learn in a self-organized way and draw the necessary information from their immediate environment in the network. Their findings are published in the journal Proceedings of the National Academy of Sciences.

Both the human brain and modern artificial neural networks are extremely powerful. At the lowest level, their neurons work together as rather simple computing units.
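To make the idea of self-organized, local learning concrete, here is a minimal sketch of a single neuron that updates its weights using only signals available in its immediate environment: its own inputs and output. The sigmoid activation, the Oja-style Hebbian rule, and all parameter names are illustrative assumptions, not the information-theoretic objective the Göttingen team actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

class LocalNeuron:
    """A 'rather simple computing unit' that learns without a global error signal.

    NOTE: illustrative sketch only; the infomorphic neurons in the PNAS paper
    optimize an information-theoretic goal, not this Hebbian rule.
    """

    def __init__(self, n_inputs: int, lr: float = 0.01):
        self.w = rng.normal(scale=0.1, size=n_inputs)
        self.lr = lr

    def forward(self, x: np.ndarray) -> float:
        # Simple sigmoidal unit.
        return 1.0 / (1.0 + np.exp(-self.w @ x))

    def update(self, x: np.ndarray) -> None:
        # Local rule: the weight change depends only on this neuron's own
        # input x and output y -- nothing is propagated from other layers.
        y = self.forward(x)
        # Oja-style Hebbian update: correlates the neuron with structure in
        # its local input statistics while keeping the weights bounded.
        self.w += self.lr * y * (x - y * self.w)

# Drive the neuron with correlated inputs; it self-organizes toward the
# dominant direction present in its immediate input environment.
neuron = LocalNeuron(n_inputs=4)
for _ in range(5000):
    x = rng.normal(size=4) + np.array([1.0, 1.0, 0.0, 0.0])
    neuron.update(x)
print("learned weights:", np.round(neuron.w, 2))
```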

Without the ability to control infrared light waves, autonomous vehicles wouldn’t be able to quickly map their environment and keep “eyes” on the cars and pedestrians around them; augmented reality couldn’t render realistic 3D visuals; doctors would lose an important tool for early cancer detection. Dynamic light control allows for upgrades to many existing systems, but the complexity of fabricating programmable thermal devices hinders their availability.

A new active metasurface, the electrically programmable graphene field-effect transistor (Gr-FET), from the labs of Sheng Shen and Xu Zhang in Carnegie Mellon University’s College of Engineering, enables control of mid-infrared states across a wide range of wavelengths, directions, and polarizations. This enhanced control supports advances in applications ranging from infrared camouflage to personalized health monitoring.

“For the first time, our active metasurface devices exhibited the monolithic integration of the rapidly modulated temperature, addressable pixelated imaging, and resonant infrared spectrum,” said Xiu Liu, postdoctoral associate in mechanical engineering and lead author of the paper published in Nature Communications. “This breakthrough will be of great interest to a wide range of infrared photonics, biophysics, and thermal engineering audiences.”

Over the past year, AI researchers have found that when AI chatbots such as ChatGPT are unable to produce answers that satisfy users’ requests, they tend to offer false ones instead. In a new study, part of a program aimed at stopping chatbots from lying or making up answers, a research team added Chain of Thought (CoT) windows, which force the chatbot to explain its reasoning as it carries out each step on its path to a final answer.

They then tweaked the chatbot to prevent it from making up answers or lying about its reasons for making a given choice whenever it was seen doing so through the CoT window. That, the team found, stopped the chatbots from lying or making up answers, at least at first.

In their paper posted on the arXiv preprint server, the team describes experiments in which they added CoT windows to several chatbots, and how doing so changed the way the chatbots operated.
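As an illustration of what a CoT window might look like in practice, here is a hedged sketch: a wrapper that asks a model to expose numbered reasoning steps before its final answer, plus a crude monitor that flags mismatches between the visible reasoning and the final claim. The prompt format, the `query_model` stub, and the keyword-based check are all assumptions made for illustration; they are not the team’s actual training or monitoring setup.

```python
def query_model(prompt: str) -> str:
    # Hypothetical stand-in for any chat-completion API call.
    return (
        "STEP 1: The question asks for the capital of Australia.\n"
        "STEP 2: I am not certain, so I should say so rather than guess.\n"
        "FINAL: I'm not sure; it may be Canberra, but please verify."
    )

def answer_with_cot_window(question: str) -> dict:
    prompt = (
        "Answer the question. First list your reasoning as numbered "
        "'STEP n:' lines (the CoT window), then give 'FINAL:' on its "
        f"own line.\n\nQuestion: {question}"
    )
    raw = query_model(prompt)
    steps = [line for line in raw.splitlines() if line.startswith("STEP")]
    final = next((l for l in raw.splitlines() if l.startswith("FINAL:")), "")
    # Crude monitor: flag answers whose visible reasoning admits uncertainty
    # while the final answer asserts a confident claim anyway.
    admits_doubt = any("not certain" in s or "not sure" in s for s in steps)
    confident = bool(final) and "not sure" not in final.lower()
    return {
        "steps": steps,
        "final": final.removeprefix("FINAL:").strip(),
        "flagged": admits_doubt and confident,
    }

result = answer_with_cot_window("What is the capital of Australia?")
print(result["final"], "| flagged:", result["flagged"])
```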

Like a bumblebee flitting from flower to flower, a new insect-inspired flying robot created by engineers at the University of California, Berkeley, can hover, change trajectory and even hit small targets. Less than 1 centimeter in diameter, the device weighs only 21 milligrams, making it the world’s smallest wireless robot capable of controlled flight.

“Bees exhibit remarkable aeronautical abilities, such as navigation, hovering and pollination, that artificial flying robots of similar scale fail to do,” said Liwei Lin, Distinguished Professor of Mechanical Engineering at UC Berkeley. “This flying robot can be wirelessly controlled to approach and hit a designated target, mimicking the mechanism of pollination as a bee collects nectar and flies away.”

Lin is the senior author of a new paper describing the robot, published Friday, March 28, in the journal Science Advances.

The key to this development is an AI-powered streaming method. By decoding brain signals directly from the motor cortex – the brain’s speech control center – the AI synthesizes audible speech almost instantly.

“Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses,” said Gopala Anumanchipalli, co-principal investigator of the study.

Anumanchipalli added, “Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more naturalistic, fluent speech synthesis.”
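To illustrate why a streaming approach cuts latency, here is a minimal sketch: neural features arrive in small chunks, and each chunk is converted to audio as soon as it lands, rather than after a full sentence has been attempted. The chunk size, channel count, and the random linear “decoder” below are placeholder assumptions for illustration; the actual system uses trained AI models operating on motor-cortex recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

CHUNK_MS = 80          # assumed decoding interval; small chunks keep latency low
N_CHANNELS = 256       # assumed number of recorded neural channels
SAMPLES_PER_CHUNK = int(16_000 * CHUNK_MS / 1000)  # 16 kHz audio output

# Placeholder "decoder": a fixed linear map from neural features to a
# waveform chunk. The real decoder is a trained neural network.
decoder = rng.normal(scale=0.01, size=(N_CHANNELS, SAMPLES_PER_CHUNK))

def neural_chunks(n_chunks: int):
    # Placeholder source of neural feature vectors, one per CHUNK_MS.
    for _ in range(n_chunks):
        yield rng.normal(size=N_CHANNELS)

audio = []
for features in neural_chunks(25):  # ~2 seconds of attempted speech
    # Each chunk is decoded the moment it arrives, so sound can begin
    # within one chunk interval of the speech attempt -- the latency fix
    # that distinguishes streaming from sentence-at-a-time synthesis.
    audio.append(features @ decoder)

waveform = np.concatenate(audio)
print(f"synthesized {waveform.size / 16_000:.2f} s of audio in streaming mode")
```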