
The fusion of biological principles with technological innovation has resulted in significant advancements in artificial intelligence (AI) through the development of Brainoware. Developed by researchers at Indiana University, Bloomington, this innovative system leverages clusters of lab-raised brain cells to achieve elementary speech recognition and solve mathematical problems.

The crux of this technological leap lies in the cultivation of specialized stem cells that mature into neurons—the fundamental units of the brain. While a typical human brain comprises a staggering 86 billion neurons interconnected extensively, the team engineered only a minute organoid. This tiny but powerful structure was connected to a circuit board through an array of electrodes, allowing machine-learning algorithms to decode responses from the brain tissue.
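This division of labor resembles reservoir computing: the tissue acts as a fixed, nonlinear "reservoir," and only a simple linear readout on its recorded responses is trained. The sketch below is a minimal illustration of that idea, with a random nonlinear projection standing in for the organoid's electrode recordings (the task, dimensions, and data here are invented for the example, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the organoid: a fixed random nonlinear map from
# 8-dimensional input signals to 200 "electrode" responses.
W_in = rng.normal(size=(200, 8))

def reservoir(u):
    """Fixed, untrained nonlinear projection (real Brainoware records voltages)."""
    return np.tanh(W_in @ u)

# Toy task: identify which of 8 "speakers" produced an input pattern.
X = rng.normal(size=(400, 8))             # 400 input samples
labels = rng.integers(0, 8, size=400)     # speaker identity per sample
X[np.arange(400), labels] += 2.0          # embed a recoverable class signal

states = np.array([reservoir(u) for u in X])

# Only the linear readout is trained: ridge regression onto one-hot targets.
T = np.eye(8)[labels]
W_out = np.linalg.solve(states.T @ states + 1e-3 * np.eye(200), states.T @ T)

pred = (states @ W_out).argmax(axis=1)
accuracy = (pred == labels).mean()
```

Training only the readout is what makes such hybrid systems practical: the biological component never needs to be reprogrammed, only observed.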

Termed Brainoware, this amalgamation of biological neurons and computational circuits exhibited remarkable capabilities after a brief training period. It distinguished among eight subjects based on their differing vowel pronunciations with an accuracy rate of 78%. Impressively, Brainoware also outperformed artificial neural networks in predicting the Hénon map, a classic mathematical construct in chaotic dynamics.
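The Hénon map itself is simple to state: x_{n+1} = 1 − a·x_n² + y_n, y_{n+1} = b·x_n, which with the classic parameters a = 1.4, b = 0.3 produces chaotic orbits. A minimal Python sketch of the prediction target:

```python
def henon_step(x, y, a=1.4, b=0.3):
    """One iteration of the Henon map with the classic chaotic parameters."""
    return 1.0 - a * x * x + y, b * x

def henon_orbit(n, x0=0.0, y0=0.0):
    """Return the first n points of the orbit starting from (x0, y0)."""
    points = []
    x, y = x0, y0
    for _ in range(n):
        x, y = henon_step(x, y)
        points.append((x, y))
    return points

orbit = henon_orbit(1000)  # points settle onto the strange attractor
```

Forecasting such an orbit is a standard benchmark for reservoir-style systems, because chaotic sensitivity to initial conditions makes long-horizon prediction genuinely hard.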

Prime editing, however, has bottlenecks of its own. The pegRNA molecules it relies on are difficult and expensive to chemically synthesise, or laborious to clone, which hampers the crucial optimisation of prime-editing efficiency. Additionally, the reverse transcriptase (RT) enzymes used in prime editing are relatively error-prone and have low processivity, which may limit the precision and size of edits that can be introduced. Furthermore, RTs have a low affinity for dNTPs, which can impair prime-editing efficiency in non-dividing and differentiated cells.

To address these issues, two research groups led by Dr. Ben Kleinstiver at Mass General Hospital (MGH) & Harvard Medical School, and Dr. Erik Sontheimer at the RNA Therapeutics Institute (UMass Chan Medical School) have independently developed new approaches that build upon prime editing by replacing RT with another type of enzyme, namely a DNA-dependent DNA polymerase. This change permits the use of DNA instead of RNA as a template for editing, potentially addressing some of the main limitations of prime editing by allowing higher efficiency and adaptability.

A research team led by Prof. Sun Zhong at Peking University has reported an analog hardware solution for real-time compressed sensing recovery. It has been published as an article titled, “In-memory analog solution of compressed sensing recovery in one step” in Science Advances.

In this work, a design based on a resistive memory (also known as memristor) array for performing instantaneous matrix-matrix-vector multiplication (MMVM) is introduced. Based on this module, an analog matrix computing circuit that solves compressed sensing (CS) in one step (within a few microseconds) is disclosed.

CS has been a cornerstone of modern signal and image processing, with applications in fields such as wireless communications, object tracking, and single-pixel cameras. In CS, sparse signals can be highly undersampled at the front-end sensor, breaking through the Nyquist rate and thus significantly improving sampling efficiency.
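As a point of reference for what "recovery" means here, the sketch below reconstructs a sparse signal from far fewer measurements than its length. It uses plain NumPy and orthogonal matching pursuit, a standard iterative algorithm, rather than the paper's one-step analog circuit; the problem sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 128, 64, 4                         # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix (m << n)

# A k-sparse ground-truth signal, undersampled as y = A @ x.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.uniform(1.0, 3.0, size=k) * rng.choice([-1.0, 1.0], size=k)
y = A @ x_true

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select atoms, refit by least squares."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)  # refit on chosen atoms
        residual = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

x_hat = omp(A, y, k)
```

Iterative solvers like this are exactly what the in-memory analog approach aims to collapse into a single circuit-settling step of a few microseconds.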

A game-changer in prosthetics has been introduced to the world, and for the first time, amputees are regaining sensation through electrical signals from their prosthetic arm. Max Ortiz-Catalan, a professor of bionics, explains the process of implanting these mind-controlled bionic arms through direct skeletal attachment. The researcher takes us through every step of this groundbreaking advancement in bionic medicine, from surgically implanting electrodes to fitting the prosthesis and training for everyday use.

Harvard researchers have realized a key milestone in the quest for stable, scalable quantum computing, an ultra-high-speed technology that will enable game-changing advances in a variety of fields, including medicine, science, and finance.

The team, led by Mikhail Lukin, the Joshua and Beth Friedman University Professor in physics and co-director of the Harvard Quantum Initiative, has created the first programmable, logical quantum processor, capable of encoding up to 48 logical qubits and executing hundreds of logical gate operations, a vast improvement over prior efforts.

Published in Nature, the work was performed in collaboration with Markus Greiner, the George Vasmer Leverett Professor of Physics; colleagues from MIT; and QuEra Computing, a Boston company founded on technology from Harvard labs.

A team of international scientists has developed an ultra-high-speed signal processor that can analyze 400,000 real-time video images concurrently, according to a paper published in Communications Engineering.

The team, led by Swinburne University of Technology’s Professor David Moss, has developed a processor that operates at a record 17 terabits per second (trillion bits per second), more than 10,000 times faster than typical electronic processors, which operate at gigabits per second.

The technology has implications for the safety and efficiency of driverless cars, and could help find planets beyond our solar system.

A “chaperone” molecule that slows the formation of certain proteins reversed disease signs, including memory impairment, in a mouse model of Alzheimer’s disease, according to a study from researchers at the Perelman School of Medicine at the University of Pennsylvania.

In the study, published in Aging Biology, researchers examined the effects of a compound called 4-phenylbutyrate (PBA), a fatty-acid molecule known to work as a “chemical chaperone” that inhibits protein misfolding. In mice that model Alzheimer’s disease, injections of PBA helped to restore signs of normal proteostasis (the protein regulation process) in the animals’ brains while also dramatically improving their performance on a standard memory test, even when administered late in the disease course.

“By generally improving neuronal and cellular health, we can mitigate or delay the effects of the disease,” said study senior author Nirinjini Naidoo, Ph.D., a research associate professor of Sleep Medicine. “In addition, reducing proteotoxicity—damage to the cell that is caused by an accumulation of impaired and misfolded proteins—can help improve some previously lost brain functions.”