
Scientists have published the most detailed data set to date on the neural connections of the brain, obtained from a cubic-millimeter tissue sample.


A cubic millimeter of brain tissue may not sound like much. But considering that this tiny cube contains 57,000 cells, 230 millimeters of blood vessels, and 150 million synapses, all amounting to 1,400 terabytes of data, Harvard and Google researchers have just accomplished something stupendous.

Led by Jeff Lichtman, the Jeremy R. Knowles Professor of Molecular and Cellular Biology and newly appointed dean of science, the Harvard team helped create the largest 3D brain reconstruction to date, showing in vivid detail each cell and its web of connections in a piece of temporal cortex about half the size of a rice grain.

Published in Science, the study is the latest development in a nearly 10-year collaboration with scientists at Google Research, combining Lichtman’s electron microscopy imaging with AI algorithms to color-code and reconstruct the extremely complex wiring of mammal brains. The paper’s three first co-authors are former Harvard postdoc Alexander Shapson-Coe, Michał Januszewski of Google Research, and Harvard postdoc Daniel Berger.

Researchers at Istituto Italiano di Tecnologia (IIT-Italian Institute of Technology), in collaboration with the University of Freiburg, have developed a biohybrid robot consisting of a flour-based capsule, created using 3D microfabrication techniques, and two natural appendages from an oat fruit that move in response to air humidity.

In what has become a familiar refrain when discussing artificial intelligence (AI)-enabled technologies, voice cloning makes possible beneficial advances in accessibility and creativity while also enabling increasingly sophisticated scams and deepfakes. To combat the potential negative impacts of voice cloning technology, the U.S. Federal Trade Commission (FTC) challenged researchers and technology experts to develop breakthrough ideas on preventing, monitoring and evaluating malicious voice cloning.

Ning Zhang, an assistant professor of computer science and engineering in the McKelvey School of Engineering at Washington University in St. Louis, was one of three winners of the FTC’s Voice Cloning Challenge announced April 8. Zhang explained his winning project, DeFake, which deploys a kind of watermarking for voice recordings. DeFake embeds carefully crafted distortions that are imperceptible to the human ear into recordings, making criminal cloning more difficult by eliminating usable voice samples.

“DeFake uses a technique of adversarial AI that was originally part of the cybercriminals’ toolbox, but now we’re using it to defend against them,” Zhang said. “Voice cloning relies on the use of pre-existing speech samples to clone a voice, which are generally collected from social media and other platforms. By perturbing the recorded audio signal just a little bit, just enough that it still sounds right to human listeners, but it’s completely different to AI, DeFake obstructs cloning by making criminally synthesized speech sound like other voices, not the intended victim.”
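As a rough illustration of the underlying idea, the sketch below searches for a tiny, amplitude-bounded perturbation that shifts a stand-in speaker-feature function as far as possible from the original signal. This is not DeFake's code: the toy embedding, the epsilon budget, and the random-search loop are illustrative assumptions; a real adversarial system would attack an actual speaker-encoder model, typically with gradient-based methods and a perceptual rather than a plain amplitude constraint.

```python
# Minimal sketch of the general idea behind adversarial audio perturbation:
# noise small enough to be hard to hear, but chosen to shift what a model
# "hears". NOT the DeFake implementation; everything below is illustrative.
import numpy as np

def toy_speaker_embedding(audio: np.ndarray) -> np.ndarray:
    """Stand-in for a voice-cloning model's speaker encoder:
    a coarse log-magnitude spectrum of the waveform."""
    spectrum = np.abs(np.fft.rfft(audio))
    return np.log1p(spectrum[:128])  # keep only the lowest 128 bins

def perturb(audio: np.ndarray, epsilon: float = 0.005, steps: int = 200,
            rng=None) -> np.ndarray:
    """Random-search perturbation capped at `epsilon` per sample (a crude
    stand-in for 'imperceptible to human listeners') that pushes the toy
    embedding as far as possible from the original."""
    rng = rng or np.random.default_rng(0)
    original = toy_speaker_embedding(audio)
    best_delta = np.zeros_like(audio)
    best_dist = 0.0
    for _ in range(steps):
        delta = np.clip(best_delta + 0.001 * rng.standard_normal(audio.shape),
                        -epsilon, epsilon)
        dist = np.linalg.norm(toy_speaker_embedding(audio + delta) - original)
        if dist > best_dist:  # keep a change only if it moves the embedding further away
            best_delta, best_dist = delta, dist
    return audio + best_delta

if __name__ == "__main__":
    sr = 16_000
    t = np.arange(sr) / sr
    voice = 0.3 * np.sin(2 * np.pi * 220 * t)  # 1-second placeholder "voice"
    protected = perturb(voice)
    print("max per-sample change:", np.max(np.abs(protected - voice)))
```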

For our final feature celebrating Women’s History Month, we interviewed Chiara Bartolozzi, a senior researcher moving the needle in neuromorphic engineering.

Every year for Women’s History Month, All About Circuits spotlights the contributions of distinguished women engineers worldwide. For this article, we interviewed Chiara Bartolozzi, a senior researcher and neuromorphic chip expert at the Italian Institute of Technology (IIT).

Since earning a degree in engineering from the University of Genova and a Ph.D. in neuroinformatics from ETH Zurich, Bartolozzi has led important research in neuromorphic engineering. She also helped design iCub, a toddler-sized humanoid robot developed at IIT that serves as a robotics testbed worldwide.

Gary Marcus’ book Kluge is about the human brain and its workings. And I have been interested in how the brain works since my undergraduate days at Allegheny College, working with Pete Elias and researching learning in mice (1968), and especially into my doctoral work with Dick King at UNC-Chapel Hill. I actually think there has been only modest improvement in some aspects of what we have learned about the brain since I graduated in 1977.

But we have come a long way… In ancient Greece, thinkers like Hippocrates and Aristotle grappled with the nature of the mind and its connection to the brain. While Hippocrates believed that the brain was the seat of intelligence and consciousness, Aristotle argued that the heart was the center of reason and emotion, with the brain serving merely as a cooling mechanism. We now know that the brain actually does have some impacts on thinking for most people. (grin)

I thought I’d share the AI book summary produced by Perplexity when I asked it to summarize the main ideas about how the brain evolved to produce this thing we call consciousness… I slightly edited the output. As Spock would say, “Fascinating.”

A collaborative research team from NIMS and Tokyo University of Science has developed a cutting-edge artificial intelligence (AI) device that performs brain-like information processing through few-molecule reservoir computing, exploiting the molecular vibrations of a small number of organic molecules. When applied to blood glucose level prediction in patients with diabetes, the device significantly outperformed existing AI devices in prediction accuracy.

With the expansion of machine learning applications across industries, there is escalating demand for AI devices that combine high computational capability with low power consumption and small size. Research has therefore shifted towards physical reservoir computing, which exploits the physical dynamics of materials and devices for neural information processing. One remaining challenge is the relatively large size of existing materials and devices.
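To make the reservoir computing principle concrete, here is a minimal software echo-state-network sketch in plain NumPy: a fixed, random recurrent network plays the role of the reservoir, and only a linear readout is trained. This is a generic illustration, not the NIMS/Tokyo University of Science device, which replaces the simulated reservoir with molecular vibrations; the reservoir size, spectral radius, and the toy sine-prediction task are assumptions chosen only for demonstration.

```python
# Minimal echo-state-network sketch of the reservoir computing principle:
# a fixed nonlinear dynamical system ("reservoir") transforms the input,
# and only a linear readout is trained. Here the reservoir is simulated;
# in a physical reservoir computer it would be a material or device.
import numpy as np

rng = np.random.default_rng(42)
n_in, n_res = 1, 200

# Fixed random weights: these are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def run_reservoir(u: np.ndarray) -> np.ndarray:
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.array([u_t]) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a noisy sine wave.
t = np.arange(2000)
signal = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)
u, y = signal[:-1], signal[1:]

X = run_reservoir(u)
X_train, y_train = X[200:1500], y[200:1500]  # discard warm-up states

# Train only the linear readout, via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X_train.T @ X_train + ridge * np.eye(n_res),
                        X_train.T @ y_train)

pred = X[1500:] @ W_out
print("test RMSE:", np.sqrt(np.mean((pred - y[1500:]) ** 2)))
```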