

We combined a machine learning algorithm with knowledge gleaned from hundreds of biological experiments to develop a technique that allows biomedical researchers to figure out the functions of transcription factors, the proteins that turn genes on and off in cells. This knowledge could make it easier to develop drugs for a wide range of diseases.

Early in the COVID-19 pandemic, scientists who sequenced the RNA of cells in the lungs and intestines found that only a small subset of cells in these organs was vulnerable to infection by the SARS-CoV-2 virus. That allowed researchers to focus on blocking the virus’s ability to enter these cells. Our technique could make it easier for researchers to find this kind of information.

The biological knowledge we work with comes from this kind of RNA sequencing, which gives researchers a snapshot of the hundreds of thousands of RNA molecules in a cell as they are being translated into proteins. A widely praised machine learning tool, the Seurat analysis platform, has helped researchers around the world discover new cell populations in healthy and diseased organs. It processes data from single-cell RNA sequencing without any prior information about how genes function or relate to each other.
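Seurat itself is an R package, but the same kind of unsupervised workflow can be sketched in Python with the scanpy library. The dataset, parameter values, and clustering settings below are illustrative assumptions, not the pipeline used in the work described above:

```python
# Illustrative sketch only: an unsupervised single-cell RNA-seq workflow in Python
# (scanpy), analogous in spirit to the Seurat pipeline mentioned above.
import scanpy as sc

# Example dataset bundled with scanpy (~3k peripheral blood cells); any
# cells-by-genes count matrix would work the same way.
adata = sc.datasets.pbmc3k()

# Standard preprocessing: normalize counts per cell, log-transform, and keep
# the most variable genes. No gene-function annotation is used anywhere.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]

# Dimensionality reduction, neighborhood graph, then unsupervised clustering:
# the resulting clusters are candidate cell populations discovered from the data alone.
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata)
sc.tl.umap(adata)

print(adata.obs["leiden"].value_counts())  # number of cells per discovered cluster
```

The point of a pipeline like this is that nothing in it knows what any gene does; the cell populations emerge from the expression data alone, which is exactly the gap that prior biological knowledge about transcription factors can help fill.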

A new study shows that artificial intelligence networks based on human brain connectivity can perform cognitive tasks efficiently.

By examining MRI data from a large Open Science repository, researchers reconstructed a brain connectivity pattern and applied it to an artificial neural network (ANN). An ANN is a computing system consisting of multiple input and output units, much like the biological brain. A team of researchers from The Neuro (Montreal Neurological Institute-Hospital) and the Quebec Artificial Intelligence Institute trained the ANN to perform a cognitive memory task and observed how it worked to complete the assignment.

This approach is unique in two ways. First, previous work on brain connectivity, also known as connectomics, focused on describing brain organization without examining how it actually performs computations. Second, traditional ANNs have arbitrary structures that do not reflect how real brain networks are organized. By integrating brain connectomics into the construction of ANN architectures, the researchers hoped both to learn how the brain’s wiring supports specific cognitive skills and to derive novel design principles for artificial networks.
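As a rough illustration of what wiring an empirical connectome into an artificial network can look like, here is a minimal sketch assuming a reservoir-computing-style setup: the recurrent weights are fixed to a connectivity matrix (a random stand-in here, rather than MRI-derived data), and only a linear readout is trained on a simple delayed-recall memory task. This is not the authors' code; the network size, task, and training details are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 100
# Stand-in for an MRI-derived structural connectivity matrix: sparse, symmetric, non-negative.
connectome = rng.random((n_nodes, n_nodes)) * (rng.random((n_nodes, n_nodes)) < 0.1)
connectome = (connectome + connectome.T) / 2

# Scale the fixed recurrent weights for stability (spectral radius below 1).
W = 0.9 * connectome / np.max(np.abs(np.linalg.eigvals(connectome)))
W_in = rng.normal(0.0, 1.0, size=n_nodes)

def run_reservoir(u):
    """Drive the fixed, connectome-shaped network with input u; return node states over time."""
    x = np.zeros(n_nodes)
    states = np.zeros((len(u), n_nodes))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in * u_t)
        states[t] = x
    return states

# Working-memory task: reproduce the input signal from `delay` steps earlier.
T, delay = 2000, 5
u = rng.uniform(-1.0, 1.0, size=T)
target = np.roll(u, delay)

X = run_reservoir(u)

# Only the linear readout is trained (ridge regression); the "brain wiring" stays fixed.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_nodes), X.T @ target)

pred = X @ W_out
print("memory-task correlation:", np.corrcoef(pred[delay:], target[delay:])[0, 1])
```

The design choice that matters in a setup like this is that the brain-like wiring is never modified by training, so performance on the memory task reflects what that particular connectivity pattern can support.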

Dean’s appearance at TED comes at a time when critics, including current Google employees, are calling for greater scrutiny of big tech’s control over the world’s AI systems. One of those critics spoke right after Dean at TED: coder Xiaowei R. Wang, creative director of the indie tech magazine Logic, who argued for community-led innovation. “Within AI there is only a case for optimism if people and communities can make the case themselves, instead of people like Jeff Dean and companies like Google making the case for them, while shutting down the communities [that] AI for Good is supposed to help,” she said. (AI for Good is a movement that seeks to orient machine learning toward solving the world’s most pressing social equity problems.)

TED curator Chris Anderson and Greg Brockman, co-founder of the AI research lab OpenAI, also wrestled with the unintended consequences of powerful machine learning systems at the end of the conference. Brockman described a scenario in which humans serve as moral guides to AI. “We can teach the system the values we want, as we would a child,” he said. “It’s an important but subtle point. I think you do need the system to learn a model of the world. If you’re teaching a child, they need to learn what good and bad is.”

There is also room for some gatekeeping once the machines have been taught, Anderson suggested. “One of the key issues to keeping this thing on track is to very carefully pick the people who look at the output of these unsupervised learning systems,” he said.