Imperial researchers have found that variability between brain cells might speed up learning and improve performance, both in the brain and in future artificial intelligence (AI).

The new study found that when the electrical properties of individual cells were tweaked in simulations of brain networks, the networks learned faster than simulations in which every cell was identical.

They also found that the networks needed fewer of the tweaked cells to get the same results and that the method is less energy-intensive than models with identical cells.
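
To make the idea concrete, here is a minimal sketch (not the authors' code) of the kind of simulation described above: a small population of leaky integrate-and-fire neurons whose membrane time constants are either identical across the network or varied per cell. All parameter values are illustrative assumptions.

```python
# Minimal sketch: identical vs. per-cell "tweaked" membrane time constants
# in a population of leaky integrate-and-fire neurons. Illustrative only.
import numpy as np

def simulate_lif(taus, input_current, dt=1e-3, v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron per entry in `taus`."""
    n_neurons, n_steps = len(taus), input_current.shape[1]
    v = np.zeros(n_neurons)
    spikes = np.zeros((n_neurons, n_steps))
    for t in range(n_steps):
        # dv/dt = (-v + I) / tau, integrated with forward Euler
        v += dt * (-v + input_current[:, t]) / taus
        fired = v >= v_thresh
        spikes[fired, t] = 1.0
        v[fired] = v_reset
    return spikes

rng = np.random.default_rng(0)
n, steps = 100, 500
drive = rng.uniform(0.0, 2.5, size=(n, steps))

homogeneous = np.full(n, 20e-3)                            # every cell: tau = 20 ms
heterogeneous = rng.gamma(shape=3, scale=20e-3 / 3, size=n)  # varied per cell

for name, taus in [("identical", homogeneous), ("variable", heterogeneous)]:
    s = simulate_lif(taus, drive)
    print(f"{name:>10} cells: {s.sum():.0f} spikes, "
          f"tau range {taus.min()*1e3:.1f}-{taus.max()*1e3:.1f} ms")
```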

A slew of new studies now shows that the area of the brain responsible for initiating movement — the primary motor cortex — has as many as 116 different types of cells that work together to make that movement happen.

The 17 studies, appearing online Oct. 6 in the journal Nature, are the result of five years of work by a huge consortium of researchers supported by the National Institutes of Health’s Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative to identify the myriad cell types in one portion of the brain. It is the first step in a long-term project to generate an atlas of the entire brain, to help understand how the neural networks in our head control our body and mind and how they are disrupted in mental and physical disorders.

“If you think of the brain as an extremely complex machine, how could we understand it without first breaking it down and knowing the parts?” asked cellular neuroscientist Helen Bateup, a University of California, Berkeley, associate professor of molecular and cell biology and co-author of the flagship paper that synthesizes the results of the other papers. “The first page of any manual of how the brain works should read: Here are all the cellular components, this is how many of them there are, here is where they are located and who they connect to.”

The Sustainable Development Goals (SDGs) include a call for action to halve the annual rate of road deaths globally and ensure access to safe, affordable, and sustainable transport for everyone by 2030.

According to the newly launched initiative, faster progress on AI is vital to make this happen, especially in low- and middle-income countries, where the most lives are lost on the roads each year.

According to the World Health Organization (WHO), approximately 1.3 million people die annually as a result of road traffic crashes. Between 20 and 50 million more suffer non-fatal injuries, with many incurring a disability.

Join us on Patreon!
https://www.patreon.com/MichaelLustgartenPhD

Papers referenced in the video:
Dietary Intakes of Eggs and Cholesterol in Relation to All-Cause and Heart Disease Mortality: A Prospective Cohort Study.
https://pubmed.ncbi.nlm.nih.gov/32400247/

Associations of Dietary Cholesterol or Egg Consumption With Incident Cardiovascular Disease and Mortality.
https://pubmed.ncbi.nlm.nih.gov/30874756/

Live human brain tissue — generously donated by brain surgery patients with epilepsy or tumors — is yielding incredible #neuroscience insights. A study on cells…


As part of an international effort to map cell types in the brain, scientists identified increased diversity of neurons in regions of the human brain that expanded during our evolution.

New research by a City College of New York team has uncovered a novel way to combine two different states of matter. In one of the first demonstrations of its kind, topological photons—light—have been combined with lattice vibrations, also known as phonons, to manipulate their propagation in a robust and controllable way.

The study utilized topological photonics, an emergent direction in photonics that leverages fundamental ideas from the mathematical field of topology about conserved quantities—topological invariants—that remain constant when a geometric object is continuously deformed. One of the simplest examples of such an invariant is the number of holes, which, for instance, makes a donut and a mug equivalent from the topological point of view. These topological properties endow photons with helicity: the photons spin as they propagate, leading to unique and unexpected characteristics, such as robustness to defects and unidirectional propagation along interfaces between topologically distinct materials. Thanks to interactions with vibrations in crystals, these helical photons can then be channeled along with those vibrations.
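
As a concrete version of the donut-and-mug example, one standard invariant of a closed surface is its genus g, the number of holes. The Gauss-Bonnet theorem (stated here as mathematical background, not taken from the article) ties the genus to total curvature, which is why continuous deformations cannot change it:

```latex
% Gauss-Bonnet theorem for a closed orientable surface S:
% the integrated Gaussian curvature K depends only on the genus g
% (the number of holes), not on how the surface is bent or stretched.
% A donut and a mug both have g = 1, so the integral is 0 for both.
\int_S K \, dA = 2\pi\,\chi(S) = 2\pi\,(2 - 2g)
```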

The implications of this work are broad, in particular allowing researchers to advance Raman spectroscopy, which is used to determine the vibrational modes of molecules. The research also holds promise for vibrational spectroscopy—also known as infrared spectroscopy—which measures the interaction of infrared radiation with matter through absorption, emission, or reflection. This can then be utilized to study, identify, and characterize molecules.

Google AI Introduces FLAN: An Instruction-Tuned Generalizable Language (NLP) Model To Perform Zero-Shot Tasks


To generate meaningful text, a machine learning model needs a great deal of knowledge about the world and the ability to abstract it. While language models trained to accomplish this are becoming increasingly capable of acquiring such knowledge automatically as they grow, it is unclear how to unlock that knowledge and apply it to specific real-world tasks.

Fine-tuning is one well-established method for doing so. It involves training a pretrained model such as BERT or T5 on a labeled dataset to adapt it to a downstream task. However, it requires a large number of training examples, along with a stored copy of the model weights for each downstream task, which is not always feasible, especially for large models.
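
As an illustration of that recipe (a hedged sketch, not Google's code), the following adapts a pretrained T5 checkpoint to a single downstream task using the Hugging Face transformers library; the toy dataset and hyperparameters here are assumptions.

```python
# Minimal fine-tuning sketch: adapt a pretrained T5 model to one labeled
# downstream task. A real run needs many examples and a full copy of the
# weights per task, which is the cost noted above.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

# One labeled (input, target) pair; purely illustrative.
examples = [("sst2 sentence: the film is a delight", "positive")]

model.train()
for source, target in examples:
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # cross-entropy on target tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```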

A recent Google study looks into a simple technique known as instruction fine-tuning, or instruction tuning. This entails fine-tuning a model not for one specific task but to make it more amenable to performing NLP (natural language processing) tasks in general.
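
A rough sketch of the instruction-tuning idea follows; the template wording and task names are illustrative assumptions, not FLAN's actual templates. The point is that many different tasks are rephrased as natural-language instructions, so one model fine-tuned on the mixture can follow unseen instructions zero-shot.

```python
# Sketch of instruction tuning: rephrase many NLP tasks as natural-language
# instructions and build one mixed training set. Templates are illustrative.
templates = {
    "nli": "Premise: {premise}\nHypothesis: {hypothesis}\n"
           "Does the premise entail the hypothesis? yes, no, or maybe?",
    "sentiment": "Review: {text}\nIs this review positive or negative?",
    "translation": "Translate the following sentence to French: {text}",
}

def to_instruction(task, **fields):
    """Render one raw example as an instruction-following prompt."""
    return templates[task].format(**fields)

# A single training mixture drawn from many task types:
mixture = [
    (to_instruction("sentiment", text="a charming, funny film"), "positive"),
    (to_instruction("translation", text="The weather is nice."),
     "Il fait beau."),
]
for prompt, target in mixture:
    print(prompt, "->", target)
```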

Circa 2020


Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.

“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.”

Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance, spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.
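
For a feel of the "survival of the fittest" loop described above, here is a toy sketch in that spirit (not the paper's AutoML-Zero system): candidate programs are short sequences of arithmetic operations, and each generation the fittest survive and spawn mutated copies, with no human guidance on what the program should look like.

```python
# Toy evolutionary search: programs are lists of ops; selection keeps the
# fittest, mutation swaps ops at random. Target function is an assumption.
import random

OPS = {"add1": lambda x: x + 1, "double": lambda x: 2 * x, "square": lambda x: x * x}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def fitness(program):
    # Target behavior to discover: f(x) = (2x)^2; lower error means fitter.
    return -sum(abs(run(program, x) - (2 * x) ** 2) for x in range(1, 6))

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random.choice(list(OPS))
    return child

population = [[random.choice(list(OPS)) for _ in range(2)] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)  # rank by fitness
    survivors = population[:5]                  # selection: keep the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print("best program:", best, "error:", -fitness(best))
```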