
Brain-age (BA) estimates based on deep learning are increasingly used as a neuroimaging biomarker for brain health; however, the underlying neural features have remained unclear. We combined ensembles of convolutional neural networks with Layer-wise Relevance Propagation (LRP) to detect which brain features contribute to BA. Trained on magnetic resonance imaging (MRI) data from a population-based study (n = 2,637, 18–82 years), our models estimated age accurately from single and multiple modalities, and from regionally restricted as well as whole-brain images (mean absolute errors 3.37–3.86 years). We find that BA estimates capture ageing at both small and large scales, revealing gross enlargements of ventricles and subarachnoid spaces, as well as white matter lesions and atrophies that appear throughout the brain. Divergence from expected ageing reflected cardiovascular risk factors, and accelerated ageing was more pronounced in the frontal lobe. Applying LRP, our study demonstrates how superior deep learning models detect brain ageing in healthy and at-risk individuals throughout adulthood.
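To make the mechanism concrete, here is a minimal, illustrative sketch of the LRP-epsilon rule on a tiny fully connected network in NumPy. It is not the authors' pipeline (their models are large convolutional ensembles trained on 3-D MRI volumes); the function name `lrp_epsilon` and the toy dimensions are invented for illustration. The idea is the same, though: the scalar age prediction is redistributed backwards, layer by layer, so that each input "voxel" receives a relevance score.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute relevance from a linear layer's output back to its input
    using the LRP-epsilon rule: R_i = a_i * sum_j W_ji * R_j / (z_j + eps*sign(z_j))."""
    z = W @ a + b                               # pre-activations of this layer
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabiliser keeps the divisions well-behaved
    s = R_out / z                               # relevance per unit of activation
    return a * (W.T @ s)                        # share of relevance for each input unit

# Toy two-layer regression "network"; in the paper the models are far larger CNNs.
rng = np.random.default_rng(0)
x = rng.normal(size=8)                          # stand-in for a handful of voxel intensities
W1, b1 = rng.normal(size=(5, 8)), np.zeros(5)
W2, b2 = rng.normal(size=(1, 5)), np.zeros(1)

h = np.maximum(W1 @ x + b1, 0.0)                # hidden ReLU layer
y = W2 @ h + b2                                 # scalar output, e.g. predicted age

R_hidden = lrp_epsilon(h, W2, b2, y)            # all relevance starts on the prediction
R_input = lrp_epsilon(x, W1, b1, R_hidden)      # ...and is pushed back to the inputs

print("relevance per input:", np.round(R_input, 3))
print("conservation check:", R_input.sum(), "vs prediction", float(y[0]))
```

The conservation check illustrates the core property of LRP: the relevance scores at the input sum (approximately) to the network's output, so a relevance map over voxels explains where the age estimate comes from.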

Finding definitive evidence for past primitive life in ancient Mars rock and soil samples may be well-nigh impossible, renowned geologist and astrobiologist Frances Westall told me at the recent Europlanet Science Congress (EPSC) in Granada, Spain. And she should know: Westall still lays claim to the discovery of Earth's oldest known microfossils, dating back some 3.45 billion years.

But it’s hard enough to identify primitive microfossils in Earth’s oldest rocks, much less in samples collected robotically on Mars. If we have a hard time identifying past life on Earth, what hope do we have of doing it with Mars samples?

“I think it’s going to be really difficult,” said Westall, a researcher at France’s Center for Molecular Biophysics in Orleans. “I can tell you, there’s going to be a lot of arguments about it.”

Such robotic schools could be tasked with locating coral reefs and recording data on them, helping researchers study the reefs’ health over time. Just as living fish in a school might engage in different behaviours simultaneously — some mating, some caring for young, others finding food — yet suddenly move as one when a predator approaches, robotic fish would have to perform individual tasks while communicating with one another about when it’s time to do something different.

“The majority of what my lab really looks at is the coordination techniques — what kinds of algorithms have evolved in nature to make systems work well together?” she says.
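A classic example of the kind of nature-inspired coordination rule such work draws on is the "boids" flocking model, in which each agent follows only three local rules (cohesion, alignment, separation) and yet the group moves as one. The sketch below is a generic illustration of that idea, not code from the lab described here; the weights, neighbourhood radius, and function name `boids_step` are arbitrary choices.

```python
import numpy as np

def boids_step(pos, vel, dt=0.1, radius=1.0,
               w_coh=0.05, w_align=0.1, w_sep=0.2):
    """One update of a minimal flocking ("boids") model: each agent steers toward
    nearby agents (cohesion), matches their heading (alignment) and avoids
    crowding (separation). All weights and the neighbourhood radius are illustrative."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = pos - pos[i]                                # offsets to every other agent
        dist = np.linalg.norm(d, axis=1)
        nbrs = (dist > 0) & (dist < radius)             # neighbours inside the radius
        if nbrs.any():
            coh = d[nbrs].mean(axis=0)                  # pull toward the local centre
            align = vel[nbrs].mean(axis=0) - vel[i]     # match neighbours' velocity
            sep = -(d[nbrs] / dist[nbrs][:, None] ** 2).sum(axis=0)  # push apart when close
            new_vel[i] += w_coh * coh + w_align * align + w_sep * sep
        speed = np.linalg.norm(new_vel[i])
        if speed > 1.0:
            new_vel[i] /= speed                         # simple speed cap keeps things stable
    return pos + dt * new_vel, new_vel

# A small random "school": run a few hundred steps and watch the headings align.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 3.0, size=(20, 2))
vel = rng.normal(0.0, 0.1, size=(20, 2))
for _ in range(300):
    pos, vel = boids_step(pos, vel)
print("velocity spread across agents:", np.std(vel, axis=0).round(3))
```

Each agent uses only local information, yet the group converges on a shared heading — the same decentralised logic a robotic school would need to switch collectively between tasks.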

Many roboticists are looking to biology for inspiration in robot design, particularly in the area of locomotion. Although big industrial robots in vehicle factories, for instance, remain anchored in place, other robots will be more useful if they can move through the world, performing different tasks and coordinating their behaviour.

Training the large neural networks behind many modern AI tools requires real computational might: OpenAI’s most advanced language model, GPT-3, for example, required an astounding million billion billion operations to train and cost about US $5 million in compute time. Engineers think they have figured out a way to ease that burden by using a different way of representing numbers.

Back in 2017, John Gustafson, then jointly appointed at the A*STAR Computational Resources Centre and the National University of Singapore, and Isaac Yonemoto, then at Interplanetary Robot and Electric Brain Co., developed a new way of representing numbers. These numbers, called posits, were proposed as an improvement over the standard floating-point arithmetic used in processors today.

Now, a team of researchers at the Complutense University of Madrid has developed the first processor core implementing the posit standard in hardware and shown that, bit for bit, the accuracy of a basic computational task increased by up to four orders of magnitude compared with computing using standard floating-point numbers. They presented their results at last week’s IEEE Symposium on Computer Arithmetic.
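For a sense of how posits differ from ordinary floats, the sketch below decodes 8-bit posit bit patterns in software, using the two exponent bits fixed by the 2022 posit standard. It is a simplified illustration of the number format only, not the Madrid group's hardware core; the function name `decode_posit` and the choice of examples are ours.

```python
def decode_posit(bits: int, n: int = 8, es: int = 2) -> float:
    """Decode an n-bit posit bit pattern into a Python float.
    Layout of the posit format: sign | regime | exponent (es bits) | fraction."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")                     # NaR, "not a real"
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask                   # negative posits: decode the two's complement
    body = bits & ((1 << (n - 1)) - 1)          # the n-1 bits after the sign
    first = (body >> (n - 2)) & 1               # regime is a run of identical bits...
    run = 0
    for i in range(n - 2, -1, -1):
        if (body >> i) & 1 == first:
            run += 1
        else:
            break
    k = run - 1 if first == 1 else -run         # ...whose length sets the power of 2**(2**es)
    rest_len = max(n - 1 - run - 1, 0)          # bits left after the regime and its terminator
    rest = body & ((1 << rest_len) - 1)
    e_bits = min(es, rest_len)                  # exponent bits (missing ones count as 0)
    exp = (rest >> (rest_len - e_bits)) << (es - e_bits) if e_bits else 0
    f_len = rest_len - e_bits                   # whatever remains is the fraction 1.fff...
    frac = (rest & ((1 << f_len) - 1)) / (1 << f_len) if f_len else 0.0
    return sign * 2.0 ** (k * (1 << es) + exp) * (1.0 + frac)

# A few 8-bit examples: values near 1 get fine spacing, the extremes get coarse spacing.
for p in (0b01000000, 0b01000001, 0b01001000, 0b00000001, 0b01111111, 0b11000000):
    print(f"{p:08b} -> {decode_posit(p)}")
```

The variable-length regime field is the key trick: it spends bits on range only when the value is very large or very small, leaving more bits for precision around 1 — the region where neural-network weights and activations tend to live.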

Neural networks are learning algorithms that approximate the solution to a task by training with available data. However, it is usually unclear how exactly they accomplish this. Two young Basel physicists have now derived mathematical expressions that allow one to calculate the optimal solution without training a network. Their results not only give insight into how those learning algorithms work, but could also help to detect unknown phase transitions in physical systems in the future.

Neural networks are modelled on the working principles of the brain. Such computer algorithms learn to solve problems through repeated training and can, for example, distinguish objects or process spoken language.

For several years now, physicists have been trying to use neural networks to detect phase transitions as well. Phase transitions are familiar to us from everyday experience, for instance when water freezes to ice, but they also occur in more complex form between different phases of magnetic materials or quantum systems, where they are often difficult to detect.
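The flavour of the result can be illustrated with a toy calculation: if the probability distributions governing measurements in each phase are known, the indicator an ideally trained network would converge to can be written down directly, with no training at all. The sketch below shows this for two overlapping Gaussian "phases"; it is a generic Bayes-optimal-classifier example under those assumptions, not the Basel physicists' actual derivation, and all names and parameters in it are invented for illustration.

```python
import numpy as np

# Toy setup: one-dimensional "measurements" x drawn from two phases with known
# distributions p0(x) and p1(x) (here two Gaussians; purely illustrative).
def p0(x): return np.exp(-0.5 * (x - 0.0) ** 2) / np.sqrt(2 * np.pi)
def p1(x): return np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2 * np.pi)

def optimal_indicator(x):
    """Probability that x came from phase 1, assuming equal priors.
    This is the function a perfectly trained classifier would approximate,
    but here it is computed in closed form from the distributions themselves."""
    return p1(x) / (p0(x) + p1(x))

# Compare against a brute-force estimate built from samples, the way training would.
rng = np.random.default_rng(0)
samples0 = rng.normal(0.0, 1.0, 100_000)
samples1 = rng.normal(2.0, 1.0, 100_000)
bins = np.linspace(-4.0, 6.0, 101)
h0, _ = np.histogram(samples0, bins=bins, density=True)
h1, _ = np.histogram(samples1, bins=bins, density=True)
centres = 0.5 * (bins[:-1] + bins[1:])
empirical = h1 / (h0 + h1 + 1e-12)

print("analytic  indicator at x=1:", optimal_indicator(1.0))   # exactly 0.5 by symmetry
print("empirical indicator at x≈1:", empirical[np.argmin(np.abs(centres - 1.0))])
```

The point of the comparison is that the closed-form expression and the sample-based estimate agree — knowing what the optimal solution looks like tells you what a trained network is actually computing, and where its decision boundary (here the crossover at x = 1) must sit.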

Robotics and wearable devices might soon get a little smarter with the addition of a stretchy, wearable synaptic transistor developed by Penn State engineers. The device works like neurons in the brain, sending signals to some cells and inhibiting others in order to strengthen or weaken the device’s memories.

Led by Cunjiang Yu, Dorothy Quiggle Career Development Associate Professor of Engineering Science and Mechanics and associate professor of biomedical engineering and of materials science and engineering, the team designed the synaptic transistor to be integrated into robots or wearables and to use artificial intelligence to optimize functions. The details were published Sept. 29 in Nature Electronics.

“Mirroring the human brain, robots and wearable devices using the synaptic transistor can use its artificial intelligence to ‘learn’ and adapt their behaviors,” Yu said. “For example, if we burn our hand on a stove, it hurts, and we know to avoid touching it next time. The same results will be possible for devices that use the synaptic transistor, as the artificial intelligence is able to ‘learn’ and adapt to its environment.”