
This prompted a pair of neuroscientists to see if they could design an AI that could learn from few data points by borrowing principles from how we think the brain solves this problem. In a paper in Frontiers in Computational Neuroscience, they explained that the approach significantly boosts AI’s ability to learn new visual concepts from few examples.

“Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” Maximilian Riesenhuber, from Georgetown University Medical Center, said in a press release. “We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”

Several decades of neuroscience research suggest that the brain’s ability to learn so quickly depends on its ability to use prior knowledge to understand new concepts based on little data. When it comes to visual understanding, this can rely on similarities of shape, structure, or color, but the brain can also leverage abstract visual concepts thought to be encoded in a brain region called the anterior temporal lobe (ATL).
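The paper's actual model isn't shown here, but the core intuition — reuse features learned from prior experience so that a handful of labeled examples suffices for a new concept — can be sketched with a nearest-prototype classifier over fixed embeddings. Everything below (the random-projection "encoder", the class names, the data) is an illustrative stand-in, not the researchers' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Stand-in for prior learning: a fixed random projection plays the
    # role of a pretrained feature encoder shared across tasks.
    W = np.random.default_rng(42).normal(size=(16, 8))
    return x @ W.T

# Two "new concepts", three labeled examples each (a few-shot support set).
support = {
    "cat": rng.normal(loc=3.0, size=(3, 8)),
    "dog": rng.normal(loc=-3.0, size=(3, 8)),
}

# Prototype = mean embedding of each concept's few examples.
prototypes = {c: embed(xs).mean(axis=0) for c, xs in support.items()}

def classify(x):
    # Assign a query to the nearest class prototype (Euclidean distance).
    z = embed(x)
    return min(prototypes, key=lambda c: np.linalg.norm(z - prototypes[c]))

query = rng.normal(loc=3.0, size=(8,))  # drawn near the "cat" cluster
print(classify(query))
```

Because the encoder is fixed and only class means are computed, three examples per concept are enough — the heavy lifting was done by the "prior learning" the embedding represents.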

Weird, right?

The team’s critical insight was to construct a “viral language” of sorts, based purely on its genetic sequences. This language, if given sufficient examples, can then be analyzed using NLP techniques to predict how changes to its genome alter its interaction with our immune system. That is, using artificial language techniques, it may be possible to hunt down key areas in a viral genome that, when mutated, allow it to escape roaming antibodies.

It’s a seriously kooky idea. Yet when tested on some of our greatest viral foes, like influenza (the seasonal flu), HIV, and SARS-CoV-2, the algorithm was able to discern critical mutations that “transform” each virus just enough to escape the grasp of our immune surveillance system.
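As a hedged illustration of the scoring idea — a mutation is an escape candidate when it keeps the sequence "grammatical" (still probable under the language model) while shifting its "meaning" (its embedding) — here is a toy version in which a hand-rolled bigram model and a letter-frequency vector stand in for the trained network. The sequences and scores are entirely made up; this shows the shape of the search, not the paper's actual metric.

```python
from collections import Counter
import numpy as np

# Toy "viral corpus": short amino-acid strings standing in for real
# protein sequences (purely illustrative).
corpus = ["MKTAY", "MKTAH", "MKSAY", "MKTGY", "MKTAY"]
ALPHABET = sorted(set("".join(corpus)))

# Bigram counts play the role of the trained language model.
bigrams = Counter((s[i], s[i + 1]) for s in corpus for i in range(len(s) - 1))

def grammaticality(seq):
    # Log-probability under the bigram model (add-one smoothing):
    # "does the mutant still look like a viable virus?"
    total = sum(bigrams.values())
    logp = 0.0
    for a, b in zip(seq, seq[1:]):
        logp += np.log((bigrams[(a, b)] + 1) / (total + len(ALPHABET) ** 2))
    return logp

def embedding(seq):
    # Crude sequence "meaning": a letter-frequency vector.
    return np.array([seq.count(a) for a in ALPHABET], dtype=float)

def escape_score(wildtype, mutant):
    # High score = still grammatical, but semantically changed --
    # the escape intuition described above, in toy form.
    semantic_change = np.linalg.norm(embedding(mutant) - embedding(wildtype))
    return grammaticality(mutant) + semantic_change

wt = "MKTAY"
for mut in ["MKTAH", "MKTAW"]:
    print(mut, round(escape_score(wt, mut), 3))
```

Ranking every single-site mutant by such a score is how a language model can flag genome positions where a small change plausibly slips past antibodies.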

Moscow has revealed a plan to spend $2.4 million on a giant database containing information about every single city resident, including passport numbers, insurance policies, salaries, car registrations – and even their pets.

It will also include work and tax details, school grades, and data from their ‘Troika’ card – Moscow’s unified transport payment system, used on the metro, buses and trains.

The new proposal will undoubtedly increase fears about ever-growing surveillance in the Russian capital, where the number of facial recognition cameras has recently been increased.

The far side of the moon is poised to become our newest and best window on the hidden history of the cosmos. Over the course of the next decade, astronomers are planning to perform unprecedented observations of the early universe from that unique lunar perch using radio telescopes deployed on a new generation of orbiters and robotic rovers.

These instruments will study the universe’s initial half-billion years—the first few hundred million or so of which make up the so-called cosmic “dark ages,” when stars and galaxies had yet to form. Bereft of starlight, this era is invisible to optical observations. Radio telescopes, however, can tune in to long-wavelength, low-frequency radio emissions produced by the gigantic clouds of neutral hydrogen that then filled the universe. But these emissions are difficult, if not downright impossible, to detect from Earth because they are either blocked or distorted by our planet’s atmosphere or swamped by human-generated radio noise.
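The emission in question is the 21 cm line of neutral hydrogen, radiated at a rest frequency of about 1420 MHz and stretched by cosmic expansion by a factor of (1 + z). A quick back-of-envelope calculation (the dark-ages redshift range used here, roughly z ≈ 30–150, is an assumption for illustration) shows why these signals land at frequencies the ionosphere blocks or distorts:

```python
REST_FREQ_MHZ = 1420.4  # 21 cm hydrogen line rest frequency

def observed_freq_mhz(z):
    # Expansion stretches wavelength, so frequency falls as 1/(1+z).
    return REST_FREQ_MHZ / (1 + z)

for z in (30, 80, 150):
    print(f"z = {z:>3}: {observed_freq_mhz(z):6.1f} MHz")
```

Signals from the deepest dark ages arrive at tens of MHz down toward 10 MHz and below, a band where Earth's ionosphere and human radio traffic make ground-based detection somewhere between difficult and impossible.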

Scientists have dreamed for decades of such studies that could take place on the moon’s far side, where they would be shielded from earthly transmissions and untroubled by any significant atmosphere to impede cosmic views. Now, with multiple space agencies pursuing lunar missions, those dreams are set to become reality.


Recent observations at both quantum and cosmological scales are casting serious doubt on our current models. At the quantum scale, the latest measurement of the proton radius in electronic hydrogen yielded a much smaller radius than the one predicted by the Standard Model of particle physics, which is now off by 4%. At the cosmological scale, an overwhelming number of observations of black holes and galactic formation point toward a radically different cosmological model. Black holes have been shown to be much older than their host galaxies, galactic formation is much younger than our models estimate, and there is evidence of at least 64 black holes aligned along their axes of rotation, suggesting a large-scale spatial coherence in angular momentum that our current models cannot predict. In such a scenario, it should come as no surprise that the most promising alternative for unifying quantum theory and relativity, and thus connecting the very small to the very big, is the idea that the universe is actually a neural network, and that a theory of everything could therefore be based on it.


As explained in Targemann’s interview with Vanchurin on Futurism, Vanchurin’s work proposes that we live in a huge neural network that governs everything around us.

“It’s a possibility that the entire universe on its most fundamental level is a neural network… With this respect it could be considered as a proposal for the theory of everything, and as such it should be easy to prove it wrong,” Vanchurin said.

The idea was born while he was studying deep machine learning. He wrote the book “Towards a Theory of Machine Learning” to apply the methods of statistical mechanics to the behavior of neural networks, and found that in certain limits the learning (or training) dynamics of neural networks closely resemble quantum dynamics. So he decided to explore the idea that the physical world is itself a neural network.

Artificial intelligence and machine learning are already an integral part of our everyday lives online. For example, search engines such as Google use intelligent ranking algorithms, and video streaming services such as Netflix use machine learning to personalize movie recommendations.

As the demands for AI online continue to grow, so does the need to speed up AI performance and find ways to reduce its energy consumption.

Now a University of Washington-led team has come up with a system that could help: an optical computing core prototype that uses phase-change material. This system is fast and energy efficient, and capable of accelerating the computations used in AI and machine learning. The technology is also scalable and directly applicable to cloud computing.
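The article doesn't spell out which operations such a core accelerates, but the dominant workload in neural-network inference is dense matrix–vector multiplication, exactly the kind of multiply–accumulate arithmetic that specialized hardware targets. As a rough illustration of the scale involved (the layer sizes below are made up):

```python
import numpy as np

# One fully connected layer: 1024 inputs -> 4096 outputs.
n_in, n_out = 1024, 4096
W = np.ones((n_out, n_in))
x = np.ones(n_in)

y = W @ x  # the multiply-accumulate core of inference

# Each output element needs n_in multiplies and n_in - 1 adds,
# so one pass through one layer costs n_out * n_in MACs.
macs = n_out * n_in
print(f"{macs:,} multiply-accumulates for a single layer pass")
```

Millions of such operations per layer, repeated across dozens of layers for every query, are why shaving energy per multiply-accumulate matters so much for cloud-scale AI.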