‘Machine Scientists’ Distill the Laws of Physics From Raw Data

The latest “machine scientist” algorithms can take in data on dark matter, dividing cells, turbulence, and other situations too complicated for humans to understand and provide an equation capturing the essence of what’s going on.


Despite rediscovering Kepler’s third law and other textbook classics, BACON remained something of a curiosity in an era of limited computing power. Researchers still had to analyze most data sets by hand, or eventually with Excel-like software that found the best fit for a simple data set when given a specific class of equation. The notion that an algorithm could find the correct model for describing any data set lay dormant until 2009, when Lipson and Michael Schmidt, roboticists then at Cornell University, developed an algorithm called Eureqa.

Their main goal had been to build a machine that could boil down expansive data sets with column after column of variables to an equation involving the few variables that actually matter. “The equation might end up having four variables, but you don’t know in advance which ones,” Lipson said. “You throw at it everything and the kitchen sink. Maybe the weather is important. Maybe the number of dentists per square mile is important.”

One persistent hurdle to wrangling numerous variables has been finding an efficient way to guess new equations over and over. Researchers say you also need the flexibility to try out (and recover from) potential dead ends. When the algorithm can jump from a line to a parabola, or add a sinusoidal ripple, its ability to hit as many data points as possible might get worse before it gets better. To overcome this and other challenges, in 1992 the computer scientist John Koza proposed “genetic algorithms,” which introduce random “mutations” into equations and test the mutant equations against the data. Over many trials, initially useless features either evolve potent functionality or wither away.
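The mutate-and-select loop Koza described can be sketched in a few dozen lines. The toy below is a minimal symbolic-regression run, not any published machine-scientist implementation: random expression trees over one variable are mutated, scored against data drawn from a hidden target equation, and the fittest survivors seed the next generation. All names and parameters here are illustrative.

```python
import random

random.seed(1)

# Expressions are nested tuples: ('+', left, right), a variable 'x',
# or a numeric constant. Mutation swaps in fresh random subtrees.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.uniform(-2, 2)])
    return (random.choice(list(OPS)),
            random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree):
    # Replace a randomly chosen subtree with a fresh random one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree()
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def fitness(tree, data):
    # Lower is better: squared error against the observations.
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

# Hidden target relationship: y = x**2 + x.
data = [(x, x * x + x) for x in [-2, -1, 0, 1, 2]]

population = [random_tree() for _ in range(200)]
for _ in range(60):
    population.sort(key=lambda t: fitness(t, data))
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(150)]

best = min(population, key=lambda t: fitness(t, data))
```

In this sketch the "mutations" are blunt subtree replacements; real systems like Eureqa also recombine subtrees between candidates and penalize equation length so that useless features wither away rather than accumulate.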

Meta wants to improve its AI

Would start with scanning and reverse engineering brains of rats, crows, pigs, chimps, and end on the human brain. Aim for completion by 12/31/2025. Set up teams to run brain scans 24/7/365 if we need to, and partner w/ every major neuroscience lab in the world.


If artificial intelligence is intended to resemble a brain, with networks of artificial neurons substituting for real cells, then what would happen if you compared the activities in deep learning algorithms to those in a human brain? Last week, researchers from Meta AI announced that they would be partnering with neuroimaging center Neurospin (CEA) and INRIA to try to do just that.

Through this collaboration, they plan to analyze human brain activity alongside deep learning algorithms trained on language or speech tasks, with both responding to the same written or spoken texts. In theory, this could help decode how both human brains and artificial ones find meaning in language.

By comparing scans of human brains while a person is actively reading, speaking, or listening with deep learning algorithms given the same set of words and sentences to decipher, researchers hope to find similarities as well as key structural and behavioral differences between brain biology and artificial networks. The research could help explain why humans process language much more efficiently than machines.
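One common way to make such a comparison concrete is representational similarity analysis: for the same set of stimuli, build a pairwise-dissimilarity matrix from each system's responses, then correlate the two matrices. The sketch below uses synthetic arrays as stand-ins for real recordings and is only an illustration of the general technique, not the collaboration's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli = 8

# Synthetic stand-ins: rows are stimuli (e.g., words), columns are
# response features (voxels for the brain, units for the model).
brain = rng.normal(size=(n_stimuli, 50))
model = brain @ rng.normal(size=(50, 30))  # toy "model" activations

def rdm(responses):
    # Representational dissimilarity matrix: 1 minus the correlation
    # between the response patterns for each pair of stimuli.
    return 1.0 - np.corrcoef(responses)

def rsa_score(a, b):
    # Correlate the upper triangles of the two RDMs.
    iu = np.triu_indices(len(a), k=1)
    return np.corrcoef(rdm(a)[iu], rdm(b)[iu])[0, 1]

score = rsa_score(brain, model)  # a scalar in [-1, 1]
```

A high score means the two systems treat the same pairs of stimuli as similar or different in the same way, even though their raw representations live in entirely different spaces.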

What is Quantum Machine Learning? Applications of Quantum Machine Learning

Quantum machine learning is a field of study that investigates the interaction of concepts from quantum computing with machine learning.

For example, we might wish to see whether quantum computers can reduce the time it takes to train or evaluate a machine learning model. Conversely, we can use machine learning approaches to discover quantum error-correcting codes, estimate the properties of quantum systems, and design novel quantum algorithms.

Interactive tools may help people become their own big data journalists

Interactive tools that allow online media users to navigate, save and customize graphs and charts may help them make better sense of the deluge of data that is available online, according to a team of researchers. These tools may help users identify personally relevant information, and check on misinformation, they added.

In a study advancing the concept of “news informatics,” which provides news in the form of data rather than stories, the researchers reported that sites offering certain interactive tools for visualizing and manipulating data, namely modality, message, and source interactivity tools, were more engaging than ones without them. Modality interactivity includes tools for interacting with the content, such as hyperlinks and zoom-ins, while message interactivity focuses on how users exchange messages with the site. Source interactivity allows users to tailor the information to their individual needs and contribute their own content to the site.

However, more was not always better, according to S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State. The user’s experience depended on how these tools were combined and how involved the user was in the topic, he said.

Consciousness is the collapse of the wave function

Consciousness defines our existence. It is, in a sense, all we really have, all we really are. The nature of consciousness has been pondered in many ways, in many cultures, for many years. But we still can’t quite fathom it.

Consciousness is, some say, all-encompassing, comprising reality itself, with the material world a mere illusion. Others say consciousness is the illusion, without any real sense of phenomenal experience or conscious control. According to this view we are, as T.H. Huxley bleakly put it, ‘merely helpless spectators, along for the ride’. Then there are those who see the brain as a computer. Brain functions have historically been compared to contemporary information technologies, from the ancient Greek idea of memory as a ‘seal ring’ in wax, to telegraph switching circuits, holograms, and computers. Neuroscientists, philosophers, and artificial intelligence (AI) proponents liken the brain to a complex computer of simple algorithmic neurons connected by variable-strength synapses. These processes may be suitable for non-conscious ‘auto-pilot’ functions, but they can’t account for consciousness.

Finally, there are those who take consciousness as fundamental, as connected somehow to the fine-scale structure and physics of the universe. This includes, for example, Roger Penrose’s view that consciousness is linked to the Objective Reduction process, the ‘collapse of the quantum wavefunction’, an activity on the edge between the quantum and classical realms. Some see such connections to fundamental physics as spiritual, as a connection to others and to the universe; others see them as proof that consciousness is a fundamental feature of reality, one that developed long before life itself.

Cutting the carbon footprint of supercomputing in scientific research

Simon Portegies Zwart, an astrophysicist at Leiden University in the Netherlands, says more efficient coding is vital for making computing greener. For the mathematician and physicist Loïc Lannelongue, meanwhile, the first step is for computer modellers to become more aware of their environmental impacts, which vary significantly depending on the energy mix of the country hosting the supercomputer. Lannelongue, who is based at the University of Cambridge, UK, has developed Green Algorithms, an online tool that enables researchers to estimate the carbon footprint of their computing projects.
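The kind of estimate such a tool produces roughly multiplies runtime, hardware power draw, the data centre's overhead factor (PUE), and the local grid's carbon intensity; the last factor is why the hosting country's energy mix matters so much. The sketch below is a simplified version of that idea with illustrative placeholder numbers, not the actual Green Algorithms model or its coefficients.

```python
def carbon_footprint_g(runtime_h, power_draw_w, pue, intensity_g_per_kwh):
    """Rough carbon estimate for a computing job, in grams of CO2e.

    energy (kWh) = runtime x power draw x data-centre overhead (PUE)
    carbon (g)   = energy x grid carbon intensity (gCO2e per kWh)
    """
    energy_kwh = runtime_h * (power_draw_w / 1000.0) * pue
    return energy_kwh * intensity_g_per_kwh

# Illustrative job: 100 h on a ~300 W accelerator, PUE 1.6,
# on a grid emitting 250 gCO2e/kWh -> 48 kWh -> 12 kg CO2e.
footprint = carbon_footprint_g(100, 300, 1.6, 250)
```

Running the same 48 kWh job on a low-carbon grid (say 50 gCO2e/kWh) would cut the footprint fivefold, which is the kind of comparison the tool is designed to surface.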

The basics of decentralized finance

Decentralized finance is built on blockchain technology, an immutable system that organizes data into blocks that are chained together and stored in hundreds of thousands of nodes or computers belonging to other members of the network.

These nodes communicate with one another (peer-to-peer), exchanging information to ensure that they’re all up-to-date and validating transactions, usually through proof-of-work or proof-of-stake. The first term is used when a member of the network is required to solve an arbitrary mathematical puzzle to add a block to the blockchain, while proof-of-stake is when users set aside some cryptocurrency as collateral, giving them a chance to be selected at random as a validator.
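The “arbitrary mathematical puzzle” in proof-of-work is typically a hash-preimage search: keep changing a nonce until the hash of the block data meets a target. A minimal sketch follows, with difficulty expressed as a count of leading zero hex digits for readability; real chains use a numeric target and a different block serialization.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 digest of (data + nonce) starts
    with `difficulty` zero hex digits. Illustrative only."""
    prefix = '0' * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f'{block_data}{nonce}'.encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

# Finding the nonce takes many hash attempts on average...
nonce = mine('block #1: alice pays bob 5', difficulty=4)
# ...but any node can verify the solution with a single hash.
digest = hashlib.sha256(f'block #1: alice pays bob 5{nonce}'.encode()).hexdigest()
```

That asymmetry, expensive to solve but cheap to check, is what lets untrusting peers agree on who earned the right to append the next block.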

To encourage people to help keep the system running, those selected as validators are rewarded with cryptocurrency for verifying transactions. This process is popularly known as mining. It has not only helped remove central entities like banks from the equation, but it has also allowed DeFi to open up opportunities, which in traditional finance are offered only to large organizations, for ordinary members of the network to make a profit. And by relying on network validators, DeFi has also been able to cut the costs that intermediaries charge, so that management fees don’t eat away a significant part of investors’ returns.

Deep Learning in Neuroimaging

Our brain constantly works to make sense of the world around us and to find patterns in it; even while we sleep, the brain is storing patterns. Making sense of the brain itself, however, has remained an intricate pursuit.

Christof Koch, a well-known neuroscientist, famously called the human brain the “most complex object in our observable universe” [1]. Aristotle, on the other hand, thought it was the heart that gave rise to consciousness and that the brain functioned, both practically and philosophically, as a cooling system [2]. Theories of the brain have evolved since then, generally shaped by knowledge gathered over centuries. Historically, to analyze the brain, we had to either extract it from deceased people or perform invasive surgery. Progress over the past decades has led to inventions that allow us to study the brain without invasive surgery. Examples of such imaging techniques include macroscopic approaches such as functional magnetic resonance imaging (fMRI) and approaches with high temporal resolution such as electroencephalography (EEG). Advances in treatments, such as closed-loop electrical stimulation systems, have enabled the treatment of disorders like epilepsy and, more recently, depression [3, 4]. Existing neuroimaging approaches can produce a considerable amount of data about a very complex organ that we still do not fully understand, which has led to an interest in non-linear modeling approaches and algorithms equipped to learn meaningful features.

This article provides an informal introduction to unique aspects of neuroimaging data and how we can leverage these aspects with deep learning algorithms. Specifically, this overview will first explain some common neuroimaging modalities more in-depth and then discuss applications of deep learning in conjunction with some of the unique characteristics of neuroimaging data. These unique characteristics tie into a broader movement in deep learning, namely that data understanding should be a goal in itself to maximize the impact of applied deep learning.
