Every piece of data that travels over the internet — from paragraphs in an email to 3D graphics in a virtual reality environment — can be altered by the noise it encounters along the way, such as electromagnetic interference from a microwave or Bluetooth device. The data are coded so that when they arrive at their destination, a decoding algorithm can undo the negative effects of that noise and retrieve the original data.
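To make that round trip concrete, here is a minimal sketch (in Python, and not the coding scheme any particular system uses) of the simplest error-correcting code, a three-fold repetition code: every bit is sent three times, and the decoder recovers the original data by majority vote even if noise flips one bit per triple.

```python
# Minimal sketch of error correction: a 3x repetition code.
# Each bit is transmitted three times; the decoder takes a majority
# vote, so any single bit flip within a triple is corrected.

def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    triples = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triples]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                     # noise flips one transmitted bit
assert decode(sent) == message   # the decoder still recovers the data
```

Real codes used on the internet are far more efficient than this, but the encode/noise/decode structure is the same.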

Since the 1950s, most error-correcting codes and decoding algorithms have been designed together. Each code had a structure that corresponded with a particular, highly complex decoding algorithm, which often required the use of dedicated hardware.

The research comes from a team at MIT.

For artificial intelligence to get any smarter, it needs first to be as intelligent as one of the simplest creatures in the animal kingdom: the sea slug.

A new study has found that a material (nickel oxide, a quantum material) can mimic the sea slug’s most essential intelligence features. The discovery is a step toward building hardware that could help make AI more efficient and reliable for technology ranging from self-driving cars and surgical robots to social media algorithms.

The study, publishing this week in the Proceedings of the National Academy of Sciences, was conducted by a team of researchers from Purdue University, Rutgers University, the University of Georgia and Argonne National Laboratory.

“The dream of predicting a protein shape just from its gene sequence is now a reality,” said Paul Adams, Associate Laboratory Director for Biosciences at Berkeley Lab. For Adams and other structural biologists who study proteins, predicting their shape offers a key to understanding their function and accelerating treatments for diseases like cancer and COVID-19.

The current approaches to accurately mapping that shape, however, usually rely on complex experiments at synchrotrons. But even these sophisticated processes have their limitations—the data aren’t always of sufficient quality to understand a protein at the atomic level. By applying powerful machine learning methods to the large library of known protein structures, it is now possible to predict a protein’s shape from its gene sequence.

Researchers in Berkeley Lab’s Molecular Biophysics & Integrated Bioimaging Division joined an effort led by the University of Washington to produce a computer software tool called RoseTTAFold. The algorithm simultaneously takes into account patterns, distances, and coordinates of amino acids. As these data inputs flow in, the tool assesses relationships within and between structures, eventually helping to build a very detailed picture of a protein’s structure.
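As a loose illustration of that multi-track idea (a toy NumPy sketch with invented update rules, not RoseTTAFold’s actual architecture), one can picture three coupled feature tracks (per-residue patterns, a pairwise map, and 3D coordinates) passing information back and forth as they are jointly refined:

```python
import numpy as np

# Toy sketch only: three feature "tracks" for a hypothetical protein of
# L residues, refined jointly so information flows between sequence
# patterns, pairwise relationships, and candidate 3D coordinates.
rng = np.random.default_rng(0)
L = 16
seq_feats = rng.normal(size=(L, 8))   # 1D track: per-residue patterns
pair_feats = rng.normal(size=(L, L))  # 2D track: pairwise map
coords = rng.normal(size=(L, 3))      # 3D track: candidate coordinates

for _ in range(10):
    # The 2D track is informed by the 1D track (outer-product style)...
    pair_feats = 0.9 * pair_feats + 0.1 * (seq_feats @ seq_feats.T)
    # ...coordinates are nudged toward residues weighted by the 2D map...
    weights = np.exp(pair_feats - pair_feats.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    coords = 0.9 * coords + 0.1 * (weights @ coords)
    # ...and the 1D track is updated from those same relationships.
    seq_feats = 0.9 * seq_feats + 0.1 * (weights @ seq_feats)

print(coords.round(2))  # a (toy) refined structure estimate
```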

Summary: A machine learning algorithm produced fewer decision-making errors than professionals in the clinical diagnosis of patients.

Source: University of Montreal.

It’s an old adage: there’s no harm in getting a second opinion. But what if that second opinion could be generated by a computer, using artificial intelligence? Would it come up with better treatment recommendations than your professional proposes?

A new type of artificial intelligence (AI) algorithm, developed by the Mayo Clinic and the Google Research Brain Team, can potentially pave the way toward more directed brain stimulation for the treatment of Parkinson’s disease and other movement-related disorders.

According to researchers, this algorithm can more accurately determine the interaction between different regions of the brain — data that will be key for improving the way brain stimulation devices are used in the real world for treating Parkinson’s.

“Our findings show that this new type of algorithm may help us understand which brain regions directly interact with one another, which in turn may help guide placement of electrodes for stimulating devices to treat network brain diseases,” Kai Miller, MD, PhD, a neurosurgeon at Mayo Clinic and the first author of the study, said in a press release.

Now DeepMind has set its sights on another grand challenge: bridging the worlds of deep learning and classical computer science to enable deep learning to do everything. If successful, this approach could revolutionize AI and software as we know them.

Petar Veličković is a senior research scientist at DeepMind. His entry into computer science came through algorithmic reasoning and algorithmic thinking using classical algorithms. Since he started doing deep learning research, he has wanted to reconcile deep learning with the classical algorithms that initially got him excited about computer science.

Meanwhile, Charles Blundell is a research lead at DeepMind who is interested in getting neural networks to make much better use of the huge quantities of data they’re exposed to. Examples include getting a network to tell us what it doesn’t know, to learn much more quickly, or to exceed expectations.

Circa 2012


Quantum ground-state problems are computationally hard problems for general many-body Hamiltonians; there is no classical or quantum algorithm known to be able to solve them efficiently. Nevertheless, if a trial wavefunction approximating the ground state is available, as often happens for many problems in physics and chemistry, a quantum computer could employ this trial wavefunction to project the ground state by means of the phase estimation algorithm (PEA). We performed an experimental realization of this idea by implementing a variational-wavefunction approach to solve the ground-state problem of the Heisenberg spin model with an NMR quantum simulator. Our iterative phase estimation procedure yields a high accuracy for the eigenenergies (to the 10⁻⁵ decimal digit).
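As a rough classical analogue of what the phase estimation readout accomplishes (a sketch assuming a two-spin Heisenberg model with J = 1, not the paper’s NMR procedure), one can Fourier-analyse the trial state’s autocorrelation under time evolution; the dominant frequency is the eigenenergy that the PEA would project out:

```python
import numpy as np
from scipy.linalg import expm

# Two-spin Heisenberg Hamiltonian H = J (XX + YY + ZZ); its ground
# state is the singlet with energy -3J.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
J = 1.0
H = J * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))

# A trial wavefunction chosen to overlap strongly with the singlet.
trial = np.array([0, 1, -0.8, 0], dtype=complex)
trial /= np.linalg.norm(trial)

# PEA reads out eigenphases of exp(-iHt); classically we can mimic
# that by Fourier-analysing the autocorrelation <trial|exp(-iHt)|trial>.
dt, n = 0.05, 256
ts = np.arange(n) * dt
f = np.array([trial.conj() @ expm(-1j * H * t) @ trial for t in ts])
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)   # angular frequencies
spectrum = np.abs(np.fft.fft(f))
print("estimated ground energy:", -omega[spectrum.argmax()])  # close to -3
```

The larger the trial state’s overlap with the true ground state, the more that dominant peak stands out, which is why a good variational wavefunction makes the projection efficient.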

What we need now is an expansion of public and private investment that does justice to the opportunity at hand. Such investments may have a longer time horizon, but their eventual impact is without parallel. I believe that net-energy gain is within reach in the next decade; commercialization, based on early prototypes, will follow in very short order.

But such timelines are heavily dependent on funding and the availability of resources. Considerable investment is being allocated to alternative energy sources — wind, solar, etc. — but fusion must have a place in the global energy equation. This is especially true as we approach the critical breakthrough moment.

If laser-driven nuclear fusion is perfected and commercialized, it has the potential to become the energy source of choice, displacing the many existing, less ideal energy sources. This is because fusion, if done correctly, offers energy that is in equal parts clean, safe and affordable. I am convinced that fusion power plants will eventually replace most conventional power plants and related large-scale energy infrastructure that are still so dominant today. There will be no need for coal or gas.

A study in which machine-learning models were trained to assess over 1 million companies has shown that artificial intelligence (AI) can accurately determine whether a startup firm will fail or become successful. The outcome is a tool, Venhound, that has the potential to help investors identify the next unicorn.

It is well known that around 90% of startups are unsuccessful: Between 10% and 22% fail within their first year, and this presents a significant risk to venture capitalists and other investors in early-stage companies. In a bid to identify which companies are more likely to succeed, researchers have developed machine-learning models trained on the historical performance of over 1 million companies. Their results, published in KeAi’s The Journal of Finance and Data Science, show that these models can predict the outcome of a company with up to 90% accuracy. This means that potentially 9 out of 10 companies are correctly assessed.

“This research shows how ensembles of non-linear machine-learning models applied to big data have huge potential to map large feature sets to business outcomes, something that is unachievable with traditional linear regression models,” explains co-author Sanjiv Das, Professor of Finance and Data Science at Santa Clara University’s Leavey School of Business in the US.
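To illustrate what such an ensemble looks like in practice (a hypothetical sketch on synthetic data, not the Venhound model or its actual features), one might combine several non-linear classifiers by soft voting:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data: synthetic "company" feature vectors (funding rounds,
# team size, sector signals, ...) with a binary success/failure label.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# An ensemble of non-linear models, combined by averaging their
# predicted class probabilities (soft voting).
ensemble = VotingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("boost", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```

The soft-voting design matters here: averaging probabilities lets a model that is confident on one kind of company outvote a model that is guessing, which is one reason ensembles tend to beat any single non-linear model on heterogeneous business data.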