
Artificial intelligence is advancing how scientists explore materials. Researchers from Ames Laboratory and Texas A&M University trained a machine-learning (ML) model to assess the stability of rare-earth compounds. The work was supported by the Laboratory Directed Research and Development (LDRD) program at Ames Laboratory. The framework they developed builds on current state-of-the-art methods for experimenting with compounds and understanding chemical instabilities.

Ames Lab has been a leader in rare-earths research since the middle of the 20th century. Rare earth elements have a wide range of uses including clean energy technologies, energy storage, and permanent magnets. Discovery of new rare-earth compounds is part of a larger effort by scientists to expand access to these materials.

The present approach is based on machine learning (ML), a form of artificial intelligence (AI), which is driven by computer algorithms that improve through data usage and experience. Researchers used the upgraded Ames Laboratory Rare Earth database (RIC 2.0) and high-throughput density-functional theory (DFT) to build the foundation for their ML model.
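To illustrate the general idea, here is a minimal sketch of how a stability classifier might be trained on DFT-derived data. The dataset file, descriptor names, and stability threshold are hypothetical placeholders, not the actual RIC 2.0 schema or the team's model.

```python
# Hypothetical sketch: train a model to flag likely-stable rare-earth compounds
# from DFT-computed energies. The CSV file and column names are illustrative
# placeholders, not the actual RIC 2.0 / Ames Laboratory data schema.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each row: a candidate compound with composition-derived descriptors and a
# DFT energy above the convex hull (eV/atom); small values suggest stability.
df = pd.read_csv("rare_earth_dft_candidates.csv")  # hypothetical dataset
features = ["mean_atomic_radius", "mean_electronegativity",
            "rare_earth_fraction", "valence_electron_count"]
X = df[features]
y = (df["energy_above_hull_eV_per_atom"] < 0.05).astype(int)  # stability label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```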

Engineers at the University of Cincinnati have developed a promising electrochemical system to convert emissions from chemical and power plants into useful products while addressing climate change.

UC College of Engineering and Applied Science assistant professor Jingjie Wu and his students used a two-step cascade reaction to convert carbon dioxide to carbon monoxide and then into ethylene, a chemical used in everything from food packaging to tires.
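Assuming the intermediate in such a cascade is carbon monoxide, which is typical for tandem CO2 electrolysis, the two reduction steps can be written generically as follows (this is a textbook scheme, not stoichiometry quoted from the UC study):

\[
\mathrm{CO_2 + 2\,H^+ + 2\,e^- \longrightarrow CO + H_2O}
\]
\[
\mathrm{2\,CO + 8\,H^+ + 8\,e^- \longrightarrow C_2H_4 + 2\,H_2O}
\]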

“The world is in a transition to a low-carbon economy. Carbon dioxide is primarily emitted from energy and chemical industries. We convert carbon dioxide into ethylene to reduce the carbon footprint,” Wu said. “The research idea is inspired by the basic principle of the plug flow reactor. We borrowed the reactor design principle in our segmented electrodes design for the two-stage conversion.”

Russian scientists have proposed a new algorithm for automatically decoding neural signals and interpreting the decoder's weights, which can be used both in brain-computer interfaces and in fundamental research. The results of the study were published in the Journal of Neural Engineering.

Brain-computer interfaces are needed to create robotic prostheses and neuroimplants, rehabilitation simulators, and devices that can be controlled by the power of thought. These devices help people who have suffered a stroke or physical injury to move (in the case of a robotic chair or prostheses), communicate, use a computer, and operate household appliances. In addition, in combination with machine learning methods, neural interfaces help researchers understand how the human brain works.

Most frequently, brain-computer interfaces use the electrical activity of neurons, measured, for example, with electro- or magnetoencephalography. However, a special decoder is needed to translate neuronal signals into commands. Traditional methods of signal processing require painstaking work to identify informative features: signal characteristics that, from a researcher's point of view, appear to be most important for the decoding task.
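As a rough illustration of why decoder weights need careful interpretation, here is a generic sketch (not the authors' algorithm) that fits a linear decoder to synthetic multichannel data and then projects its weights through the data covariance to obtain more interpretable activation patterns, following the widely used approach of Haufe et al. (2014):

```python
# Generic sketch, not the published method: fit a linear decoder on synthetic
# multichannel neural data, then convert its weights (filters) into activation
# patterns that are easier to interpret physiologically (Haufe et al., 2014).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels = 400, 32
X = rng.standard_normal((n_trials, n_channels))       # stand-in sensor features
y = (X[:, 5] + 0.5 * rng.standard_normal(n_trials) > 0).astype(int)  # toy labels

decoder = LogisticRegression(max_iter=1000).fit(X, y)
w = decoder.coef_.ravel()                              # decoder weights (filters)

# Raw weights are hard to read; projecting them through the data covariance
# yields patterns that indicate where the decodable signal actually lives.
patterns = np.cov(X, rowvar=False) @ w
print("most informative channel:", np.argmax(np.abs(patterns)))
```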

For me, the concern was just how easy it was to do. A lot of the things we used are out there for free. You can go and download a toxicity dataset from anywhere. If you have somebody who knows how to code in Python and has some machine learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic datasets. So that was the thing that got us really thinking about putting this paper out there; it was such a low barrier of entry for this type of misuse.


AI could be just as effective in developing biochemical weapons as it is in identifying helpful new drugs, researchers warn.

Researchers in Japan have developed a diamond FET with high hole mobility.


In the 1970s, Stephen Hawking found that an isolated black hole would emit radiation, but only when quantum mechanics is taken into account. Because this emission causes the black hole to slowly shrink, the process is known as black hole evaporation. It also gave rise to the black hole information paradox.
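For scale, the temperature of this radiation in Hawking's semiclassical calculation is inversely proportional to the black hole's mass,

\[
T_H = \frac{\hbar c^3}{8 \pi G M k_B},
\]

so a black hole heats up as it radiates away mass, which is why the evaporation runs to completion rather than stalling.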

If the black hole evaporates entirely, the physical information that fell into it would permanently disappear. However, this violates a core precept of quantum physics: information cannot vanish from the Universe.

A new study by an international quartet of physicists suggests that black holes are more complex than originally understood. They have a gravitational field that, at the quantum level, encodes information about how they were formed.

The research team includes Professor Xavier Calmet from the University of Sussex School of Mathematical and Physical Sciences, Professor Roberto Casadio (INFN, University of Bologna), and Professor Stephen Hsu (Michigan State University), along with Ph.D. student Folkert Kuipers (University of Sussex). Their study significantly improves understanding of black holes and resolves a problem that has confounded scientists for nearly half a century: the black hole information paradox.

Researchers have developed a mind-reading system for decoding neural signals from the brain during arm movement. The method, described in the journal Applied Soft Computing, can be used by a person to control a robotic arm through a brain-machine interface (BMI).

A BMI is a device that translates neural signals into commands to control a machine, such as a computer or a robotic limb. There are two main techniques for monitoring neural signals in BMIs: electroencephalography (EEG) and electrocorticography (ECoG).

EEG records signals from electrodes placed on the surface of the scalp and is widely employed because it is non-invasive, relatively cheap, safe, and easy to use. However, EEG has low spatial resolution and also detects irrelevant neural signals, which makes it difficult to interpret a person's intentions from the recordings alone.
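As a generic illustration of EEG-based movement decoding (not the method described in the Applied Soft Computing paper), one common pipeline band-pass filters each channel, takes log band power as a feature, and trains a linear classifier. The sampling rate, frequency band, and data below are made up for the sketch.

```python
# Generic EEG movement-decoding sketch on synthetic data (not the specific
# method from the study): band-pass filter, log band power, linear classifier.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250                                              # assumed sampling rate, Hz
b, a = butter(4, [8, 30], btype="bandpass", fs=fs)    # mu/beta motor band

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 200, 16, fs * 2
eeg = rng.standard_normal((n_trials, n_channels, n_samples))  # toy EEG epochs
labels = rng.integers(0, 2, n_trials)                 # e.g. left vs. right arm

filtered = filtfilt(b, a, eeg, axis=-1)               # filter each channel
features = np.log(np.var(filtered, axis=-1))          # log band power per channel

clf = LinearDiscriminantAnalysis().fit(features, labels)
print("training accuracy on toy data:", clf.score(features, labels))
```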