
Russian scientists have proposed a new algorithm for automatic decoding and for interpreting the decoder's weights, which can be used both in brain-computer interfaces and in fundamental research. The results of the study were published in the Journal of Neural Engineering.

Brain-computer interfaces are needed to create robotic prostheses and neuroimplants, rehabilitation simulators, and devices that can be controlled by the power of thought. These devices help people who have suffered a stroke or physical injury to move (in the case of a robotic chair or prostheses), communicate, use a computer, and operate household appliances. In addition, in combination with machine learning methods, neural interfaces help researchers understand how the human brain works.

Most often, brain-computer interfaces rely on the electrical activity of neurons, measured, for example, with electro- or magnetoencephalography. However, a special decoder is needed to translate neuronal signals into commands. Traditional signal-processing methods require painstaking work to identify informative features: signal characteristics that, from a researcher's point of view, appear most important for the decoding task.
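To make the idea of hand-crafted features concrete, here is a minimal sketch (not the authors' method) of a classic pipeline: compute the power of a single frequency band from a synthetic one-channel EEG trial and decode "rest" vs. "move" with a simple threshold. The sampling rate, band limits, and threshold are illustrative assumptions.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average spectral power of `signal` within `band` (Hz), via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

rng = np.random.default_rng(0)
fs = 256  # sampling rate in Hz (assumed)
t = np.arange(fs) / fs  # one second of data

# Two synthetic single-channel trials: at rest the sensorimotor mu rhythm
# (~10 Hz) is strong; during movement it desynchronizes and weakens.
rest = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(fs)
move = 0.5 * rng.standard_normal(fs)

# Hand-crafted feature: 8-12 Hz band power. A threshold (chosen by eye
# here; a real decoder would fit it from training data) acts as the decoder.
threshold = 10.0
for name, trial in [("rest", rest), ("move", move)]:
    power = band_power(trial, fs, (8, 12))
    decoded = "rest" if power > threshold else "move"
    print(f"{name}: band power {power:.1f} -> decoded '{decoded}'")
```

Real systems extract many such features across channels and bands and feed them to a trained classifier; the point is that each feature embodies a researcher's prior judgment about what matters.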

"For me, the concern was just how easy it was to do. A lot of the things we used are out there for free. You can go and download a toxicity dataset from anywhere. If you have somebody who knows how to code in Python and has some machine learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic datasets. So that was the thing that got us really thinking about putting this paper out there; it was such a low barrier of entry for this type of misuse."

AI could be just as effective in developing biochemical weapons as it is in identifying helpful new drugs, researchers warn.

Researchers have developed a mind-reading system for decoding neural signals from the brain during arm movement. The method, described in the journal Applied Soft Computing, can be used by a person to control a robotic arm through a brain-machine interface (BMI).

A BMI is a device that translates neural signals into commands to control a machine, such as a computer or a robotic limb. There are two main techniques for monitoring neural signals in BMIs: electroencephalography (EEG) and electrocorticography (ECoG).

EEG records signals from the surface of the scalp and is widely employed because it is non-invasive, relatively cheap, safe, and easy to use. However, EEG has low spatial resolution and picks up irrelevant neural signals, which makes it difficult to infer an individual's intentions from the recordings.
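One standard, cheap countermeasure to the nuisance signals mixed into scalp EEG is re-referencing, for example a common average reference (CAR), which subtracts the instantaneous mean across all electrodes from each channel. The sketch below (illustrative only; channel count and noise levels are assumptions) shows CAR removing a signal shared by every electrode while leaving channel-specific activity intact.

```python
import numpy as np

def common_average_reference(eeg):
    """Subtract the mean across channels at each time point.

    eeg: array of shape (n_channels, n_samples). Activity common to all
    electrodes (broad artifacts, a noisy reference) is removed; activity
    localized to individual channels is largely preserved.
    """
    return eeg - eeg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(1)
n_channels, n_samples = 8, 512

local = rng.standard_normal((n_channels, n_samples))   # channel-specific part
common = 5.0 * rng.standard_normal((1, n_samples))     # shared nuisance signal
eeg = local + common

clean = common_average_reference(eeg)
print(f"std before CAR: {eeg.std():.2f}, after CAR: {clean.std():.2f}")
```

After CAR the overall variance drops sharply because the large shared component cancels; what survives is the per-channel activity a decoder actually needs. More sophisticated spatial filters (e.g. common spatial patterns) pursue the same goal in a data-driven way.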

NexStem, a MedTech and robotics startup that creates non-invasive robotic solutions controlled exclusively by a user's thoughts, today announced the finalization of its latest round of funding and the general availability of its NexStem Headsets and Wisdom-SDK (software development kit). This pioneer in the development of advanced end-to-end Brain-Computer Interface (BCI) devices and applications has cracked the code on improving the quality of the electroencephalography (EEG) signals harnessed by BCIs — a critical next step in inserting the human into the metaverse.

Physicists have discovered a new way to coat soft robots in materials that allow them to move and function in a more purposeful way. The research, led by the University of Bath, is described in a paper published on March 11, 2022, in Science Advances.

Authors of the study believe their breakthrough modeling on ‘active matter’ could mark a turning point in the design of robots. With further development of the concept, it may be possible to determine the shape, movement, and behavior of a soft solid not by its natural elasticity but by human-controlled activity on its surface.

For almost a century, we’ve been intrigued and sometimes terrified by the big questions of artificial intelligence. Will computers ever become truly intelligent? Will the time come when machines can operate without human intervention? What would happen if a machine developed a conscience?

In this episode of Perspectives, six experts in the fields of robotics, sci-fi, and philosophy discuss breakthroughs in the development of AI that are both encouraging and a bit worrisome.

Clips in this video are from the following series on Wondrium:

Mind-Body Philosophy, presented by Patrick Grim.