
Now, a new algorithm developed by Brown University bioengineers could be an important step toward such adaptive DBS. The algorithm removes a key hurdle that makes it difficult for DBS systems to sense brain signals while simultaneously delivering stimulation.

“We know that there are signals in the brain associated with disease states, and we’d like to be able to record those signals and use them to adjust neuromodulation therapy automatically,” said David Borton, an assistant professor of biomedical engineering at Brown and corresponding author of a study describing the algorithm. “The problem is that stimulation creates electrical artifacts that corrupt the signals we’re trying to record. So we’ve developed a means of identifying and removing those artifacts, so all that’s left is the signal of interest from the brain.”
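The core idea of separating a stereotyped stimulation artifact from the underlying neural signal can be illustrated with a toy example. The Brown team's actual algorithm is not described in this excerpt; the sketch below is a common baseline technique (pulse-locked template subtraction), assuming the artifact repeats identically after every stimulation pulse. All signal parameters here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                      # sampling rate in Hz (assumed)
pulse_period = 100             # one stimulation pulse every 100 samples
n_pulses = 50
n = pulse_period * n_pulses

# Simulated neural signal of interest: a slow oscillation plus noise.
t = np.arange(n) / fs
neural = 5.0 * np.sin(2 * np.pi * 8 * t) + rng.normal(0.0, 1.0, n)

# Simulated stimulation artifact: an identical large transient after
# each pulse, dwarfing the neural signal.
template_true = np.zeros(pulse_period)
template_true[:20] = 100.0 * np.exp(-np.arange(20) / 4.0)
artifact = np.tile(template_true, n_pulses)

recorded = neural + artifact

# Template subtraction: average all pulse-aligned epochs to estimate
# the stereotyped artifact (the neural activity averages out because it
# is not locked to the pulses), then subtract it from every epoch.
epochs = recorded.reshape(n_pulses, pulse_period)
template_est = epochs.mean(axis=0)
cleaned = (epochs - template_est).ravel()
```

After subtraction, the large pulse-locked transients are gone and what remains is dominated by the simulated neural oscillation, which is the "signal of interest" the quote refers to.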

From chatbots that answer tax questions to algorithms that drive autonomous vehicles and dish out medical diagnoses, artificial intelligence undergirds many aspects of daily life. Creating smarter, more accurate systems requires a hybrid human-machine approach, according to researchers at the University of California, Irvine. In a study published this month in Proceedings of the National Academy of Sciences, they present a new mathematical model that can improve performance by combining human and algorithmic predictions and confidence scores.

“Humans and machine algorithms have complementary strengths and weaknesses. Each uses different sources of information and strategies to make predictions and decisions,” said co-author Mark Steyvers, UCI professor of cognitive sciences. “We show through empirical demonstrations as well as theoretical analyses that humans can improve the predictions of AI even when human accuracy is somewhat below [that of] the AI—and vice versa. And this accuracy is higher than combining predictions from two individuals or two AI algorithms.”

To test the framework, researchers conducted an image classification experiment in which human participants and computer algorithms worked separately to correctly identify distorted pictures of animals and everyday items—chairs, bottles, bicycles, trucks. The human participants ranked their confidence in the accuracy of each image identification as low, medium or high, while the machine classifier generated a continuous score. The results showed large differences in confidence between humans and AI algorithms across images.
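The paper's actual model is not reproduced in this excerpt, but the basic idea of fusing a human judgment with a machine classifier's score can be sketched with a naive-Bayes-style combination in log-odds space. The `HUMAN_CONF` calibration values (mapping "low/medium/high" to probabilities) and the independence assumption are hypothetical simplifications, not the study's model:

```python
import math

# Hypothetical calibration: how often a human is right at each
# self-reported confidence level (assumed values for illustration).
HUMAN_CONF = {"low": 0.6, "medium": 0.75, "high": 0.9}

def log_odds(p):
    return math.log(p / (1 - p))

def combine(human_says_a, human_conf, machine_p_a):
    """Fuse one human and one machine prediction for binary class A.

    Assuming both sources are calibrated and independent, their
    log-odds simply add (naive Bayes).
    """
    p_human = HUMAN_CONF[human_conf]
    if not human_says_a:
        p_human = 1.0 - p_human
    z = log_odds(p_human) + log_odds(machine_p_a)
    return 1.0 / (1.0 + math.exp(-z))   # combined probability of A
```

Under this toy model, agreement between the two sources yields a combined probability more extreme than either alone, while disagreement pulls the estimate toward uncertainty, which is the qualitative behavior the quoted complementarity argument describes.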

Amazon and Virginia Tech today announced the establishment of the Amazon–Virginia Tech Initiative for Efficient and Robust Machine Learning.

The initiative will provide an opportunity for doctoral students in the College of Engineering who are conducting AI and ML research to apply for Amazon fellowships, and it will support research efforts led by Virginia Tech faculty members. Under the initiative, Virginia Tech will host an annual public research symposium to share knowledge with the machine learning and related research communities. And in collaboration with Amazon, Virginia Tech will co-host two annual workshops, and training and recruiting events for Virginia Tech students.

“This initiative’s emphasis will be on efficient and robust machine learning, such as ensuring algorithms and models are resistant to errors and adversaries,” said Naren Ramakrishnan, the director of the Sanghani Center and the Thomas L. Phillips Professor of Engineering. “We’re pleased to continue our work with Amazon and expand machine learning research capabilities that could address worldwide industry-focused problems.”

Both quantum computing and machine learning have been touted as the next big computing revolution for quite a while now.

However, experts have pointed out that these techniques aren’t generalized tools – they will deliver a great leap in computing power only for very specialized algorithms, and even more rarely will they be able to work on the same problem.

One such example of where they might work together is modeling the answer to one of the thorniest problems in physics: How does General Relativity relate to the Standard Model?

In a paper published on February 23, 2022 in Nature Machine Intelligence, a team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) introduce a robust soft haptic sensor named “Insight” that uses computer vision and a deep neural network to accurately estimate where objects come into contact with the sensor and how large the applied forces are. The research project is a significant step toward robots being able to feel their environment as accurately as humans and animals. Like its natural counterpart, the fingertip sensor is very sensitive, robust, and high-resolution.

The thumb-shaped sensor is made of a soft shell built around a lightweight stiff skeleton. This skeleton holds up the structure much like bones stabilize the soft finger tissue. The shell is made from an elastomer mixed with dark but reflective aluminum flakes, resulting in an opaque grayish color that prevents any external light from finding its way in. Hidden inside this finger-sized cap is a tiny 160-degree fish-eye camera, which records colorful images, illuminated by a ring of LEDs.

When any objects touch the sensor’s shell, the appearance of the color pattern inside the sensor changes. The camera records images many times per second and feeds a deep neural network with this data. The algorithm detects even the smallest change in light in each pixel. Within a fraction of a second, the trained machine-learning model can map out where exactly the finger is contacting an object, determine how strong the forces are, and indicate the force direction. The model infers what scientists call a force map: It provides a force vector for every point in the three-dimensional fingertip.
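The pipeline described above (camera image → per-pixel change → force map) can be sketched in miniature. Insight's real model is a trained deep neural network; the toy below replaces it with a made-up linear calibration from intensity change to normal force, on an invented 8×8 grid of surface points, purely to show what a "force map" output looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 8, 8   # coarse grid of fingertip surface points (illustrative)

# Reference image (no contact) and a touched image in which a small
# patch brightens in proportion to the applied force.
reference = rng.uniform(0.2, 0.4, (H, W))
touched = reference.copy()
touched[2:4, 3:5] += 0.5          # simulated contact at rows 2-3, cols 3-4

# The real sensor feeds images to a deep network; here a toy linear
# calibration stands in: intensity change -> normal force in newtons.
GAIN = 2.0                        # assumed N per unit intensity change
delta = touched - reference
force_map = np.zeros((H, W, 3))   # (Fx, Fy, Fz) vector per surface point
force_map[..., 2] = GAIN * delta  # this toy recovers only the normal component

# Contact localization: points where the inferred force is non-negligible.
contact = np.argwhere(force_map[..., 2] > 0.1)
```

The real network additionally infers the tangential components (force direction), which a single-channel intensity change cannot provide; the toy only illustrates the map-from-image structure of the output.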

A new study challenges the conventional approach to designing soft robotics and a class of materials called metamaterials by utilizing the power of computer algorithms. Researchers from the University of Illinois Urbana-Champaign and Technical University of Denmark can now build multimaterial structures without dependence on human intuition or trial-and-error to produce highly efficient actuators and energy absorbers that mimic designs found in nature.

The study, led by Illinois civil and environmental engineering professor Shelly Zhang, uses optimization theory and an algorithm-based design process called topology optimization. Also known as digital synthesis, the process builds composite structures that can precisely achieve complex prescribed mechanical responses.
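The flavor of such an optimization-driven design step can be shown with a heavily simplified toy. Real topology optimization couples a finite-element model to the density update; the sketch below strips that away and allocates a fixed material budget across four cells to minimize an assumed compliance-like objective, with the volume constraint enforced by bisection on a Lagrange multiplier. The cost values and objective form are invented for illustration and are not the study's method:

```python
import numpy as np

# Toy material-allocation step in the spirit of density-based topology
# optimization: minimize sum(c_i / x_i) over cell densities x_i in
# [0.01, 1], subject to a mean-density (volume) budget. The stationarity
# condition c_i / x_i**2 = lam gives x_i = sqrt(c_i / lam); lam is found
# by bisection so the volume constraint is satisfied.
c = np.array([4.0, 1.0, 1.0, 0.25])   # hypothetical per-cell stiffness demand
vol_frac = 0.5                        # allowed mean density

lo, hi = 1e-9, 1e9
for _ in range(200):
    lam = 0.5 * (lo + hi)
    x = np.clip(np.sqrt(c / lam), 0.01, 1.0)
    if x.mean() > vol_frac:           # too much material -> raise the price
        lo = lam
    else:
        hi = lam
```

The result puts more material where the demand `c_i` is high, which is the essential behavior of these algorithms: material placement emerges from the optimization rather than from human intuition.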

The study results are published in the Proceedings of the National Academy of Sciences.

Engineers sometimes turn to nature for inspiration. Cold Spring Harbor Laboratory Associate Professor Saket Navlakha and research scientist Jonathan Suen have found that adjustment algorithms—the same feedback control process by which the Internet optimizes data traffic—are used by several natural systems to sense and stabilize behavior, including ant colonies, cells, and neurons.

Internet engineers route data around the world in small packets, which are analogous to foraging ants. As Navlakha explains, “The goal of this work was to bring together ideas from biology and Internet design and relate them to the way ants forage.”

The same algorithm used by internet engineers is used by ants when they forage for food. At first, the colony may send out a single ant. When the ant returns, it provides information about how much food it got and how long it took to get it. The colony would then send out two ants. If they return with food, the colony may send out three, then four, five, and so on. But if ten ants are sent out and most do not return, then the colony does not decrease the number it sends to nine. Instead, it cuts the number by a large amount, to a fraction (say half) of what it sent before: only five ants. In other words, the number of ants slowly adds up when the signals are positive, but is cut dramatically lower when the information is negative. Navlakha and Suen note that the system works even if individual ants get lost and parallels a particular type of “additive-increase/multiplicative-decrease algorithm” used on the internet.
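The additive-increase/multiplicative-decrease (AIMD) rule described above is simple enough to state directly in code. The increase step of 1 and decrease factor of one half mirror the example in the text (TCP congestion control classically uses the same constants); the function and variable names are illustrative:

```python
def aimd_step(n_ants, success, increase=1, decrease=0.5):
    """One foraging round: add `increase` ants on success,
    multiply by `decrease` (rounding down, minimum 1) on failure."""
    if success:
        return n_ants + increase
    return max(1, int(n_ants * decrease))

# Replay the scenario from the text: steady gains, then one bad round.
history = [1]
for ok in [True, True, True, True, False, True, True]:
    history.append(aimd_step(history[-1], ok))
# history -> [1, 2, 3, 4, 5, 2, 3, 4]
```

The sawtooth this produces (slow linear climb, sharp cut on a bad signal) is exactly the behavior the article attributes to both TCP flows and ant colonies.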