A new way to make graphs more accessible to blind and low-vision readers

Bar graphs and other charts provide a simple way to communicate data, but they are inherently visual, which makes them difficult to translate for readers who are blind or have low vision. Designers have developed methods for converting these visuals into “tactile charts,” but the guidelines for doing so are extensive (for example, the Braille Authority of North America’s 2022 guidebook is 426 pages long). The process also requires juggling different types of software, as designers often draft a chart in a program like Adobe Illustrator and then translate it into Braille using another application.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have now developed an approach that streamlines the design process for tactile chart designers. Their program, called “Tactile Vega-Lite,” can take data from something like an Excel spreadsheet and turn it into both a standard visual chart and a touch-based one. Design standards are hardwired as default rules within the program to help educators and designers automatically create accessible tactile charts.
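Tactile Vega-Lite builds on Vega-Lite, a declarative JSON grammar for charts. As a rough sketch of the kind of spec such a pipeline consumes, the snippet below uses Altair, Vega-Lite’s standard Python API, to turn a small table into a bar-chart spec. The data values are placeholders, and the tactile-specific defaults that Tactile Vega-Lite adds (braille-ready labels, embosser spacing) are not part of stock Vega-Lite and are only gestured at in the comments.

```python
# A minimal sketch, not the authors' implementation: Vega-Lite is a JSON
# grammar for charts, and Altair is its Python API. Tactile Vega-Lite's
# braille- and embosser-oriented defaults extend this grammar and are not
# shown here.
import altair as alt
import pandas as pd

# Illustrative data: minimum wage by state (values are placeholders).
data = pd.DataFrame({
    "state": ["CA", "NY", "TX", "WA"],
    "min_wage": [16.00, 15.00, 7.25, 16.28],
})

chart = alt.Chart(data).mark_bar().encode(
    x=alt.X("state:N", title="State"),
    y=alt.Y("min_wage:Q", title="Minimum wage (USD/hour)"),
)

# Save the declarative spec; a tactile pipeline could consume this JSON and
# apply touch-oriented defaults before sending the result to an embosser.
chart.save("min_wage.json")
```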

The tool could make it easier for blind and low-vision readers to understand many graphics, such as a bar chart comparing minimum wages across states or a line graph tracking countries’ GDPs over time. To bring a design into the physical world, you can tweak the chart in Tactile Vega-Lite and then send its file to a Braille embosser, which renders the design as raised, touch-readable dots.

Study shows people in Japan treat robots and AI agents more respectfully than people in Western societies do

Imagine an automated delivery vehicle rushing to complete a grocery drop-off while you are hurrying to meet friends for a long-awaited dinner. At a busy intersection, you both arrive at the same time. Do you slow down to give it space as it maneuvers around a corner? Or do you expect it to stop and let you pass, even if normal traffic etiquette suggests it should go first?

“As autonomous driving becomes a reality, these everyday encounters will define how we share the road with intelligent machines,” says Dr. Jurgis Karpus from the Chair of Philosophy of Mind at LMU. He explains that the arrival of fully automated self-driving cars signals a shift from merely using AI tools such as Google Translate or ChatGPT to actively interacting with them. The key difference? In busy traffic, our interests will not always align with those of the self-driving cars we encounter. We have to interact with them, even if we ourselves are not using them.

In a study published recently in the journal Scientific Reports, researchers from LMU Munich and Waseda University in Tokyo found that people are far more likely to take advantage of cooperative artificial agents than of similarly cooperative fellow humans. “After all, cutting off a robot in traffic doesn’t hurt its feelings,” says Karpus, lead author of the study.

‘Super-Turing AI’ uses less energy by mimicking the human brain

Artificial Intelligence (AI) can perform complex calculations and analyze data faster than any human, but doing so requires enormous amounts of energy. The human brain is also an incredibly powerful computer, yet it runs on very little energy, roughly 20 watts.

As AI applications increasingly expand, a new approach to AI’s “thinking,” developed by researchers including Texas A&M University engineers, mimics the human brain and has the potential to revolutionize the AI industry.

Dr. Suin Yi, assistant professor of electrical and computer engineering at Texas A&M’s College of Engineering, is on a team of researchers that developed “Super-Turing AI,” which operates more like the human brain. This new AI integrates certain processes instead of separating them and then migrating huge amounts of data like current systems do.

Brain-like computer steers rolling robot with 0.25% of the power needed by conventional controllers

A smaller, lighter and more energy-efficient computer, demonstrated at the University of Michigan, could help save weight and power for autonomous drones and rovers, with implications for autonomous vehicles more broadly.

The autonomous controller has among the lowest power requirements reported, according to the study published in Science Advances. It operates at a mere 12.5 microwatts—in the ballpark of a pacemaker.
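The headline figure follows from that number: if 12.5 microwatts is 0.25% of a conventional controller’s draw, the implied conventional baseline is about 5 milliwatts. A quick sanity check (the baseline is inferred from the stated percentage, not reported directly here):

```python
# Back-of-the-envelope check: 12.5 microwatts at 0.25% of conventional power
# implies a conventional controller draw of about 5 milliwatts.
new_controller_w = 12.5e-6        # 12.5 microwatts
fraction = 0.0025                 # 0.25%
conventional_w = new_controller_w / fraction
print(f"Implied conventional draw: {conventional_w * 1e3:.1f} mW")  # -> 5.0 mW
```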

In testing, a rolling robot using the controller was able to pursue a target zig-zagging down a hallway with the same speed and accuracy as with a conventional digital controller. In a second trial, in which a lever arm had to be automatically repositioned, the new controller did just as well.

Cracking the code of private AI: The role of entropy in secure language models

Large Language Models (LLMs) have rapidly become an integral part of our digital landscape, powering everything from chatbots to code generators. However, as these AI systems increasingly rely on proprietary, cloud-hosted models, concerns over user privacy and data security have escalated. How can we harness the power of AI without exposing sensitive data?

A recent study, “Entropy-Guided Attention for Private LLMs,” by Nandan Kumar Jha, a Ph.D. candidate at the NYU Center for Cybersecurity (CCS), and Brandon Reagen, Assistant Professor in the Department of Electrical and Computer Engineering and a member of CCS, introduces a novel approach to making AI more secure.

The paper was presented at the AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI 25) in early March and is available on the arXiv preprint server.
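As the title suggests, the paper’s guiding quantity is the entropy of attention distributions. As a generic illustration (not the authors’ code), the sketch below computes the Shannon entropy of each query’s softmax attention weights in PyTorch; the tensor shapes and the toy input are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def attention_entropy(scores: torch.Tensor) -> torch.Tensor:
    """Shannon entropy (in nats) of each query's attention distribution.

    scores: raw attention logits of shape (..., seq_len_q, seq_len_k).
    Returns a tensor of shape (..., seq_len_q).
    """
    probs = F.softmax(scores, dim=-1)
    # H = -sum_k p_k * log(p_k); the clamp avoids log(0) for near-zero weights.
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

# Example: one attention head over an 8-token sequence.
logits = torch.randn(1, 8, 8)
print(attention_entropy(logits))  # uniform rows would give log(8) ≈ 2.08 nats
```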

Humans as hardware: Computing with biological tissue

Most computers run on microchips, but what if we’ve been overlooking a simpler, more elegant computational tool all this time? In fact, what if we were the computational tool?

As crazy as it sounds, a future in which humans are the ones doing the computing may be closer than we think. In an article published in IEEE Access, Yo Kobayashi from the Graduate School of Engineering Science at the University of Osaka demonstrates that living tissue can be used to process information and solve complex equations, much as a computer does.

This achievement is an example of the power of the computational framework known as reservoir computing, in which data are fed into a complex “reservoir” that has the ability to encode rich patterns. A computational model then learns to convert these patterns into meaningful outputs via a neural network.
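For a sense of the mechanics, here is a minimal, generic echo-state-network sketch of reservoir computing in NumPy: a fixed random reservoir transforms an input sequence, and only a linear readout is trained. The toy task and every parameter are illustrative assumptions; in Kobayashi’s work the reservoir is living tissue rather than a simulated network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir; only the linear readout below is trained.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave from the current one.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)  # train the linear readout
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```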

Liquid robot can transform, separate and fuse like living cells

A joint research team has developed a next-generation liquid-based soft robot. The research was published in Science Advances.

Biological cells possess the ability to deform, freely divide, fuse, and capture foreign substances. Research efforts have long been dedicated to replicating these unique capabilities in artificial systems. However, traditional solid-based robots have faced limitations in effectively mimicking the flexibility and functionality of living cells.

To overcome these challenges, the joint research team developed a particle-armored liquid robot: a liquid body encased in an unusually dense layer of hydrophobic (water-repelling) particles.

Scientists develop dog-inspired robot that runs without motors

Scientists from TU Delft and EPFL have created a quadruped robot capable of running like a dog without the need for motors. The work, which combines innovative mechanics with data-driven technology, was published in Nature Machine Intelligence and could pave the way for energy-efficient robotics.

“Commercial quadruped robots are becoming more common, but their energy inefficiency limits their operating time,” explains Cosimo Della Santina, assistant professor at TU Delft. “Our goal was to address this issue by optimizing the robot’s mechanics to mimic the efficiency of biological systems.”