“I expect vision-language models to play a major role in future embodied AI systems,” said Dr. Alvaro Cardenas.
How can misleading texts negatively affect AI behavior? This is what a recently submitted study hopes to address as a team of researchers from the University of California, Santa Cruz and Johns Hopkins University investigated the potential security risks of embodied AI, which is AI fixed in a physical body that uses observations to adapt to its environment, as opposed to using text and data, and include cars and robots. This study has the potential to help scientists, engineers, and the public better understand the risks for AI and the steps to take to mitigate them.
For the study, the researchers introduced CHAI (Command Hijacking against embodied AI), which is designed to combat outside threats to embodied AI systems, including misleading text and imagery. Instead, CHAI employs counterattacks that embodied Ais can use to disseminate right from wrong regarding text and images. The researchers tested CHAI on a variety of AI-based systems, including drone emergency landing, autonomous driving, aerial object tracking, and robotic vehicles. In the end, the researchers discovered that CHAI successfully identified incoming attacks while emphasizing the need for enhancing security measures for embodied AI.
