AI-Enabled Robots Can Be Jailbroken & Manipulated to Cause Destruction, Says Research

A group of researchers from Penn Engineering created an algorithm that can jailbreak AI-enabled robots, bypass their safety protocols, and make them do harmful things.
An experiment was conducted on three popular AI robots and the researchers were able to make them cause intentional collisions, block emergency exits, and detonate bombs.
The good news is that the companies have already been informed and they’re collaborating with the researchers to enhance their security measures.

Researchers from Penn Engineering have found that AI-enabled robots can be hacked and manipulated to disobey safety instructions. The consequences of such bypass technology can be disastrous if it ends up in the wrong hands.

The team of researchers, led by George Pappas, conducted an experiment, the results of which were published on October 17 in this paper. The paper details how their algorithm, RoboPAIR, has managed to achieve a 100% jailbreak rate by compromising three different AI-powered robots.

Under normal circumstances, these robots would refuse any action that can cause harm. For instance, if you ask it to knock over someone, it would refuse. That’s because these robots are bound by multiple safety protocols that prevent them from performing any dangerous action.

However, when these robots were broken into, all of their safety protocols went out of the window and the researchers were able to make them do harmful stuff, such as causing collisions, blocking emergency exits, and even detonating a bomb.

“Our work shows that, at this moment, large language models are just not safe enough when integrated with the physical world,” Pappas said in a statement.

Details of the Experiment
The three robots used in the experiment were:

Nvidia’s Dolphin LLM: a self-driving simulator
Unitree’s Go2: a four-legged robot
Clearpath’s Robotics Jackal: this one’s basically a wheeled vehicle

Using the algorithm, the researchers were able to make Nvidia’s robot collide with a bus, pedestrians, and barriers. It was also made to ignore traffic signals, which it did.

Robotics Jackal was made to knock over some warehouse shelves onto a person, find a safe place to detonate a bomb, block an emergency exit, and intentionally collide with people in the room. Similar instructions were given to Unitree’s Go2 as well, and it carried them out.

What This Research Means and What Happens Now?
The findings of this study do not necessarily indicate the end of AI robots. However, it certainly highlights the need to rethink our approach towards AI safety because addressing these issues won’t be easy.

As Alexander Robey, the study’s lead author said, it’s not as simple as deploying a new software patch. It will require the developers to completely reevaluate how they train their AI robots and how they plan to integrate them with the physical world.

The good news, however, is that before releasing the study publicly, the researchers informed the affected companies about the situation and they are now collaborating with the researchers to fix the issue.

In a world where technology has such a strong presence, tests like these are important. Vulnerabilities are nothing to be ashamed of. Where there is technology, there will always be a vulnerability. The goal is to find and fix those weaknesses before threat actors can exploit them.

Source : TechReport

Related posts

Meta reintroduces facial recognition technology to combat celebrity scam ads, account hackers

Ozlo Sleepbuds hands-on: resurrected and I’ve slept so good

The Download: beyond freezing food, and AI mediation