Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics
arXiv preprint arXiv:2402.10340, 2024
Abstract
In this paper, we highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications. Recent works focus on using LLMs and VLMs to improve the performance of robotics tasks such as manipulation and navigation. Despite these improvements, analyzing the safety of such systems remains underexplored yet extremely critical. LLMs and VLMs are highly susceptible to adversarial inputs, raising significant concerns about the safety of robotic systems that rely on them. This concern is important because robots operate in the physical world, where erroneous actions can result in severe consequences. This paper explores the issue thoroughly, presenting a mathematical formulation of potential attacks on LLM/VLM-based robotic systems and offering experimental evidence of the safety challenges. Our empirical findings highlight a significant vulnerability: simple modifications to the input can drastically reduce system effectiveness. Specifically, our results demonstrate an average performance deterioration of 19.4% under minor input prompt modifications and a more alarming 29.1% under slight perceptual changes. These findings underscore the urgent need for robust countermeasures to ensure the safe and reliable deployment of advanced LLM/VLM-based robotic systems.
| Paper | Project Website | Code |
|---|---|---|
| [arXiv](https://arxiv.org/abs/2402.10340) | Project Website | GitHub Code |
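
As a rough illustration of the kinds of input perturbations the abstract refers to, the sketch below probes an LLM/VLM-driven robot policy with minor prompt rewordings and slight image noise, then compares success rates against clean inputs. This is a minimal sketch only: the `policy(prompt, image)` interface, the synonym table, and the Gaussian pixel noise are illustrative assumptions, not the paper's actual attack formulation or released evaluation code.

```python
import random

import numpy as np

# Illustrative, meaning-preserving rewordings (stand-ins for the paper's
# "minor input prompt modifications"; this table is hypothetical).
SYNONYMS = {
    "pick up": "grab",
    "place": "put",
    "move to": "go to",
}


def rephrase_prompt(prompt: str, rng: random.Random) -> str:
    """Randomly apply small wording changes that keep the instruction's meaning."""
    out = prompt
    for phrase, alt in SYNONYMS.items():
        if phrase in out and rng.random() < 0.5:
            out = out.replace(phrase, alt, 1)
    return out


def perturb_image(image: np.ndarray, noise_std: float, rng: np.random.Generator) -> np.ndarray:
    """Add slight Gaussian pixel noise (a stand-in for "slight perceptual changes")."""
    noisy = image.astype(np.float32) + rng.normal(0.0, noise_std, size=image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)


def evaluate_robustness(policy, tasks, episodes: int = 10, noise_std: float = 2.0) -> dict:
    """Compare success rates on clean vs. perturbed (prompt, image) inputs.

    `policy` is assumed to be a callable `policy(prompt, image) -> bool` that
    returns task success, and `tasks` an iterable of (prompt, image) pairs;
    both interfaces are hypothetical, not part of the paper's code release.
    """
    text_rng = random.Random(0)
    img_rng = np.random.default_rng(0)
    clean, attacked = [], []
    for prompt, image in tasks:
        for _ in range(episodes):
            clean.append(bool(policy(prompt, image)))
            attacked.append(bool(policy(rephrase_prompt(prompt, text_rng),
                                        perturb_image(image, noise_std, img_rng))))
    return {
        "clean_success_rate": sum(clean) / len(clean),
        "perturbed_success_rate": sum(attacked) / len(attacked),
    }
```

The 19.4% and 29.1% drops reported above come from the paper's own benchmarks; a harness like this only mirrors the shape of that clean-versus-perturbed comparison.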
Please cite our work if you find it useful:
@misc{wu2024highlightingsafetyconcernsdeploying,
  title={Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics},
  author={Xiyang Wu and Souradip Chakraborty and Ruiqi Xian and Jing Liang and Tianrui Guan and Fuxiao Liu and Brian M. Sadler and Dinesh Manocha and Amrit Singh Bedi},
  year={2024},
  eprint={2402.10340},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2402.10340},
}