Wired
Using a new technique developed to examine the risks of multimodal large language models used in robots, MIT researchers were able to have a “simulated robot arm do unsafe things like knocking items off a table or throwing them by describing actions in ways that the LLM did not recognize as harmful and reject,” writes Will Knight for Wired. “With LLMs, a few wrong words don’t matter as much,” explains Prof. Pulkit Agrawal. “In robotics, a few wrong actions can compound and result in task failure more easily.”