Training LLMs to self-detoxify their language
A new method from the MIT-IBM Watson AI Lab helps large language models steer their own responses toward safer, more ethical, value-aligned outputs.
The approach maintains an AI model’s accuracy while ensuring attackers can’t extract secret information.
This new framework leverages a model’s reasoning abilities to create a “smart assistant” that finds the optimal solution to multistep problems.
New research could allow a person to correct a robot’s actions in real time, using the kind of feedback they’d give another human.
A first history of the document security technology, co-authored by MIT Libraries’ Jana Dambrogio, provides new tools for interdisciplinary research.
Engineers developed a planning tool that can help independent entities decide when they should invest in joint projects.
MIT engineers propose a new “local electricity market” to tap into the power potential of homeowners’ grid-edge devices.
A new study shows LLMs represent different data types based on their underlying meaning and reason about data in their dominant language.
ReviveMed uses AI to gather large-scale data on metabolites — molecules like lipids, cholesterol, and sugar — to match patients with therapeutics.
Whitehead Institute and CSAIL researchers created a machine-learning model to predict and generate protein localization, with implications for understanding and remedying disease.
Accenture Fellow Shreyaa Raghavan applies machine learning and optimization methods to explore ways to reduce transportation sector emissions.
Assistant Professor Sara Beery is using automation to improve monitoring of migrating salmon in the Pacific Northwest.
MIT CSAIL Principal Research Scientist Una-May O’Reilly discusses how she develops agents that reveal AI models’ security weaknesses before hackers do.
Sometimes, it might be better to train a robot in an environment that’s different from the one where it will be deployed.
Associate Professor Luca Carlone is working to give robots a more human-like awareness of their environment.