System lets robots identify an object’s properties through handling
With a novel simulation method, robots can guess the weight, softness, and other physical properties of an object just by picking it up.
“IntersectionZoo,” a benchmarking tool, uses a real-world traffic problem to test progress in deep reinforcement learning algorithms.
New type of “state-space model” leverages principles of harmonic oscillators.
Using diagrams to represent interactions in multipart systems can provide a faster way to design software improvements.
Researchers have created a unifying framework that can help scientists combine existing ideas to improve AI models or create new ones.
A new technique automatically guides an LLM toward outputs that adhere to the rules of whatever programming language or other format is being used.
By eliminating redundant computations, a new data-driven method can streamline processes like scheduling trains, routing delivery drivers, or assigning airline crews.
A new method from the MIT-IBM Watson AI Lab helps large language models steer their own responses toward safer, more ethical, value-aligned outputs.
The approach maintains an AI model’s accuracy while ensuring attackers can’t extract secret information.
This new framework leverages a model’s reasoning abilities to create a “smart assistant” that finds the optimal solution to multistep problems.
New research could allow a person to correct a robot’s actions in real time, using the kind of feedback they’d give another human.
A first history of the document security technology, co-authored by MIT Libraries’ Jana Dambrogio, provides new tools for interdisciplinary research.
Engineers developed a planning tool that can help independent entities decide when they should invest in joint projects.
MIT engineers propose a new “local electricity market” to tap into the power potential of homeowners’ grid-edge devices.
A new study shows LLMs represent different data types based on their underlying meaning and reason about data in their dominant language.