Enabling small language models to solve complex reasoning tasks
The “self-steering” DisCIPL system directs small models to work together on tasks with constraints, like itinerary planning and budgeting.
The new certificate program will equip naval officers with skills needed to solve the military’s hardest problems.
The technique can help scientists in economics, public health, and other fields understand whether to trust the results of their experiments.
By stacking multiple active components based on new materials on the back end of a computer chip, this new approach reduces the amount of energy wasted during computation.
The speech-to-reality system combines 3D generative AI and robotic assembly to create objects on demand.
This new technique enables LLMs to dynamically adjust the amount of computation they use for reasoning, based on the difficulty of the question.
MIT CSAIL and LIDS researchers developed a mathematically grounded system that lets soft robots deform, adapt, and interact with people and objects, without violating safety limits.
Large language models can learn to mistakenly link certain sentence patterns with specific topics — and may then repeat these patterns instead of reasoning.
BoltzGen generates protein binders for any biological target from scratch, expanding AI’s reach from understanding biology toward engineering it.
MIT neuroscientists find a surprising parallel in the ways humans and new AI models solve complex problems.
MIT researchers developed a way to identify the smallest dataset that guarantees optimal solutions to complex problems.
Associate Professor Phillip Isola studies the ways in which intelligent machines “think,” in an effort to safely integrate AI into human society.
The MIT Quantum Initiative is taking shape, leveraging quantum breakthroughs to drive the future of scientific and technological progress.
MIT PhD students who interned with the MIT-IBM Watson AI Lab Summer Program are pushing AI tools to be more flexible, efficient, and grounded in truth.
The coding framework uses modular concepts and simple synchronization rules to make software clearer, safer, and easier for LLMs to generate.