Improving AI models’ ability to explain their predictions
A new approach could help users know whether to trust a model’s predictions in safety-critical applications like health care and autonomous driving.
The approach could help engineers tackle extremely complex design problems, from power grid optimization to vehicle design.
In 16.85 (Design and Testing of Autonomous Vehicles), AeroAstro students build software that allows autonomous flight vehicles to navigate unknown environments.
By leveraging idle computing time, researchers can double the speed of model training while preserving accuracy.
To help generative AI models create durable, real-world accessories and decor, the PhysiOpt system runs physics simulations and makes subtle tweaks to its 3D blueprints.
By providing holistic information on a cell, an AI-driven method could help scientists better understand disease mechanisms and plan experiments.
By enabling two chips to authenticate each other using a shared fingerprint, this technique can improve privacy and energy efficiency.
A new method developed at MIT could root out vulnerabilities and improve LLM safety and performance.
An AI control system co-developed by SMART researchers enables soft robotic arms to learn a broad set of motions once and adapt instantly to changing conditions without retraining.
By minimizing the need to drive around looking for a parking spot, this technique can save drivers up to 35 minutes — and give them a realistic estimate of total travel time.
The context of long-term conversations can cause an LLM to begin mirroring the user's viewpoints, possibly reducing accuracy or creating a virtual echo chamber.
Removing just a tiny fraction of the crowdsourced data that informs online ranking platforms can significantly change the results.
EnCompass executes AI agent programs by backtracking and making multiple attempts, finding the best set of outputs generated by an LLM. It could help coders work with AI agents more efficiently.
He joins Nikos Trichakis in guiding the cross-cutting initiative of the MIT Schwarzman College of Computing.
Torralba’s research focuses on computer vision, machine learning, and human visual perception.