A new optimization framework for robot motion planning
MIT CSAIL researchers established new connections between combinatorial and continuous optimization, which can find global solutions for complex motion-planning puzzles.
Human Guided Exploration (HuGE) enables AI agents to learn quickly with some help from humans, even if the humans make mistakes.
Computer vision enables contact-free 3D printing, letting engineers print with high-performance materials they couldn’t use before.
The team’s new algorithm finds failures and fixes in all sorts of autonomous systems, from drone teams to power grids.
By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.
The vibrating platform could be useful for growing artificial muscles to power soft robots and testing therapies for neuromuscular diseases.
Researchers coaxed a family of generative AI models to work together to solve multistep robot manipulation problems.
Some researchers see formal specifications as a way for autonomous systems to "explain themselves" to humans. But a new study finds that humans aren't understanding them as intended.
MIT engineers develop a long, curved touch sensor that could enable a robot to grasp and manipulate objects in multiple ways.
Designed to ensure safer skies, “Air-Guardian” blends human intuition with machine precision, creating a more symbiotic relationship between pilot and aircraft.
The MIT and Accenture Convergence Initiative for Industry and Technology announces new graduate fellows.
Sharmi Shah ’23 pursued Course 2-A/6, a customizable degree path that combines mechanical engineering with computer science and electrical engineering.
When he isn’t investigating human motor control, the graduate student gives back by volunteering with programs that helped him grow as a researcher.
Sharifa Alghowinem, a research scientist at the Media Lab, explores personal robot technology that explains emotions in English and Arabic.
With a new technique, a robot can reason efficiently about moving objects using more than just its fingertips.