Researchers create the first artificial vision system for both land and water
Inspired by a fiddler crab eye, scientists developed an amphibious artificial vision system with a panoramic visual field.
The MIT Mobility Initiative welcomes five inaugural industry members to advance safe, clean, and inclusive mobility.
The second AI Policy Forum Symposium convened global stakeholders across sectors to discuss critical policy questions in artificial intelligence.
MIT scientists unveil the first open-source simulation engine capable of constructing realistic environments for deployable training and testing of autonomous vehicles.
A new general-purpose optimizer can speed up the design of walking robots, self-driving vehicles, and other autonomous systems.
A new technique can safely guide an autonomous robot without knowledge of its environmental conditions or the size, shape, or location of obstacles it might encounter.
Graduate student Sarah Cen explores the interplay between humans and artificial intelligence systems to help build accountability and trust.
Researchers use artificial intelligence to help autonomous vehicles avoid idling at red lights.
A new machine-learning system may someday help driverless cars predict the next moves of nearby drivers, cyclists, and pedestrians in real time.
In MIT Mobility Forum talk, experts discuss a future for vehicle automation that lets technology and drivers interact.
In collaboration with industry representatives, Momentum students tackle wildfire suppression and search-and-rescue missions while building soft skills.
MIT ocean and mechanical engineers are using advances in scientific computing to address the ocean’s many challenges and seize its opportunities.
A levitating vehicle might someday explore the moon, asteroids, and other airless planetary surfaces.
Assistant professor of civil engineering describes her career in robotics, as well as the challenges and promise of human-robot interaction.
Deep-learning methods confidently recognize images that are nonsense, a potential problem for medical and autonomous-driving decisions.