New depth sensors could be sensitive enough for self-driving cars
Computational method improves the resolution of time-of-flight depth sensors 1,000-fold.
A virtual reality system from the Computer Science and Artificial Intelligence Laboratory could make it easier for factory workers to telecommute.
Using smartphone cameras, a system for seeing around corners could aid self-driving cars and search-and-rescue efforts.
Computer vision and machine learning expert Antonio Torralba to lead new artificial intelligence research lab.
Given a still image of a dish filled with food, CSAIL team's deep-learning algorithm recommends ingredients and recipes.
Commercial prototypes of the system will be available to the self-driving vehicle industry.
Innovative MIT research focuses on developing systems to perceive and identify objects in their environment and understand social interactions in traffic.
GelSight technology lets robots gauge objects’ hardness and manipulate small tools.
A look at 16 of the coolest things that happened at the Computer Science and Artificial Intelligence Laboratory in 2016.
Machine-learning system doesn’t require costly hand-annotated data.
Given a still image, CSAIL deep-learning system generates videos that predict what will happen next in a scene.
Imaging scientist and inventor sets sights on launching peer-to-peer invention platforms for global impact.
Technique from Computer Science and Artificial Intelligence Lab could improve augmented reality and reduce the need for CGI green-screens.
MIT Lincoln Laboratory radar system achieves centimeter-level localization; could help driverless cars stay in lane when road markings are obscured.
Deep-learning vision system from the Computer Science and Artificial Intelligence Lab anticipates human interactions using videos of TV shows.