What jumps out in a photo changes the longer we look
Researchers capture our shifting gaze in a model that suggests how to prioritize visual information based on viewing duration.
Researchers test how far artificial intelligence models can go in dreaming up varied poses and colors of objects and animals in photos.
Professor Aleksander Madry strives to build machine-learning models that are more reliable, understandable, and robust.
Computer model of face processing could reveal how the brain produces richly detailed visual representations so quickly.
Weather’s a problem for autonomous cars. MIT’s new system shows promise by using “ground-penetrating radar” instead of cameras or lasers.
Researchers develop a more robust machine-vision architecture by studying how human vision responds to changing viewpoints of objects.
Objects are posed in varied positions and shot at odd angles to spur new AI techniques.
A new computational imaging method could change how we view hidden information in scenes.
Model registers “surprise” when objects in a scene do something unexpected, which could be used to build smarter AI.
System from MIT CSAIL sizes up drivers as selfish or selfless. Could this help self-driving cars navigate in traffic?
Drones can fly at high speeds to a destination while keeping safe “backup” plans if things go awry.
Commercial cloud service providers give artificial intelligence computing at MIT a boost.
Two longtime friends explore how computer vision systems go awry.
An MIT/IBM system could help artists and designers make quick tweaks to visuals while also helping researchers identify “fake” images.
MIT CSAIL system can learn to see by touching and feel by seeing, suggesting a future where robots can more easily grasp and recognize objects.