Teaching computers to see — by learning to see like computers
By translating images into the language spoken by object-recognition systems, then translating them back, researchers hope to explain the systems’ failures.
An algorithm that can accurately gauge heart rate by measuring tiny head movements in video data could ultimately help diagnose cardiac disease.
A combination of crowdsourcing and computer vision could identify individuals within endangered populations.
A computerized system developed at MIT can tell the difference between smiles of joy and smiles of frustration.
By helping biologists turn their hunches into rigorous mathematical models, Polina Golland builds software that interprets medical images.
CSAIL associate professor develops AI systems that can interpret images.
Neuroscientist looks forward to collaborative studies of visual perception in the brain and its computational applications.
A simple new imaging system could help manufacturers inspect their products, forensics experts identify weapons and doctors identify cancers.
A new system lets you transfer open applications between a computer and a cellphone simply by pointing the phone’s camera at the computer’s screen.
An algorithm for identifying the boundaries of objects in digital images is 50,000 times more efficient than its predecessor.
Hint: We tend to remember pictures of people much better than wide open spaces.
With a single piece of inexpensive hardware — a multicolored glove — MIT researchers are making Minority Report-style interfaces more accessible.
Object recognition systems that break images into ever smaller parts should be much more efficient and may shed light on how the brain works.