Teaching artificial intelligence to connect senses like vision and touch
MIT CSAIL system can learn to see by touching and feel by seeing, suggesting a future where robots can more easily grasp and recognize objects.
MIT startup Inkbit is overcoming traditional constraints to 3-D printing by giving its machines “eyes and brains.”
The DiCarlo lab finds that a recurrent architecture helps both artificial intelligence and our brains to better identify objects.
Method could illuminate features of biological tissues in low-exposure images.
System allows drones to cooperatively explore terrain under thick forest canopies where GPS signals are unreliable.
Computer model could improve human-machine interaction, provide insight into how children learn language.
Advances in computer vision inspired by human physiological and anatomical constraints are improving pattern completion in machines.
CSAIL system could help athletes, dancers, and others better analyze how they move.
Model learns to pick out objects within an image, using spoken descriptions.
Machine learning system efficiently recognizes activities by observing how objects change in only a few key frames.
Breakthrough CSAIL system suggests robots could one day see well enough to be useful in people’s homes and offices.
AeroAstro grad students win multi-university challenge by demonstrating the utility of machine vision in a complex system.
Given a video of a musical performance, CSAIL’s deep-learning system can make individual instruments louder or softer.
Activity simulator could eventually teach robots tasks like making coffee or setting the table.
With new system, drones navigate through an empty room, avoiding crashes while “seeing” a virtual world.