Teaching artificial intelligence to create visuals with more common sense
An MIT/IBM system could help artists and designers make quick tweaks to visuals while also helping researchers identify “fake” images.
A general-purpose programming language works for computer vision, robotics, statistics, and more.
Researchers combine deep learning and symbolic reasoning for a more flexible way of teaching computers to program.
Image-translation pioneer discusses the past, present, and future of generative adversarial networks, or GANs.
Researchers submit deep learning models to a set of psychology tests to see which ones grasp key linguistic rules.
An MIT CSAIL project shows that the neural nets we typically train contain smaller “subnetworks” that can learn just as well, and often faster.
Researchers unveil a tool for making compressed deep learning models less vulnerable to attack.
Model improves a robot’s ability to mold materials into shapes and interact with liquids and solid objects.
Researchers combine statistical and symbolic artificial intelligence techniques to speed learning and improve transparency.
Research projects show creative ways MIT students are connecting computing to other fields.
Computer model could improve human-machine interaction, provide insight into how children learn language.
Community event generates ideas for sparking innovative and ambitious plans to advance research in human and machine intelligence.
Inaugural director of The Quest discusses what's been accomplished since last spring's launch and what is on the horizon.
Model learns to pick out objects within an image, using spoken descriptions.
Up to 100 Quest-funded UROP projects aim to crack the code of human and machine intelligence.