System automatically writes optimized algorithms to encrypt data in Google Chrome browsers and web applications.
MIT CSAIL system can learn to see by touching and feel by seeing, suggesting a future where robots can more easily grasp and recognize objects.
Researchers combine deep learning and symbolic reasoning for a more flexible way of teaching computers to program.
A new tool for predicting a person’s movement trajectory may help humans and robots work together in close proximity.
Simulations suggest photonic chip could run optical neural networks 10 million times more efficiently than its electrical counterparts.
Fleet of “roboats” could collect garbage or self-assemble into floating structures in Amsterdam’s many canals.
Interactive tool lets users see and control how automated model searches work.
Image-translation pioneer discusses the past, present, and future of generative adversarial networks, or GANs.
Researchers submit deep learning models to a set of psychology tests to see which ones grasp key linguistic rules.
Signals help neural network identify objects by touch; system could aid robotics and prosthetics design.
Autonomous control system “learns” to use simple maps and image data to navigate new, complex routes.
CSAIL system can mirror a user’s motions and follow nonverbal commands by monitoring arm muscles.
“Metasurfaces” that manipulate light at tiny scales could find uses in cellphone lenses, smart-car sensors, and optical fibers.