Paul Carter of BBC’s Click highlights CSAIL research on teaching a robot to predict what an object feels like just by looking at it. This will ultimately help the robot “grip better when lifting things like the handle of a mug,” says Carter.
Gizmodo reporter Victoria Song writes that MIT researchers have developed a new system that can teach a machine how to make pizza by examining a photograph. “The researchers set out to teach machines how to recognize different steps in cooking by dissecting images of pizza for individual ingredients,” Song explains.
Using a tactile sensor and web camera, MIT researchers developed an AI system that allows robots to predict what something feels like just by looking at it, reports David Williams for CNN. “This technology could be used to help robots figure out the best way to hold an object just by looking at it,” explains Williams.
Forbes contributor Charles Towers-Clark explores how CSAIL researchers have developed a database of tactile and visual information that could be used to allow robots to infer how different objects look and feel. “This breakthrough could lead to far more sensitive and practical robotic arms that could improve any number of delicate or mission-critical operations,” Towers-Clark writes.
MIT researchers have created a new system that enables robots to identify objects using tactile information, reports Darrell Etherington for TechCrunch. “This type of AI also could be used to help robots operate more efficiently and effectively in low-light environments without requiring advanced sensors,” Etherington explains.
Fast Company reporter Michael Grothaus writes that CSAIL researchers have developed a new system that allows robots to determine what objects look like by touching them. “The breakthrough could ultimately help robots become better at manipulating objects,” Grothaus explains.
TechCrunch reporter Darrell Etherington writes that MIT researchers have developed a system that can predict a person’s trajectory. The tool could allow “robots that typically freeze in the face of anything even vaguely resembling a person walking in their path to continue to operate and move around the flow of human foot traffic.”
During a Times Higher Education summit, Prof. Shigeru Miyagawa, senior associate dean for open learning, emphasized the importance of integrating ethical considerations into AI education, reports Paul Basken. “My plan is to educate a new generation of young people who will have intuition behind computational thinking,” says Miyagawa.
Mashable highlights how MIT researchers have developed a new system of computationally simple robots, inspired by biological cells, that can connect in large groups to move around, transport objects and complete tasks. Mashable explains that robots made up of simple components “could enable more scalable, flexible and robust systems.”
Motherboard reporter Rob Dozier writes about Glitch, an MIT startup that uses machine learning to design clothing. “These tools are meant to empower human designers,” explains graduate student Emily Salvador. “What I think is really cool about these creative-focused AI tools is that there’s still this really compelling need for a human to intervene with the algorithm.”
Researchers from MIT and a number of other institutions have found that grammar-enriched deep learning models had a better understanding of key linguistic rules, reports Kyle Wiggers for VentureBeat. The researchers found that an AI system provided with knowledge of basic grammar “consistently performed better than systems trained on little-to-no grammar using a fraction of the data, and that it could comprehend ‘fairly sophisticated’ rules.”
In this video, Mashable highlights how CSAIL researchers have developed a new system that can help humans lift heavy objects by mirroring their movements. The system uses sensors that monitor muscle activity and detect changes in the user’s arm.
In an article for Gizmodo, Dell Cameron writes that graduate student Joy Buolamwini testified before Congress about the inherent biases of facial recognition systems. Buolamwini’s research on face recognition tools “identified a 35-percent error rate for photos of darker skinned women, as opposed to database searches using photos of white men, which proved accurate 99 percent of the time.”
Wired reporter Lily Hay Newman highlights graduate student Joy Buolamwini’s Congressional testimony about the bias of facial recognition systems. “New research is showing bias in the use of facial analysis technology for health care purposes, and facial recognition is being sold to schools,” said Buolamwini. “Our faces may well be the final frontier of privacy.”
Popular Science reporter Rob Verger writes that MIT researchers have developed a new mechanical system that can help humans lift heavy objects. “Overall the system aims to make it easier for people and robots to work together as a team on physical tasks,” explains graduate student Joseph DelPreto.