
Topic

Computer vision



Boston Herald

Researchers from MIT and Brigham and Women’s Hospital have outfitted a Boston Dynamics robotic dog with technology that enables doctors to remotely measure a patient’s vital signs, reports Rick Sobey for The Boston Herald. “Using four cameras mounted on the dog-like robot, the researchers have shown that they can measure skin temperature, breathing rate, pulse rate and blood oxygen saturation in healthy patients,” writes Sobey.

ZDNet

A new tool developed by MIT researchers sheds light on the operations of generative adversarial network models and allows users to edit these machine learning models to generate new images, reports Daphne Leprince-Ringuet for ZDNet. "The real challenge I'm trying to breach here," says graduate student David Bau, "is how to create models of the world based on people's imagination."

The Verge

Verge reporter James Vincent writes that researchers at the MIT-IBM Watson AI Lab have developed an algorithm that can transform selfies into artistic portraits. The algorithm is “trained on 45,000 classical portraits to render your face in faux oil, watercolor, or ink,” Vincent explains.

BBC

Paul Carter of BBC’s Click highlights CSAIL research to teach a robot how to feel an object just by looking at it. This will ultimately help the robot “grip better when lifting things like the handle of a mug,” says Carter.

Gizmodo

Gizmodo reporter Victoria Song writes that MIT researchers have developed a new system that can teach a machine how to make pizza by examining a photograph. “The researchers set out to teach machines how to recognize different steps in cooking by dissecting images of pizza for individual ingredients,” Song explains.

CNN

Using a tactile sensor and web camera, MIT researchers developed an AI system that allows robots to predict what something feels like just by looking at it, reports David Williams for CNN. “This technology could be used to help robots figure out the best way to hold an object just by looking at it,” explains Williams.

Forbes

Forbes contributor Charles Towers-Clark explores how CSAIL researchers have developed a database of tactile and visual information that could be used to allow robots to infer how different objects look and feel. “This breakthrough could lead to far more sensitive and practical robotic arms that could improve any number of delicate or mission-critical operations,” Towers-Clark writes.

TechCrunch

MIT researchers have created a new system that enables robots to identify objects using tactile information, reports Darrell Etherington for TechCrunch. “This type of AI also could be used to help robots operate more efficiently and effectively in low-light environments without requiring advanced sensors,” Etherington explains.

Fast Company

Fast Company reporter Michael Grothaus writes that CSAIL researchers have developed a new system that allows robots to determine what objects look like by touching them. “The breakthrough could ultimately help robots become better at manipulating objects,” Grothaus explains.

Gizmodo

Gizmodo reporter Andrew Liszewski writes that MIT researchers have created an algorithm that can automatically fix warped faces in wide-angle shots without impacting the rest of the photo. Liszewski writes that the tool could “be integrated into a camera app and applied to wide angle photos on the fly as the algorithm is fast enough on modern smartphones to provide almost immediate results.”

Wired

Wired reporter Lily Hay Newman highlights graduate student Joy Buolamwini’s Congressional testimony about the bias of facial recognition systems. “New research is showing bias in the use of facial analysis technology for health care purposes, and facial recognition is being sold to schools,” said Buolamwini. “Our faces may well be the final frontier of privacy.” 

Science

MIT researchers have identified a method to help AI systems avoid adversarial attacks, reports Matthew Hutson for Science. When the researchers “trained an algorithm on images without the subtle features, their image recognition software was fooled by adversarial attacks only 50% of the time,” Hutson explains. “That compares with a 95% rate of vulnerability when the AI was trained on images with both obvious and subtle patterns.”

Wired

Researchers at MIT have found that adversarial examples, a kind of optical illusion for AI that makes the system incorrectly identify an image, may not actually impact AI in the ways computer scientists have previously thought. “When algorithms fall for an adversarial example, they’re not hallucinating—they’re seeing something that people don’t,” Louise Matsakis writes for Wired.

Wired

A study by MIT researchers examining adversarial images finds that AI systems pick up on tiny details in images that are imperceptible to the human eye, which can lead to misidentification of objects, reports Louise Matsakis for Wired. “It’s not something that the model is doing weird, it’s just that you don’t see these things that are really predictive,” says graduate student Shibani Santurkar.

Boston Herald

Boston Herald reporter Jordan Graham writes that MIT researchers have developed an autonomous system that allows fleets of drones to navigate without GPS and could be used to help find missing hikers. “What we’re trying to do is automate the search part of the search-and-rescue problem with a fleet of drones,” explains graduate student Yulun Tian.