Topic

Artificial intelligence


Boston Globe

CSAIL researchers recently presented an algorithm that teaches computers to predict sounds, writes Kevin Hartnett for The Boston Globe. The ability to predict sounds will help robots successfully navigate the world and “make sense of what’s in front of them and figure out how to proceed,” writes Hartnett.

HuffPost

Scarlett Ho writes for The Huffington Post about an MIT startup, fireflies.ai, aimed at helping people foster and maintain connections. “All you have to do is forward an email from the contact you wish to keep in touch with to Fireflies, set reminders, add notes, and Fireflies will adapt over time, sending meaningful insights for you.”

Boston Globe

Hae Young Yoo writes for The Boston Globe that Ori, a spinoff out of the MIT Media Lab’s CityHome research project, “is creating furniture for urban spaces -- not just smaller pieces, but smarter ones, equipped with robotics that move on demand.”

Wired

Wired reporter Margaret Rhodes writes that Media Lab spinoff Ori is developing transformable furniture to help maximize living spaces. “With the push of a button—or, with future versions of the software, at the sound of a voice or wave of a hand—pieces of Ori furniture will slide up, down, or over, reconfiguring spaces in mere moments.” 

Wired

April Glasper writes for Wired about the robot Prof. Cynthia Breazeal created specifically for domestic purposes. Glasper explains that the robot, dubbed Jibo, “learns by listening and asking questions. Jibo uses machine learning, speech and facial recognition, and natural language processing to learn from its interactions with people.”

Wired

By training artificially intelligent systems on TV shows and video clips, CSAIL researchers have shown that such systems can learn to predict human behavior, writes Tim Moynihan for Wired. Researchers say these findings could one day be used to analyze hospital video feeds and alert emergency responders, or to help robots respond to human actions.

Forbes

CSAIL researchers used videos of popular TV shows to train an algorithm to predict how two people will greet one another. “[T]he algorithm got it right more than 43 percent of the time, as compared to the shoddier 36 percent accuracy achieved by algorithms without the TV training,” notes Janet Burns in Forbes.

Popular Science

Mary Beth Griggs writes for Popular Science that CSAIL researchers have created an algorithm that can predict human interaction. Griggs explains that the algorithm could “lead to artificial intelligence that is better able to react to humans or even security cameras that could alert authorities when people are in need of help.”

CBC News

Dan Misener writes for CBC News that CSAIL researchers have developed an algorithm that can predict interactions between two people. PhD student Carl Vondrick explains that the algorithm is "learning, for example, that when someone's hand is outstretched, that means a handshake is going to come." 

CNN

CSAIL researchers have trained a deep-learning program to predict interactions between two people, writes Hope King for CNN. “Ultimately, MIT's research could help develop robots for emergency response, helping the robot assess a person's actions to determine if they are injured or in danger,” King explains. 

Wired

In an article for Wired, Tim Moynihan writes that a team of CSAIL researchers has created a machine-learning system that can produce sound effects for silent videos. The researchers hope that the system could be used to “help robots identify the materials and physical properties of an object by analyzing the sounds it makes.”

Financial Times

Writing for the Financial Times, Clive Cookson reports that MIT researchers have developed an artificial intelligence system capable of producing realistic sounds for silent movies. Cookson explains that another application for the system could be “to help robots understand objects’ physical properties and interact better with their surroundings." 

The Washington Post

Washington Post reporter Matt McFarland writes that MIT researchers have created an algorithm that can produce realistic sounds. “The findings are an example of the power of deep learning,” explains McFarland. “With deep learning, a computer system learns to recognize patterns in huge piles of data and applies what it learns in useful ways.”

Popular Science

Popular Science reporter Mary Beth Griggs writes that MIT researchers have developed an algorithm that can learn how to predict sound. The algorithm “can watch a silent movie and create sounds that go along with the motions on screen. It's so good, it even fooled people into thinking they were actual, recorded sounds from the environment.”

The Wall Street Journal

Prof. Andrew Lo writes for The Wall Street Journal that robo advisors could prove helpful to investors if they are able to assist with managing emotions. “Instead of artificial intelligence, we should first conquer artificial emotion—by constructing algorithms that accurately capture human behavior, we can build countermeasures to protect us from ourselves."