
Topic: Machine learning


Displaying 796-810 of 830 news clips related to this topic.

Boston Globe

Researchers from MIT and IBM are joining forces to develop systems that enable machines to recognize images and sounds as people do, reports Hiawatha Bray for The Boston Globe. James DiCarlo, head of the Department of Brain and Cognitive Sciences, notes that as researchers build systems that can interpret events, “we learn ways our own brains might be doing that.”

Fox News

MIT researchers are exploring the development of autonomous boats and floating vessels, writes Stephanie Mlot in a Fox News article. The research, which is being conducted in collaboration with the Amsterdam Institute for Advanced Metropolitan Solutions, “aims to serve as an inspiration for urban areas around the globe.”

New York Times

Writing for The New York Times, Steve Lohr features Prof. Tomaso Poggio’s work “building computational models of the visual cortex of the brain, seeking to digitally emulate its structure, even how it works and learns from experience.” Lohr notes that efforts like Poggio’s could lead to breakthroughs in computer vision and machine learning. 

The Wall Street Journal

Prof. Ramesh Raskar has been awarded the Lemelson-MIT Prize for his “trailblazing work which includes the co-invention of an ultra-fast imaging camera that can see around corners, low-cost eye-care solutions and a camera that enables users to read the first few pages of a book without opening the cover,” writes Krishna Pokharel for The Wall Street Journal.

Popular Science

MIT researchers have developed a new algorithm to create videos from still images, writes G. Clay Whittaker for Popular Science. “The system ‘learns’ types of videos (beach, baby, golf swing...) and, starting from still images, replicates the movements that are most commonly seen in those videos,” Whittaker explains.

Boston Globe

MIT researchers have developed a database of annotated English words written by non-native English speakers, reports Kevin Hartnett for The Boston Globe. The database will provide “a platform for the study of learner English and also make it easier to develop technology like better search engines that supports non-native speakers.”

HuffPost

In an article for The Huffington Post about why virtual assistants have trouble understanding accents, Philip Ellis highlights how researchers from MIT have compiled a database of written English composed by non-native speakers. Ellis explains that the aim is “to create a richer context for machine learning” systems.

Boston Globe

CSAIL researchers recently presented an algorithm that teaches computers to predict sounds, writes Kevin Hartnett for The Boston Globe. The ability to predict sounds will help robots successfully navigate the world and “make sense of what’s in front of them and figure out how to proceed,” writes Hartnett.

Forbes

Associate Prof. Scott Aaronson answers the question “Is machine learning currently overhyped?” for Forbes. “I suppose it’s less interesting to me to look at the sheer amount of machine learning hype than at its content. Almost everyone in the 1950s knew that computers were going to be important, but they were often wildly wrong about the reasons,” he writes.

Wired

CSAIL researchers have shown that artificially intelligent systems can learn to predict human behavior by watching TV shows and video clips, writes Tim Moynihan for Wired. Researchers say these findings could be used to analyze hospital video feeds to alert emergency responders, or to help robots better respond to human actions.

Forbes

CSAIL researchers used videos of popular TV shows to train an algorithm to predict how two people will greet one another. “[T]he algorithm got it right more than 43 percent of the time, as compared to the shoddier 36 percent accuracy achieved by algorithms without the TV training,” notes Janet Burns in Forbes.

Popular Science

Mary Beth Griggs writes for Popular Science that CSAIL researchers have created an algorithm that can predict human interaction. Griggs explains that the algorithm could “lead to artificial intelligence that is better able to react to humans or even security cameras that could alert authorities when people are in need of help.”

CBC News

Dan Misener writes for CBC News that CSAIL researchers have developed an algorithm that can predict interactions between two people. PhD student Carl Vondrick explains that the algorithm is "learning, for example, that when someone's hand is outstretched, that means a handshake is going to come." 

CNN

CSAIL researchers have trained a deep-learning program to predict interactions between two people, writes Hope King for CNN. “Ultimately, MIT's research could help develop robots for emergency response, helping the robot assess a person's actions to determine if they are injured or in danger,” King explains. 

Wired

In an article for Wired, Tim Moynihan writes that a team of CSAIL researchers has created a machine-learning system that can produce sound effects for silent videos. The researchers hope that the system could be used to “help robots identify the materials and physical properties of an object by analyzing the sounds it makes.”