

Center for Brains, Minds and Machines


Scientific American

Researchers from MIT and elsewhere have developed a new AI technique for teaching robots to pack items into a limited space while adhering to a range of constraints, reports Nick Hilden for Scientific American. “We want to have a learning-based method to solve constraints quickly because learning-based [AI] will solve faster, compared to traditional methods,” says graduate student Zhutian “Skye” Yang.

CBS

Scientists at MIT have found that specific neurons in the human brain light up whenever we see images of food, reports Dr. Mallika Marshall for CBS Boston. “The researchers now want to explore how people’s responses to certain foods might differ depending on their personal preferences, likes and dislikes and past experiences,” writes Marshall.

The Guardian

Researchers at MIT have discovered that pictures of food appear to stimulate strong reactions among specific sets of neurons in the human brain, a trait that could have evolved due to the importance of food for humans, reports Sascha Pare for The Guardian. “The researchers posit these neurons have gone undetected because they are spread across the other specialized clusters for faces, places, bodies and words, rather than concentrated in one region,” writes Pare.

The Daily Beast

MIT researchers have developed a new computational model that could be used to help explain differences in how neurotypical adults and adults with autism recognize emotions via facial expressions, reports Tony Ho Tran for The Daily Beast. “For visual behaviors, the study suggests that [the IT cortex] plays a strong role,” says research scientist Kohitij Kar. “But it might not be the only region. Other regions like amygdala have been implicated strongly as well. But these studies illustrate how having good [AI models] of the brain will be key to identifying those regions as well.”

The New York Times

In an article for The New York Times exploring whether humans are the only species able to comprehend geometry, Siobhan Roberts spotlights Prof. Josh Tenenbaum’s approach to exploring how humans can extract so much information from minimal data, time, and energy. “Instead of being inspired by simple mathematical ideas of what a neuron does, it’s inspired by simple mathematical ideas of what thinking is,” says Tenenbaum.

Smithsonian Magazine

Smithsonian Magazine reporter Margaret Osborne spotlights MIT researchers who have discovered that specific neurons in the brain respond to singing but not to other sounds such as road traffic, instrumental music and speaking. “This work suggests there’s a distinction in the brain between instrumental music and vocal music,” says former MIT postdoc Sam Norman-Haignere.

Scientific American

Scientific American reporter Dana G. Smith spotlights how Prof. Rebecca Saxe and her colleagues have found evidence that regions of the infant visual cortex show preferences for faces, bodies and scenes. “The big surprise of these results is that specialized area for seeing faces that some people speculated took years to develop: we see it in these babies who are, on average, five or six months old,” Saxe tells Smith.

Naked Scientists

Naked Scientists podcaster Verner Viisainen spotlights how MIT researchers studied vector-based navigation in humans. “What we discovered is actually that we don’t follow the shortest path but actually follow a different kind of optimization criteria which is based on angular deviation,” says Prof. Carlo Ratti.

Popular Science

Popular Science reporter Charlotte Hu writes that MIT researchers have simulated an environment in which socially aware robots are able to choose whether they want to help or hinder one another, as part of an effort to help improve human-robot interactions. “If you look at the vast majority of what someone says during their day, it has to do with what other [people] want, what they think, getting what that person wants out of another [person],” explains research scientist Andrei Barbu. “And if you want to get to the point where you have a robot inside someone’s home, understanding social interactions is incredibly important.”

TechCrunch

MIT researchers have developed a new machine learning system that can help robots learn to perform certain social interactions, reports Brian Heater for TechCrunch. “Researchers conducted tests in a simulated environment, to develop what they deemed ‘realistic and predictable’ interactions between robots,” writes Heater. “In the simulation, one robot watches another perform a task, attempts to determine the goal and then either attempts to help or hamper it in that task.”

Scientific American

Using an integrative modeling technique, MIT researchers compared dozens of machine learning algorithms to brain scans as part of an effort to better understand how the brain processes language. The researchers found that “neural networks and computational science might, in fact, be critical tools in providing insight into the great mystery of how the brain processes information of all kinds,” writes Anna Blaustein for Scientific American.

TechCrunch

TechCrunch reporter Taylor Hatmaker writes that MIT researchers will lead a new NSF-funded research institute focused on AI and physics.

The Wall Street Journal

Researchers from MIT's Laboratory for Nuclear Science will lead a new research institute focused on advancing knowledge of physics and AI, reports Jared Council for The Wall Street Journal. The new research institute is part of an effort “designed to ensure the U.S. remains globally competitive in AI and quantum technologies.”

United Press International (UPI)

A new study by MIT researchers finds that the inferior temporal cortex region of the brain has been repurposed to recognize letters and words, providing humans with the ability to read, reports Brooks Hays for UPI.