

Brain and cognitive sciences


Displaying 151–165 of 507 news clips related to this topic.

GBH

Graduate student Olumakinde “Makinde” Ogunnaike and Josh Sariñana PhD ’11 join Boston Public Radio to discuss The Poetry of Science, an initiative that brought together artists and scientists of color to help translate complex scientific research through art and poetry. “Science is often a very difficult thing to penetrate,” says Sariñana. “I thought poetry would be a great way to translate the really abstract concepts into more of an emotional complexity of who the scientists actually are.”

The Boston Globe

Writing for The Boston Globe, Prof. Li-Huei Tsai underscores the need for the Alzheimer’s research community to “acknowledge the gaps in the current approach to curing the disease and make significant changes in how science, technology, and industry work together to meet this challenge.” Tsai adds: “With a more expansive mode of thinking, we can bridge the old innovation gaps and cross new valleys of discovery to deliver meaningful progress toward the end of Alzheimer’s.”

Wired

Wired reporter Adam Rogers spotlights Prof. Nancy Kanwisher’s research on the fusiform face area, a brain region that becomes active when a person sees a face, and explores what would happen if the area were intentionally activated. Kanwisher’s experiment “certainly suggested the possibility, the power, of jacking directly into the brain,” writes Rogers.

Scientific American

Scientific American reporter Dana G. Smith spotlights how Prof. Rebecca Saxe and her colleagues have found evidence that regions of the infant visual cortex show preferences for faces, bodies and scenes. “The big surprise of these results is that specialized area for seeing faces that some people speculated took years to develop: we see it in these babies who are, on average, five or six months old,” Saxe tells Smith.

Naked Scientists

Naked Scientists podcaster Verner Viisainen spotlights how MIT researchers studied vector-based navigation in humans. “What we discovered is actually that we don’t follow the shortest path but actually follow a different kind of optimization criteria which is based on angular deviation,” says Prof. Carlo Ratti.

Popular Science

Popular Science reporter Charlotte Hu writes that MIT researchers have simulated an environment in which socially aware robots are able to choose whether they want to help or hinder one another, as part of an effort to improve human-robot interactions. “If you look at the vast majority of what someone says during their day, it has to do with what other [people] want, what they think, getting what that person wants out of another [person],” explains research scientist Andrei Barbu. “And if you want to get to the point where you have a robot inside someone’s home, understanding social interactions is incredibly important.”

TechCrunch

MIT researchers have developed a new machine learning system that can help robots learn to perform certain social interactions, reports Brian Heater for TechCrunch. “Researchers conducted tests in a simulated environment, to develop what they deemed ‘realistic and predictable’ interactions between robots,” writes Heater. “In the simulation, one robot watches another perform a task, attempts to determine the goal and then either attempts to help or hamper it in that task.”

TechCrunch

TechCrunch writer Devin Coldewey reports on ReSkin, an AI project focused on developing a new electronic skin and fingertip meant to expand robots’ sense of touch. The project is rooted in GelSight, a technology developed by MIT researchers that allows robots to gauge an object’s hardness.

Axios

Axios reporter Alison Snyder writes that a new study by MIT researchers demonstrates how AI algorithms could provide insight into the human brain’s processing abilities. “Predicting the next word someone might say — like AI algorithms now do when you search the internet or text a friend — may be a key part of the human brain's ability to process language,” writes Snyder.

Scientific American

Using an integrative modeling technique, MIT researchers compared dozens of machine learning algorithms to brain scans as part of an effort to better understand how the brain processes language. The findings suggest that “neural networks and computational science might, in fact, be critical tools in providing insight into the great mystery of how the brain processes information of all kinds,” writes Anna Blaustein for Scientific American.

National Public Radio (NPR)

Prof. Mark Bear speaks with NPR’s Jon Hamilton about how injecting tetrodotoxin, a paralyzing nerve toxin found in puffer fish, could allow the brain to rewire in a way that restores vision and helps adults with amblyopia, or "lazy eye." Bear explains: “Unexpectedly, in many cases vision recovered in the amblyopic eye, showing that that plasticity could be restored even in the adult.”

NPR

NPR’s Jon Hamilton spotlights Prof. Li-Huei Tsai’s work developing a noninvasive technique that uses light and sound to boost gamma waves, potentially slowing the progression of Alzheimer’s disease. "This is completely noninvasive and could really change the way Alzheimer's disease is treated," Tsai says.

Scientific American

Writing for Scientific American, Pamela Feliciano spotlights a study by Prof. Pawan Sinha that examined the predictive responses of people with autism. Sinha found that people with ASD responded very differently than those without ASD to a highly regular sequence of tones played on a metronome. While people without ASD habituate to the sequence of regular tones, people with ASD do not acclimate to the sounds over time.

The Wall Street Journal

Wall Street Journal reporters Angus Loten and Kevin Hand spotlight how MIT researchers are developing robots with humanlike senses that will be able to assist with a range of tasks. GelSight, a technology developed by CSAIL researchers, outfits robot arms with a small gel pad that can be pressed into objects to sense their size and texture, while another team of researchers is “working to bridge the gap between touch and sight by training an AI system to predict what a seen object feels like and what a felt object looks like.”

New Scientist

In an interview with Clare Wilson of New Scientist, Prof. Ed Boyden, a co-inventor of optogenetics, discusses how the technique was used to help partially restore vision for a blind patient. “It’s exciting to see the first publication on human optogenetics,” says Boyden.