
Topic

McGovern Institute


Displaying 1 - 15 of 129 news clips related to this topic.

HealthDay News

A study by MIT researchers finds that the screening test used for autism creates a gender gap that impedes diagnosis and treatment for women and girls, reports Sydney Murphy for HealthDay. The researchers found that “a screening test often used to decide who can take part in autism studies seems to exclude a much higher percentage of women than men,” writes Murphy.

The Hill

A new study by MIT researchers finds that women being excluded from studies on autism can hinder diagnoses and the development of useful interventions for women and girls, reports Gianna Melillo for The Hill. “Female diagnoses could be missed altogether and an already small pool of study subjects is further reduced,” writes Melillo.

Economist

Prof. Edward Boyden has developed a new imaging technique called expansion-revealing microscopy that can reveal tiny protein structures in tissues, reports The Economist. “Already his team at MIT has used it to reveal detail in synapses, the nanometer-sized junctions between nerve cells, and also to shed light on the mechanisms at play in Alzheimer’s disease, revealing occasional spirals of amyloid-beta protein around axons, which are the threadlike parts of nerve cells that carry electrical impulses.”

The Guardian

Researchers at MIT have discovered that pictures of food appear to stimulate strong reactions among specific sets of neurons in the human brain, a trait that could have evolved due to the importance of food for humans, reports Sascha Pare for The Guardian. “The researchers posit these neurons have gone undetected because they are spread across the other specialized cluster for faces, places, bodies and words, rather than concentrated in one region,” writes Pare.

The Conversation

Graduate student Anna Ivanova and University of Texas at Austin Professor Kyle Mahowald, along with Professors Evelina Fedorenko, Joshua Tenenbaum and Nancy Kanwisher, write for The Conversation that even though AI systems may be able to use language fluently, it does not mean they are sentient, conscious or intelligent. “Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought,” they write.

The Daily Beast

MIT researchers have developed a new computational model that could be used to help explain differences in how neurotypical adults and adults with autism recognize emotions via facial expressions, reports Tony Ho Tran for The Daily Beast. “For visual behaviors, the study suggests that [the IT cortex] plays a strong role,” says research scientist Kohitij Kar. “But it might not be the only region. Other regions like amygdala have been implicated strongly as well. But these studies illustrate how having good [AI models] of the brain will be key to identifying those regions as well.”

Smithsonian Magazine

Smithsonian Magazine reporter Margaret Osborne spotlights MIT researchers who have discovered that specific neurons in the brain respond to singing, but not sounds such as road traffic, instrumental music and speaking. “This work suggests there’s a distinction in the brain between instrumental music and vocal music,” says former MIT postdoc Sam Norman-Haignere.

Science

Prof. Mircea Dincă, Prof. Evelyn Ning-Yi Wang, Prof. Ian W. Hunter, Prof. Guoping Feng, and Senior Research Scientist David H. Shoemaker were elected as Fellows of AAAS for their efforts on behalf of the advancement of science and its applications to better serve society, reports Science.

NPR

A new study by MIT researchers provides evidence that babies and toddlers understand people have a close relationship if they are willing to share saliva via sharing food or kissing, reports Nell Greenfieldboyce for NPR. "From a really young age, without much experience at all with these things, infants are able to understand not only who is connected but how they are connected," says postdoc Ashley Thomas. "They are able to distinguish between different kinds of cooperative relationships."

Stat

MIT scientists have discovered that infants use saliva sharing as a cue in distinguishing close relationships, reports Andrew Joseph for STAT. “Saliva-sharing interactions provide externally observable cues of thick relationships, and young humans can use these cues to make predictions about subsequent social interactions,” the researchers explain.

Science

Science reporter Bridget Alex spotlights a new study by MIT researchers that finds children as young as 8 months old can infer the social significance of swapping saliva with those they are closely bonded to. This is a “big step in this new science of what preverbal infants already know about human sociality,” explains Prof. Alan Fiske of the University of California, Los Angeles.

Scientific American

Scientific American reporter Dana G. Smith spotlights how Prof. Rebecca Saxe and her colleagues have found evidence that regions of the infant visual cortex show preferences for faces, bodies and scenes. “The big surprise of these results is that specialized area for seeing faces that some people speculated took years to develop: we see it in these babies who are, on average, five or six months old,” Saxe tells Smith.

Popular Science

Popular Science reporter Charlotte Hu writes that MIT researchers have simulated an environment in which socially aware robots are able to choose whether they want to help or hinder one another, as part of an effort to help improve human-robot interactions. “If you look at the vast majority of what someone says during their day, it has to do with what other [people] want, what they think, getting what that person wants out of another [person],” explains research scientist Andrei Barbu. “And if you want to get to the point where you have a robot inside someone’s home, understanding social interactions is incredibly important.”

TechCrunch

MIT researchers have developed a new machine learning system that can help robots learn to perform certain social interactions, reports Brian Heater for TechCrunch. “Researchers conducted tests in a simulated environment, to develop what they deemed ‘realistic and predictable’ interactions between robots,” writes Heater. “In the simulation, one robot watches another perform a task, attempts to determine the goal and then either attempts to help or hamper it in that task.”

Axios

Axios reporter Alison Snyder writes that a new study by MIT researchers demonstrates how AI algorithms could provide insight into the human brain’s processing abilities. “Predicting the next word someone might say — like AI algorithms now do when you search the internet or text a friend — may be a key part of the human brain's ability to process language,” writes Snyder.