Topic

MIT Schwarzman College of Computing

Displaying 1 - 15 of 383 news clips related to this topic.

Smithsonian Magazine

MIT researchers have used advancements in machine learning and computing to help decode whale vocalizations, reports Sarah Kuta of Smithsonian Magazine. “If researchers knew what sperm whales were saying, they might be able to come up with more targeted approaches to protecting them,” Kuta explains. “In addition, drawing parallels between whales and humans via language might help engage the broader public in conservation efforts.”

Reuters

A new analysis of years of vocalizations by sperm whales in the eastern Caribbean has provided a fuller understanding of how whales communicate using codas, reports Will Dunham of Reuters. Graduate student Pratyusha Sharma explained: “The research shows that the expressivity of sperm whale calls is much larger than previously thought.”

New Scientist

New Scientist reporter Clare Wilson writes that a new analysis by MIT researchers of thousands of exchanges made by east Caribbean sperm whales demonstrates a communication system more advanced than previously thought. “It’s really extraordinary to see the possibility of another species on this planet having the capacity for communication,” says Prof. Daniela Rus.

TechCrunch

Researchers from MIT and elsewhere have uncovered a phonetic alphabet used by sperm whales, which provides “key breakthroughs in our understanding of cetacean communication,” reports Brian Heater for TechCrunch. “This phonetic alphabet makes it possible to systematically explain the observed variability in the coda structure,” says Prof. Daniela Rus, director of CSAIL. “We believe that it’s possible that this is the first instance outside of human language where a communication provides an example of the linguistic concept of duality of patterning. That refers to a set of individually meaningless elements that can be combined to form larger meaningful units, sort of like combining syllables into words.”
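
As a loose, purely illustrative sketch of the “duality of patterning” idea Rus describes (not the researchers’ model or data; every feature name and value below is hypothetical), a coda can be thought of as a combination of a few individually meaningless features, and an exchange as a sequence of codas:

```python
from itertools import product

# Hypothetical feature inventories, loosely inspired by the study's description
# of codas varying in rhythm, tempo, rubato and ornamentation.
# The values below are illustrative placeholders, not real measurements.
rhythms = ["r1", "r2", "r3"]
tempos = ["slow", "medium", "fast"]
rubato = [False, True]          # gradual stretching of inter-click intervals
ornamentation = [False, True]   # an extra click appended to the coda

# Each coda is a combination of individually meaningless features...
codas = list(product(rhythms, tempos, rubato, ornamentation))
print(f"{len(codas)} distinct coda types from 4 small feature sets")

# ...and codas can in turn be strung together into longer exchanges, the way
# syllables combine into words -- the "larger meaningful units" Rus describes.
exchange = [codas[0], codas[5], codas[5], codas[17]]
print("example exchange:", exchange)
```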

Associated Press

Associated Press reporter Maria Cheng spotlights a new study by MIT researchers that identifies a “phonetic alphabet” used by whales when communicating. “It doesn’t appear that they have a fixed set of codas,” says graduate student Pratyusha Sharma. “That gives the whales access to a much larger communication system.” 

NPR

Using machine learning, MIT researchers have discovered that sperm whales use “a bigger lexicon of sound patterns” that indicates a far more complex communication style than previously thought, reports Lauren Sommer for NPR. “Our results show there is much more complexity than previously believed and this is challenging the current state of the art or state of beliefs about the animal world,” says Prof. Daniela Rus, director of CSAIL.

New York Times

MIT researchers have discovered that sperm whales use a “much richer set of sounds than previously known, which they call a ‘sperm whale phonetic alphabet,’” reports Carl Zimmer for The New York Times. “The researchers identified 156 different codas, each with distinct combinations of tempo, rhythm, rubato and ornamentation,” Zimmer explains. “This variation is strikingly similar to the way humans combine movements in our lips and tongue to produce a set of phonetic sounds.”

USA Today

Prof. Yoon Kim speaks with USA Today reporter Eve Chen about how AI can be used in everyday tasks such as travel planning. “AI is generally everywhere,” says Kim. “For example, when you search for something – let’s say you search for something on TripAdvisor, Hotels.com – there is likely an AI-based system that gives you a list of matches based on your query.” 

ShareAmerica

ShareAmerica reporter Lauren Monsen spotlights Prof. Dina Katabi for her work in advancing medicine with artificial intelligence. “Katabi develops AI tools to monitor patients’ breathing patterns, heart rate, sleep quality, and movements,” writes Monsen. “This data informs treatment for patients with diseases such as Parkinson’s, Alzheimer’s, Crohn’s, and ALS (amyotrophic lateral sclerosis), as well as Rett syndrome, a rare neurological disorder.”

Interesting Engineering

MIT researchers have developed a machine-learning accelerator chip to make health-monitoring apps more secure, reports Aman Tripathi for Interesting Engineering. “The researchers subjected this new chip to intensive testing, simulating real-world hacking attempts, and the results were impressive,” explains Tripathi. “Even after millions of attempts, they were unable to recover any private information. In contrast, stealing data from an unprotected chip took only a few thousand samples.”

Scientific American

Scientific American reporter Riis Williams explores how MIT researchers created “smart gloves” that have tactile sensors woven into the fabric to help teach piano and make other hands-on activities easier. “Hand-based movements like piano playing are normally really subjective and difficult to record and transfer,” explains graduate student Yiyue Luo. “But with these gloves we are actually able to track one person’s touch experience and share it with another person to improve their tactile learning process.”
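
The gloves are custom hardware, but the record-and-share idea Luo describes can be sketched abstractly: capture one wearer’s tactile readings over time, then replay them to a learner. The array shape, sensor grid, and function names below are hypothetical placeholders, not the MIT system:

```python
import numpy as np

# Hypothetical tactile recording: T time steps of a 16x16 pressure grid,
# standing in for the sensors woven into the glove fabric.
T, H, W = 200, 16, 16
rng = np.random.default_rng(0)
teacher_touch = rng.random((T, H, W))  # placeholder for a pianist's recorded touch

def replay(frames, send_haptic_frame):
    """Stream recorded pressure frames to a learner's glove, frame by frame."""
    for frame in frames:
        send_haptic_frame(frame)  # e.g. drive vibrotactile actuators (hypothetical)

# A trivial 'actuator' that just reports the strongest contact point per frame.
replay(teacher_touch[:3],
       lambda f: print("peak pressure at", np.unravel_index(f.argmax(), f.shape)))
```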

Forbes

Prof. Roger Levy, Prof. Tracy Slatyer and Prof. Martin Wainwright are among the 2024 John Simon Guggenheim Foundation Fellowship recipients, reports Michael T. Nietzel for Forbes. “The new fellows represent 52 scholarly disciplines and artistic fields and are affiliated with 84 academic institutions,” writes Nietzel.

The Boston Globe

Prof. Roger Levy, Prof. Tracy Slatyer and Prof. Martin Wainwright have been awarded John Simon Guggenheim Foundation Fellowships, reports Mark Feeney for The Boston Globe. A Guggenheim fellowship “is one of the most sought-after honors in academe, the arts, and culture,” explains Feeney. “It helps underwrite a proposed art or scholarly project.”

TechCrunch

Researchers at MIT have found that large language models often retrieve stored facts using simple linear functions, reports Kyle Wiggers for TechCrunch. “Even though these models are really complicated, nonlinear functions that are trained on lots of data and are very hard to understand, there are sometimes really simple mechanisms working inside them,” writes Wiggers.
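
A toy linear probe makes the idea concrete: even if a model is nonlinear overall, a simple least-squares linear map can sometimes recover how it represents a fact. The sketch below uses random placeholder “hidden states” and “fact embeddings”; it is not the authors’ code or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for hidden states of subject tokens (n examples, d_model dims)
# and for the model's representation of the retrieved attribute (d_out dims).
n, d_model, d_out = 500, 256, 64
H = rng.normal(size=(n, d_model))

# Pretend the model really does use a (noisy) linear mechanism internally,
# so a linear probe should recover it well.
W_true = rng.normal(size=(d_model, d_out)) / np.sqrt(d_model)
Y = H @ W_true + 0.01 * rng.normal(size=(n, d_out))

# Fit an affine map Y ~ H W + b by least squares -- the "really simple
# mechanism" a linear probe looks for inside a nonlinear model.
H_aug = np.hstack([H, np.ones((n, 1))])
W_hat, *_ = np.linalg.lstsq(H_aug, Y, rcond=None)

resid = Y - H_aug @ W_hat
print("relative reconstruction error:", np.linalg.norm(resid) / np.linalg.norm(Y))
```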

The Economist

Prof. Pulkit Agrawal and graduate student Gabriel Margolis speak with The Economist’s Babbage podcast about the simulation research and technology used to develop intelligent machines. “Simulation is a digital twin of reality,” says Agrawal. “But simulation still doesn’t have data; it is a digital twin of the environment. So, what we do is something called reinforcement learning, which is learning by trial and error, which means that we can try out many different combinations.”
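
Agrawal’s “learning by trial and error” can be illustrated with a minimal reinforcement-learning loop in which an agent repeatedly tries actions in a simulated environment and keeps what earns reward. The bandit-style sketch below is a generic illustration with made-up numbers, not the lab’s locomotion pipeline:

```python
import random

# A toy simulated "environment": each action has an unknown success probability.
true_success = [0.2, 0.5, 0.8]   # hypothetical; the agent never sees these

counts = [0] * 3   # how often each action was tried
wins = [0] * 3     # how often it was rewarded

random.seed(0)
for trial in range(5000):
    # Trial and error: mostly exploit the best estimate, sometimes explore.
    if random.random() < 0.1:
        a = random.randrange(3)
    else:
        estimates = [wins[i] / counts[i] if counts[i] else 1.0 for i in range(3)]
        a = max(range(3), key=lambda i: estimates[i])
    reward = random.random() < true_success[a]   # the simulator plays out the attempt
    counts[a] += 1
    wins[a] += reward

print("estimated success rates:",
      [round(wins[i] / max(counts[i], 1), 2) for i in range(3)])
```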