The MIT mini cheetah broke a speed record after learning to adapt to difficult terrain and upping its speed, reports Rienk De Beer for Popular Mechanics.
A new study from researchers at MIT and Dartmouth suggests that the speed of automation should be halved, reports Adi Gaskell for Forbes. Their paper showed that “while investments in automation usually result in higher productivity in firms, and therefore often more employment, it can be harmful to those who are displaced, especially if they have few alternative options,” writes Gaskell.
Postdoctoral researcher Murat Onen and his colleagues have created “a nanoscale resistor that transmits protons from one terminal to another,” reports Alex Wilkins for New Scientist. “The resistor uses powerful electric fields to transport protons at very high speeds without damaging or breaking the resistor itself, a problem previous solid-state proton resistors had suffered from,” explains Wilkins.
Butlr, a spinout founded by researchers from the MIT Media Lab, is developing sensors that use body heat to estimate office occupancy, reports Kyle Wiggers for TechCrunch. The new technology “uses thermal sensing AI to provide data on space occupancy and historical activity,” writes Wiggers.
MIT researchers have developed a new system that enabled the mini robotic cheetah to learn to run, reports John Koetsier for Forbes. “Traditionally, the process that people have been using [to train robots] requires you to study the actual system and manually design models,” explains Prof. Pulkit Agrawal. “This process is good, it’s well established, but it’s not very scalable. But we are removing the human from designing the specific behaviors.”
Wall Street Journal reporter Daniela Hernandez spotlights the work of Media Lab Research Scientist Andreas Mershin in developing sensors that can detect and analyze odors. Mershin “is focusing on medical applications of olfaction technology. Inspired by dogs that have demonstrated an ability to sniff out malignancies in humans, he’s working on an artificial-intelligence odor-detection system to detect prostate cancer.”
Principal Research Scientist Leo Anthony Celi oversaw a study which found that people of color were given significantly less supplemental oxygen than white people because of inaccuracies in pulse oximeter readings, reports Nancy Lapid for Reuters. “Nurses and doctors make the wrong decisions and end up giving less oxygen to people of color because they are fooled [by incorrect readings from pulse oximeters],” says Celi.
Researchers at MIT have created a knit textile called 3DKnITS that contains pressure sensors and can be used to predict a person’s movements, reports Charlotte Hu for Popular Science. “Smart textiles that can sense how users are moving could be useful in healthcare, for example, for monitoring gait or movement after an injury,” writes Hu.
A study co-authored by MIT researchers finds that algorithms based on clinical medical notes can predict the self-identified race of a patient, reports Katie Palmer for STAT. “We’re not ready for AI — no sector really is ready for AI — until they’ve figured out that the computers are learning things that they’re not supposed to learn,” says Principal Research Scientist Leo Anthony Celi.
Ken Knowlton PhD ’62 - a pioneer in the science and art of computer graphics and the creator of some of the first computer-generated pictures, portraits and movies - died June 16 at the age of 91, reports Cade Metz for The New York Times. “While Knowlton was the only person to ever use the BEFLIX language – he and his colleagues quickly replaced it with other tools and techniques – the ideas behind this technology would eventually overhaul the movie business,” writes Metz.
CSAIL graduate student Yunzhu Li and his colleagues have trained a robot to use two metal grippers to mold letters out of play dough, reports Jeremy Hsu for New Scientist. "Li and his colleagues trained a robot to use two metal grippers to mould the approximate shapes of the letters B, R, T, X and A out of Play-Doh," explains Hsu. "The training involved just 10 minutes of randomly manipulating a block of the modelling clay beforehand, without requiring any human demonstrations."
Graduate student Anna Ivanova and University of Texas at Austin Professor Kyle Mahowald, along with Professors Evelina Fedorenko, Joshua Tenenbaum and Nancy Kanwisher, write for The Conversation that even though AI systems may be able to use language fluently, it does not mean they are sentient, conscious or intelligent. “Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought,” they write.
TechCrunch reporter Brian Heater spotlights multiple MIT research projects, including MIT Space Exploration Initiative’s TESSERAE, CSAIL’s Robocraft and the recent development of miniature flying robotic drones.
Prof. Pattie Maes and graduate students Valdemar Danry, Joanne Leong and Pat Pataranutaporn speak with Forbes reporter Stephen Ibaraki about their work in the MIT Media Lab Fluid Interfaces research group. “Their highly interdisciplinary work covering decades of MIT Lab pioneering inventions integrates human computer interaction (HCI), sensor technologies, AI / machine learning, nano-tech, brain computer interfaces, design and HCI, psychology, neuroscience and much more,” writes Ibaraki.
MIT researchers have developed a new computational model that could be used to help explain differences in how neurotypical adults and adults with autism recognize emotions via facial expressions, reports Tony Ho Tran for The Daily Beast. “For visual behaviors, the study suggests that [the IT cortex] plays a strong role,” says research scientist Kohitij Kar. “But it might not be the only region. Other regions like amygdala have been implicated strongly as well. But these studies illustrate how having good [AI models] of the brain will be key to identifying those regions as well.”