Topic

Machine learning

Displaying 1 - 15 of 334 news clips related to this topic.

TechCrunch

A new study by MIT researchers finds people are more likely to interact with a smart device if it demonstrates more humanlike attributes, reports Brian Heater for TechCrunch. The researchers found “users are more likely to engage with both the device — and each other — more when it exhibits some form of social cues,” writes Heater. “That can mean something as simple as the face/screen of the device rotating to meet the speaker’s gaze.”

Stat

STAT reporters Katie Palmer and Casey Ross spotlight how Prof. Regina Barzilay has developed an AI tool called Mirai that can identify early signs of breast cancer from mammograms. “Mirai’s predictions were rolled into a screening tool called Tempo, which resulted in earlier detection compared to a standard annual screening,” write Palmer and Ross.

The Wall Street Journal

In an article for The Wall Street Journal about next generation technologies that can create and quantify personal health data, Laura Cooper spotlights Prof. Dina Katabi’s work developing a noninvasive device that sits in a person’s home and can help track breathing, heart rate, movement, gait, time in bed and the length and quality of sleep. The device “could be used in the homes of seniors and others to help detect early signs of serious medical conditions, and as an alternative to wearables,” writes Cooper.

IEEE Spectrum

IEEE Spectrum reporter Prachi Patel writes that researchers from MIT and Google Brain have developed a new open-source tool that could streamline solar cell improvement and discovery. The new system should “speed up development of more efficient solar cells by allowing quick assessment of a wide variety of possible materials and device structures,” writes Patel.

Good Morning America

Prof. Regina Barzilay speaks with Good Morning America about her work developing a new AI tool that could “revolutionize early breast cancer detection” by identifying patients at high risk of developing the disease. “If this technology is used in a uniform way,” says Barzilay, “we can identify early who are high-risk patients and intervene.”

The Washington Post

Washington Post reporter Steve Zeitchik spotlights Prof. Regina Barzilay and graduate student Adam Yala’s work developing a new AI system, called Mirai, that could transform how breast cancer is diagnosed, “an innovation that could seriously disrupt how we think about the disease.” Zeitchik writes: “Mirai could transform how mammograms are used, open up a whole new world of testing and prevention, allow patients to avoid aggressive treatments and even save the lives of countless people who get breast cancer.”

Stat

STAT reporter Katie Palmer writes that MIT researchers have developed a new machine learning model that can “flag treatments for sepsis patients that are likely to lead to a ‘medical dead-end,’ the point after which a patient will die no matter what care is provided.”

Forbes

Wise Systems, an AI-based delivery management platform originating from MIT’s Media Lab, has applied machine learning to real-time data to better plan delivery routes and schedules for delivery drivers, reports Susan Galer for Forbes. “The system can more accurately predict service times, taking into account the time it takes to complete a stop, and factoring in the preferences of the retailer, hotel, medical institution, or other type of client,” says Allison Parker of Wise Systems.

Mashable

MIT researchers developed a new control system for the mini robotic cheetah that allows the robot to jump and traverse uneven terrain, reports Jules Suzdaltsev for Mashable. “There’s a camera for processing real-time input from a video camera that then translates that information into body movements for the robot,” Suzdaltsev explains.

TechCrunch

MIT researchers have developed a new machine learning system that can help robots learn to perform certain social interactions, reports Brian Heater for TechCrunch. “Researchers conducted tests in a simulated environment, to develop what they deemed ‘realistic and predictable’ interactions between robots,” writes Heater. “In the simulation, one robot watches another perform a task, attempts to determine the goal and then either attempts to help or hamper it in that task.”

The Wall Street Journal

Writing for The Wall Street Journal, Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, Henry Kissinger, former secretary of state, and Eric Schmidt, former CEO of Google and former executive chairman of Google and Alphabet, explore how AI provides an opportunity for humans to redefine their roles in the world and the need to consider AI’s impact on culture, humanity and history. They underscore the importance of “shaping AI with human values, including the dignity and moral agency of humans. In the U.S., a commission, administered by the government but staffed by many thinkers in many domains, should be established. The advancement of AI is inevitable, but its ultimate destination is not.”

TechCrunch

TechCrunch writer Devin Coldewey reports on the ReSkin project, an AI project focused on developing a new electronic skin and fingertip meant to expand the sense of touch in robots. The ReSkin project is rooted in GelSight, a technology developed by MIT researchers that allows robots to gauge an object’s hardness.

Axios

Axios reporter Alison Snyder writes that a new study by MIT researchers demonstrates how AI algorithms could provide insight into the human brain’s processing abilities. “Predicting the next word someone might say — like AI algorithms now do when you search the internet or text a friend — may be a key part of the human brain's ability to process language,” writes Snyder.

Scientific American

Using an integrative modeling technique, MIT researchers compared dozens of machine learning algorithms to brain scans as part of an effort to better understand how the brain processes language. The researchers found that “neural networks and computational science might, in fact, be critical tools in providing insight into the great mystery of how the brain processes information of all kinds,” writes Anna Blaustein for Scientific American.

Gizmodo

Gizmodo reporter Andrew Liszewski writes that MIT researchers “used a high-resolution video camera with excellent low-light performance (the amount of sensor noise has to be as minimal as possible) to capture enough footage of a blank wall that special processing techniques were able to not only see the shadow’s movements, but extrapolate who was creating them.”