
Topic

Machine learning


Displaying 1 - 15 of 640 news clips related to this topic.

The Wall Street Journal

Prof. Armando Solar-Lezama speaks with The Wall Street Journal reporter Isabelle Bousquette about large language models (LLMs) in academia. Instead of building LLMs from scratch, Solar-Lezama suggests “students and researchers are focused on developing applications and even creating synthetic data that could be used to train LLMs,” writes Bousquette. 

TechCrunch

Using multimodal sensing and a soft robotic manipulator, MIT scientists have developed an automated system, called RoboGrocery, that can pack groceries of different sizes and weights, reports Brian Heater for TechCrunch. Heater explains that as the soft robotic gripper touches an item, “pressure sensors in the fingers determine that they are, in fact, delicate and therefore should not go at the bottom of the bag — something many of us no doubt learned the hard way. Next, it notes that the soup can is a more rigid structure and sticks it in the bottom of the bag.”

Vox

Prof. Yoon Kim speaks with Vox reporter Adam Clark Estes on how to address hallucinations and misinformation within large language models. “I don't think we'll ever be at a stage where we can guarantee that hallucinations won't exist,” says Kim. “But I think there's been a lot of advancements in reducing these hallucinations, and I think we'll get to a point where they'll become good enough to use.”

Forbes

Researchers from MIT have developed RoboGrocery, a soft robotic system that “can determine how to pack a grocery item based on its weight, size and shape without causing damage to the item,” reports Jennifer Kite-Powell for Forbes. “This is more than just automation—it's a paradigm shift that enhances precision, reduces waste and adapts seamlessly to the diverse needs of modern retail logistics,” says Prof. Daniela Rus, director of CSAIL. 

Fast Company

Writing for Fast Company, Moshe Tanach highlights how researchers from the MIT Lincoln Laboratory Supercomputing Center are developing new technologies to reduce AI energy costs, such as power-capping hardware and tools that can halt AI training. 

BBC

MIT scientists have developed a “four-fingered robotic hand which is capable of rotating balls and toys in any direction and orientation,” reports Maisie Lillywhite for BBC News. “The improvement in dexterity could have significant implications for automating tasks such as handling goods for supermarkets or sorting through waste for recycling,” Lillywhite writes.

Forbes

Writing for Forbes, lecturer Guadalupe Hayes-Mota '08, SM '16, MBA '16 explores the role of artificial intelligence and biotechnology in transforming the healthcare industry specifically for venture capitalists (VCs). “The fusion of AI and biotechnology presents a wealth of opportunities for venture capitalists,” writes Hayes-Mota. “By staying attuned to emerging trends and adopting strategies for impactful investments, VCs can drive innovation and create transformative changes in healthcare.” 

The Economist

MIT researchers have improved upon the diffusion models used in AI image generation, reports Alok Jha for The Economist. Working with electrically charged particles, the team created “Poisson flow generative models,” which “generate images of equal or better quality than state-of-the-art diffusion models, while being less error-prone and requiring between ten and 20 times fewer computational steps,” Jha explains. 

The Washington Post

Prof. Regina Barzilay spoke at The Futurist Summit: The Age of AI – an event hosted by The Washington Post – about the influence of AI in medicine. “When we're thinking today how many years it takes to bring new technologies [to market], sometimes it's decades if we’re thinking about drugs, and very, very slow,” Barzilay explains. “With AI technologies, you've seen how fast the technology that you're using today is changing.”

The Washington Post

Washington Post reporter Carolyn Johnson spotlights how Prof. Laura Schulz and her colleagues have been exploring why ChatGPT-4 performs well on conversation and cognitive tests, but flunks reasoning tests that are easy for young children. Schulz makes the case that to understand intelligence and create it, childhood learning processes should not be discounted. “That’s the kind of intelligence that really might give us a big picture,” Schulz explains. “The kind of intelligence that starts not as a blank slate, but with a lot of rich, structured knowledge — and goes on to not only understand everything we have ever understood, across the species, but everything we will ever understand.”

The Guardian

Researchers at MIT have designed an “AI-powered chatbot that simulates a user’s older self and dishes out observations and pearls of wisdom,” reports Ian Sample for The Guardian. “The goal is to promote long-term thinking and behavior change,” says graduate student Pat Pataranutaporn. “This could motivate people to make wiser choices in the present that optimize for their long-term wellbeing and life outcomes.”

The Boston Globe

Boston Globe reporter James McCown highlights the architectural design of the new MIT Schwarzman College of Computing, calling it “the most exciting work of academic architecture in Greater Boston in a generation.” Dean Daniel Huttenlocher adds: “The building was designed to be the physical embodiment of the college’s mission of fortifying studies in computer science and artificial intelligence. The building’s transparent and open design is already drawing a mix of people from throughout the campus and beyond.”

The Economist

Prof. Regina Barzilay joins The Economist’s “Babbage” podcast to discuss how artificial intelligence could enable health care providers to understand and treat diseases in new ways. Host Alok Jha notes that Barzilay is determined to “overcome those challenges that are standing in the way of getting AI models to become useful in health care.” Barzilay explains: “I think we really need to change our mindset and think how we can solve the many problems for which human experts were unable to find a way forward.”  

Scientific American

Current AI models require enormous resources and often provide unpredictable results. But graduate student Ziming Liu and colleagues have developed an approach that surpasses current neural networks in many respects, reports Manon Bischoff for Scientific American. “So-called Kolmogorov-Arnold networks (KANs) can master a wide range of tasks much more efficiently and solve scientific problems better than previous approaches,” Bischoff explains.

Financial Times

Financial Times reporter Robin Wigglesworth spotlights Prof. Daron Acemoglu’s new research that predicts relatively modest productivity growth from AI advances. On generative AI specifically, Acemoglu believes that gains will remain elusive unless industry reorients “in order to focus on reliable information that can increase the marginal productivity of different kinds of workers, rather than prioritizing the development of general human-like conversational tools,” he says.