Topic: Machine learning

Displaying 1–15 of 659 news clips related to this topic.

Forbes

In an article for Forbes, Robert Clark spotlights how MIT researchers developed a new model to predict irrational behaviors in humans and AI agents in suboptimal conditions. “The goal of the study was to better understand human behavior to improve collaboration with AI,” Clark writes. 

New Scientist

Researchers from MIT and Northwestern University have developed guidelines for how to spot deepfakes, noting that “there is no fool-proof method that always works,” reports Jeremy Hsu for New Scientist.

Business Insider

Researchers at MIT are working toward training AI models “as subject-matter experts that ethically tailor financial advice to an individual’s circumstances,” reports Tanza Loudenback for Business Insider. “We think we’re about two or three years away before we can demonstrate a piece of software that by SEC regulatory guidelines will satisfy fiduciary duty,” says Prof. Andrew Lo. 

TechCrunch

TechCrunch reporter Kyle Wiggers spotlights Codeium, a generative AI coding company founded by MIT alums Varun Mohan SM '17 and Douglas Chen '17. Codeium’s platform is run by generative AI models trained on public code, providing suggestions in the context of an app’s entire codebase. “Many of the AI-driven solutions provide generic code snippets that require significant manual work to integrate and secure within existing codebases,” Mohan explains. “That’s where our AI coding assistance comes in.”

Forbes

Researchers at MIT have developed “a publicly available database, culled from reports, journals, and other documents to shed light on the risks AI experts are disclosing through papers, reports, and other documents,” reports Joe McKendrick for Forbes. “These benchmarked risks will help develop a greater understanding of the risks versus rewards of this new force entering the business landscape,” writes McKendrick.

Wired

A new database of AI risks has been developed by MIT researchers in an effort to help guide organizations as they begin using AI technologies, reports Will Knight for Wired. “Many organizations are still pretty early in that process of adopting AI,” meaning they need guidance on the possible perils, says Research Scientist Neil Thompson, director of the FutureTech project.   

TechCrunch

TechCrunch reporter Kyle Wiggers writes that MIT researchers have developed a new tool, called SigLLM, that uses large language models to flag problems in complex systems. In the future, SigLLM could be used to “help technicians flag potential problems in equipment like heavy machinery before they occur.” 

TechCrunch

MIT researchers have developed an AI risk repository that includes over 700 AI risks, reports Kyle Wiggers for TechCrunch. “This is an attempt to rigorously curate and analyze AI risks into a publicly accessible, comprehensive, extensible and categorized risk database that anyone can copy and use, and that will be kept up to date over time,” explains Peter Slattery, a research affiliate at the MIT FutureTech project.

BBC News

Prof. Regina Barzilay joins BBC host Caroline Steel and other AI experts to discuss her inspiration for applying AI technologies to help improve medicine and fight cancer. “I think that in cancer and in many other diseases, the big question is always, how do you deal with uncertainty? It’s all the matter of predictions,” says Barzilay. “Unfortunately, today, we rely on humans who don’t have this capacity to make predictions. As a result, many times people get wrong treatments or they are diagnosed much later.”

Fast Company

In an excerpt from her new book, “The Mind’s Mirror: Risk and Reward in the Age of AI,” Prof. Daniela Rus, director of CSAIL, addresses the fear surrounding new AI technologies, while also exploring AI’s vast potential. “New technologies undoubtedly disrupt existing jobs, but they also create entirely new industries, and the new roles needed to support them,” writes Rus.

NPR

Prof. Daron Acemoglu speaks with NPR Planet Money hosts Greg Rosalsky and Darian Woods about the anticipated economic impacts of generative AI. Acemoglu notes he believes AI is overrated because humans are underrated. “A lot of people in the industry don’t recognize how versatile, talented, multifaceted human skills and capabilities are,” Acemoglu says. “And once you do that, you tend to overrate machines ahead of humans and underrate the humans.”

Forbes

MIT researchers have found that “when nudged to review LLM-generated outputs, humans are more likely to discover and fix errors,” reports Carter Busse for Forbes. The findings suggest that, “when given the chance to evaluate results from AI systems, users can greatly improve the quality of the outputs,” explains Busse. “The more information provided about the origins and accuracy of the results, the better the users are at detecting problems.” 

Tech Briefs

Research Scientist Mathieu Huot speaks with Tech Briefs reporter Andrew Corselli about his work with GenSQL, a generative AI system for databases that “could help users make predictions, detect anomalies, guess missing values, fix errors, or generate synthetic data with just a few keystrokes.” 

TechCrunch

Intelmatix, an AI startup founded by Almaha Almalki MS '18, Anas Alfaris MS '09, PhD '09, and Ahmad Alabdulkareem PhD '18, aims to provide businesses in the Middle East and North Africa with access to AI for decision-making, reports Annie Njanja for TechCrunch. “The idea of democratizing access to AI has always been something that we’ve been very passionate about,” says Alfaris.

Scientific American

Prof. Sherry Turkle speaks with Scientific American reporter Webb Wright about the benefits of being polite when interacting with AI technologies, underscoring the risks of becoming habituated to using crass, disrespectful, and dictatorial language. “We have to protect ourselves,” says Turkle. “Because we’re the only ones that have to form relationships with real people.”