
Topic

Artificial intelligence

Displaying 121–135 of 1173 news clips related to this topic.

Forbes

Senior lecturer Paul McDonagh-Smith speaks with Forbes reporter Joe McKendrick about the history behind the AI hype cycle. “While AI technologies and techniques are at the forefront of today’s technological innovation, it remains a field defined — as it has from the 1950s — by both significant achievements and considerable hype,” says McDonagh-Smith.

Business Insider

Researchers at MIT are working toward training AI models “as subject-matter experts that ethically tailor financial advice to an individual’s circumstances,” reports Tanza Loudenback for Business Insider. “We think we’re about two or three years away before we can demonstrate a piece of software that by SEC regulatory guidelines will satisfy fiduciary duty,” says Prof. Andrew Lo. 

TechCrunch

TechCrunch reporter Kyle Wiggers spotlights Codeium, a generative AI coding company founded by MIT alums Varun Mohan SM '17 and Douglas Chen '17. Codeium’s platform is powered by generative AI models trained on public code and provides suggestions in the context of an app’s entire codebase. “Many of the AI-driven solutions provide generic code snippets that require significant manual work to integrate and secure within existing codebases,” Mohan explains. “That’s where our AI coding assistance comes in.”

Fortune

MIT alumni Mike Ng and Nikhil Buduma founded Ambience, which has developed an “AI-powered platform geared towards improving documentation processes in medicine,” reports Fortune’s Allie Garfinkle. “In a world filled with AI solutions in search of a problem, Ambience is focusing on a pain point that just about any doctor will attest to (after all, who likes filling out paperwork?),” writes Garfinkle.

The Boston Globe

Writing for The Boston Globe, President Emeritus L. Rafael Reif makes the case that “without strong research universities and the scientific and technological advances they discover and invent, the United States could not possibly keep up with China.” He emphasizes that “punishing universities financially for their failings — real and imagined — would be counterproductive. If anything, the China challenge demands that universities do more than they are already doing — and that they have the resources to do so.”

Bloomberg

Prof. William Deringer speaks with David Westin on Bloomberg’s Wall Street Week about the power of early spreadsheet programs in the 1980s financial services world. When asked to place today’s AI in the context of workplace automation fears, he says, “one thing we know from the history of technology, and certainly the history of calculation tools that I like to study, is that the automation of some of these calculations…doesn’t necessarily lead to less work.”

Forbes

Researchers at MIT have developed “a publicly available database, culled from reports, journals, and other documents to shed light on the risks AI experts are disclosing,” reports Joe McKendrick for Forbes. “These benchmarked risks will help develop a greater understanding of the risks versus rewards of this new force entering the business landscape,” writes McKendrick.

Wired

MIT researchers have developed a new database of AI risks to help guide organizations as they begin using AI technologies, reports Will Knight for Wired. “Many organizations are still pretty early in that process of adopting AI,” meaning they need guidance on the possible perils, says Research Scientist Neil Thompson, director of the MIT FutureTech project.

TechCrunch

TechCrunch reporter Kyle Wiggers writes that MIT researchers have developed a new tool, called SigLLM, that uses large language models to flag problems in complex systems. In the future, SigLLM could be used to “help technicians flag potential problems in equipment like heavy machinery before they occur.” 

TechCrunch

MIT researchers have developed an AI risk repository that includes over 700 AI risks, reports Kyle Wiggers for TechCrunch. “This is an attempt to rigorously curate and analyze AI risks into a publicly accessible, comprehensive, extensible and categorized risk database that anyone can copy and use, and that will be kept up to date over time,” explains Peter Slattery, a research affiliate at the MIT FutureTech project.

BBC News

Prof. Regina Barzilay joins BBC host Caroline Steel and other AI experts to discuss her inspiration for applying AI technologies to help improve medicine and fight cancer. “I think that in cancer and in many other diseases, the big question is always, how do you deal with uncertainty? It’s all the matter of predictions,” says Barzilay. “Unfortunately, today, we rely on humans who don’t have this capacity to make predictions. As a result, many times people get wrong treatments or they are diagnosed much later.”

Fast Company

In an excerpt from her new book, “The Mind’s Mirror: Risk and Reward in the Age of AI,” Prof. Daniela Rus, director of CSAIL, addresses the fear surrounding new AI technologies, while also exploring AI’s vast potential. “New technologies undoubtedly disrupt existing jobs, but they also create entirely new industries, and the new roles needed to support them,” writes Rus.

NPR

Prof. Daron Acemoglu speaks with NPR Planet Money hosts Greg Rosalsky and Darian Woods about the anticipated economic impacts of generative AI. Acemoglu notes he believes AI is overrated because humans are underrated. “A lot of people in the industry don’t recognize how versatile, talented, multifaceted human skills and capabilities are,” Acemoglu says. “And once you do that, you tend to overrate machines ahead of humans and underrate the humans.”

Fortune

Writing for Fortune, Prof. Daron Acemoglu explores the estimated scale of AI’s impact on the labor market and productivity. “The problem with the AI bubble isn’t that it is bursting and bringing the market down,” writes Acemoglu. “It’s that the hype will likely go on for a while and do much more damage in the process than experts are anticipating.”

Forbes

MIT researchers have found that “when nudged to review LLM-generated outputs, humans are more likely to discover and fix errors,” reports Carter Busse for Forbes. The findings suggest that “when given the chance to evaluate results from AI systems, users can greatly improve the quality of the outputs,” explains Busse. “The more information provided about the origins and accuracy of the results, the better the users are at detecting problems.”