Researchers from MIT and Northwestern University have developed some guidelines for how to spot deepfakes, noting “there is no fool-proof method that always works,” reports Jeremy Hsu for New Scientist.
Senior lecturer Paul McDonagh-Smith speaks with Forbes reporter Joe McKendrick about the history behind the AI hype cycle. “While AI technologies and techniques are at the forefront of today’s technological innovation, it remains a field defined — as it has from the 1950s — by both significant achievements and considerable hype,” says McDonagh-Smith.
A team of MIT researchers discovered a hard limit for the “spooky” phenomenon known as quantum entanglement, reports Ben Brubaker for Quanta Magazine. The researchers found that quantum entanglement does not weaken as temperatures increase, but rather it vanishes above specific temperatures, a behavior dubbed the “sudden death” of entanglement. “It’s a very, very strong statement,” says Prof. Soonwon Choi of the findings. “I was very impressed.”
MIT alumni Mike Ng and Nikhil Buduma founded Ambience, which has developed an “AI-powered platform geared towards improving documentation processes in medicine,” reports Fortune’s Allie Garfinkle. “In a world filled with AI solutions in search of a problem, Ambience is focusing on a pain point that just about any doctor will attest to (after all, who likes filling out paperwork?),” writes Garfinkle.
After meeting at MIT, alumni Honghao Deng and Jiani Zeng founded Butr, which makes anonymous people-detecting sensors to measure movement inside buildings, reports Zoya Hasan for Forbes. The sensors could help address staffing challenges in senior living communities, and alert staff of falls or other medical issues.
Prof. William Deringer speaks with David Westin on Bloomberg’s Wall Street Week about the power of early spreadsheet programs in the 1980s financial services world. When asked to compare today’s AI in the context of workplace automation fears, he says “one thing we know from the history of technology – and certainly the history of calculation tools that I like to study – is that the automation of some of these calculations…doesn’t necessarily lead to less work.”
Researchers at MIT have developed “a publicly available database, culled from reports, journals, and other documents to shed light on the risks AI experts are disclosing through papers, reports, and other documents,” reports Jon McKendrick for Forbes. “These benchmarked risks will help develop a greater understanding of the risks versus rewards of this new force entering the business landscape,” writes McKendrick.
Edwin Olson '00, MEng '01, PhD '08 founded May Mobility, an autonomous vehicle company that uses human autonomous vehicle operators on its rides, reports Gus Alexiou for Forbes. “May Mobility is focused above all else on gradually building up the confidence of its riders and community stakeholders in the technology over the long-term,” explains Alexiou. “This may be especially true for certain more vulnerable sections of society such as the disability community where the need for more personalized and affordable forms of transportation is arguably greatest but so too is the requirement for robust safety and accessibility protocols.”
A new database of AI risks has been developed by MIT researchers in an effort to help guide organizations as they begin using AI technologies, reports Will Knight for Wired. “Many organizations are still pretty early in that process of adopting AI,” meaning they need guidance on the possible perils, says Research Scientist Neil Thompson, director of the FutureTech project.
TechCrunch reporter Kyle Wiggers writes that MIT researchers have developed a new tool, called SigLLM, that uses large language models to flag problems in complex systems. In the future, SigLLM could be used to “help technicians flag potential problems in equipment like heavy machinery before they occur.”
MIT researchers have developed an AI risk repository that includes over 70 AI risks, reports Kyle Wiggers for TechCrunch. “This is an attempt to rigorously curate and analyze AI risks into a publicly accessible, comprehensive, extensible and categorized risk database that anyone can copy and use, and that will be kept up to date over time,” explains Peter Slattery, a research affiliate at the MIT FutureTech project.
In an excerpt from her new book, “The Mind’s Mirror: Risk and Reward in the Age of AI,” Prof. Daniela Rus, director of CSAIL, addresses the fear surrounding new AI technologies, while also exploring AI’s vast potential. “New technologies undoubtedly disrupt existing jobs, but they also create entirely new industries, and the new roles needed to support them,” writes Rus.
Prof. Daron Acemoglu speaks with NPR Planet Money hosts Greg Rosalsky and Darian Woods about the anticipated economic impacts of generative AI. Acemoglu notes he believes AI is overrated because humans are underrated. "A lot of people in the industry don't recognize how versatile, talented, multifaceted human skills and capabilities are," Acemoglu says. "And once you do that, you tend to overrate machines ahead of humans and underrate the humans."
MIT researchers have found that “when nudged to review LLM-generated outputs, humans are more likely to discover and fix errors,” reports Carter Busse for Forbes. The findings suggest that, “when given the chance to evaluate results from AI systems, users can greatly improve the quality of the outputs,” explains Busse. “The more information provided about the origins and accuracy of the results, the better the users are at detecting problems.”
Prof. Simon Johnson and Prof. David Autor speak with New York Times reporter Emma Goldberg about the anticipated impact of AI on the job market. “We should be concerned about eliminating them,” says Johnson of the risks posed by automating jobs. “This is the hollowing out of the middle class.”