Knowable Magazine reporter Katherine Ellison spotlights Future You, a new program developed by researchers at MIT that “offers young people a chance to chat with an online, AI-generated simulation of themselves at age 60.”
Prof. Daron Acemoglu highlights the importance of adopting alternative technologies in the face of AI advancements, reports Jared Newman for Fast Company. “We need investment for alternative approaches to AI, and alternative technologies, those that I would say are more centered on making workers more productive, and providing better information to workers,” says Acemoglu.
Forbes reporter Joe McKendrick spotlights a study by researchers from the MIT Center for Collective Intelligence evaluating “the performance of humans alone, AI alone, and combinations of both.” The researchers found that “human–AI systems do not necessarily achieve better results than the best of humans or AI alone,” explain graduate student Michelle Vaccaro and her colleagues. “Challenges such as communication barriers, trust issues, ethical concerns and the need for effective coordination between humans and AI systems can hinder the collaborative process.”
In an interview with CNBC, Prof. Max Tegmark highlights the importance of increased AI regulation, specifically as a method to mitigate potential harm from large language models. “All other technologies in the United States, all other industries, have some kind of safety standards,” says Tegmark. “The only industry that is completely unregulated right now, which has no safety standards, is AI.”
Researchers from MIT and elsewhere have found that “AI doesn’t even understand itself,” reports Peter Coy for The New York Times. The researchers “asked AI models to explain how they were thinking about problems as they worked through them,” writes Coy. “The models were pretty bad at introspection.”
Liquid AI, an MIT startup, is developing technology that “holds the same promise of writing, analyzing, and creating content as its rivals while using far less computing power,” reports Aaron Pressman for The Boston Globe.
Prof. David Autor has been named a Senior Fellow in the Schmidt Sciences AI2050 Fellows program, and Profs. Sara Beery, Gabriele Farina, Marzyeh Ghassemi, and Yoon Kim have been named Early Career AI2050 Fellows, reports Michael T. Nietzel for Forbes. The AI2050 fellowships provide funding and resources, while challenging “researchers to imagine the year 2050, where AI has been extremely beneficial and to conduct research that helps society realize its most beneficial impacts,” explains Nietzel.
Using a new technique developed to examine the risks of multimodal large language models used in robots, MIT researchers were able to have a “simulated robot arm do unsafe things like knocking items off a table or throwing them by describing actions in ways that the LLM did not recognize as harmful and reject,” writes Will Knight for Wired. “With LLMs a few wrong words don’t matter as much,” explains Prof. Pulkit Agrawal. “In robotics a few wrong actions can compound and result in task failure more easily.”
Researchers from MIT and elsewhere have compared 12 large language models (LLMs) against 925 human forecasters in a three-month tournament to predict real-world outcomes, including geopolitical events, reports Tomas Gorny for Forbes. “Our results suggest that LLMs can achieve forecasting accuracy rivaling that of human crowd forecasting tournaments,” the researchers explain.
Forbes reporter John M. Bremen spotlights a new study by MIT researchers that “shows the most skilled scientists and innovators benefited the most from AI – doubling their productivity – while lower-skilled staff did not experience similar gains.” The study “showed that specialized AI tools foster radical innovation at the technical level within a domain-specific scope, but also risk narrowing human roles and diversity of thought,” writes Bremen.
Writing for Forbes, Senior Lecturer Guadalupe Hayes-Mota SB '08, MS '16, MBA '16 shares insight into how entrepreneurs can use AI to build successful startups. AI “can be a strategic advantage when implemented wisely and used as a tool to support, rather than replace, the human touch,” writes Hayes-Mota.
Prof. Daniela Rus, director of CSAIL, and Prof. Russ Tedrake speak with the Financial Times about how advances in AI have made it possible for robots to learn new skills and perform complex tasks. “All these cool things that we only dreamed of, we can now begin to realize,” says Rus. “Now we have to make sure that what we do with all these superpowers is good.”
Research from the Data Provenance Initiative, led by MIT researchers, has “found that many web sources used for training AI models have restricted their data, leading to a rapid decline in accessible information,” reports Gary Drenik for Forbes.
Researchers at MIT have developed a new AI model capable of assessing a patient’s risk of pancreatic cancer, reports Erez Meltzer for Forbes. “The model could potentially expand the group of patients who can benefit from early pancreatic cancer screening from 10% to 35%,” explains Meltzer. “These kinds of predictive capabilities open new avenues for preventive care.”
Arago, an AI startup co-founded by alumnus Nicolas Muller, has been named to the Future 40 list by Station F, which selects “the 40 most promising startups,” reports Romain Dillet for TechCrunch. Arago is “working on new AI-focused chips that use optical technology at the chipset level to speed up operations,” explains Dillet.