ABC News
Prof. David Autor speaks with ABC News reporter Max Zahn about how AI will affect the job market. "We're not good at predicting what the new work will be; we're good at predicting how current work will change," says Autor.
A study by MIT researchers has found “our behavior is often more predictable than we think,” reports Diane Hamilton for Forbes. “This research focused on how people pay attention in complex situations,” explains Hamilton. “The AI model learned what people remembered and what they ignored. It identified patterns in memory and focus.”
Prof. Danielle Li speaks with New York Times reporter Noam Scheiber about the various impacts of AI in the workplace on employees. “That state of the world is not good for experienced workers,” says Li. “You’re being paid for the rarity of your skill, and what happens is that A.I. allows the skill to live outside of people.”
Forbes reporter Eric Wood spotlights various studies by MIT researchers exploring the impact of ChatGPT use on behavior and the brain. “As stated, the impact of AI assistants is likely dependent on the users, but since AI assistants are becoming normative, it’s time for counseling centers to assess for maladaptive uses of AI, while also promoting the possible benefits,” explains Wood.
Prof. Asu Ozdaglar, Deputy Dean of MIT Schwarzman College of Computing, speaks with Is Business Broken? podcast host Curt Nickisch to explore AI’s opportunities and risks — and whether it can be regulated without stifling progress. “AI is a very promising and transformative technology,” says Ozdaglar. “But regulation should be designed very carefully so that it does not block or impede the development of the technology.” Given AI’s potential harms or misuses, she added that it’s important to think about the correct regulatory framework. “For it to be successful, it should focus on where harms can come from.”
A new research paper by Prof. David Autor and Principal Research Scientist Neil Thompson explores the forthcoming impact of AI on jobs, reports Tim Harford for Financial Times. “[W]hile there are few certainties, Autor and Thompson’s framework does suggest a clarifying question: does AI look like it is going to do the most highly skilled part of your job or the low-skill rump that you’ve not been able to get rid of?” writes Harford. “The answer to that question may help to predict whether your job is about to get more fun or more annoying — and whether your salary is likely to rise, or fall as your expert work is devalued like the expert work of the Luddites.”
Researchers at MIT have “analyzed how six popular LLMs portray the state of press freedom — and, indirectly, trust in the media — in responses to user prompts,” reports Chase DiBenedetto for Mashable. “The results showed that LLMs consistently suggested that countries have less press freedom than official reports, like the non-governmental World Press Freedom Index (WPFI), published by Reporters Without Borders,” explains DiBenedetto.
In an Opinion piece for The New York Times, columnist David Brooks highlights a recent MIT study that explores the impact of ChatGPT use on brain function by asking subjects to write essays while using large language models, traditional search engines, or only their own brains. “The subjects who relied only on their own brains showed higher connectivity across a bunch of brain regions,” explains Brooks. “Search engine users experienced less brain connectivity and A.I. users least of all.”
A study by MIT researchers monitored and compared the brain activity of participants using large language models, traditional search engines, and only their brains to write an essay on a given topic, reports Hessie Jones for Forbes. The study “found that the brain-only group showed much more active brain waves compared to the search-only and LLM-only groups,” Jones explains. “In the latter two groups, participants relied on external sources for information. The search-only group still needed some topic understanding to look up information, and like using a calculator — you must understand its functions to get the right answer. In contrast, the LLM-only group simply had to remember the prompt used to generate the essay, with little to no actual cognitive processing involved.”
Researchers at MIT have found that ChatGPT users “showed minimal brain engagement and consistently fell short in neural, linguistic, and behavioral aspects,” reports Kyle Wiggers for TechCrunch. “To conduct the test, the lab split 54 participants from the Boston area into three groups, each consisting of individuals ages 18 to 39,” explains Wiggers. “The participants were asked to write multiple SAT essays using tools such as OpenAI’s ChatGPT, the Google search engine, or without any tools.”
Forbes contributor Tanya Fileva spotlights how MIT CSAIL researchers have developed a system called Air-Guardian, an “AI-enabled copilot that monitors a pilot’s gaze and intervenes when their attention is lacking.” Fileva notes that “in tests, the system ‘reduced the risk level of flights and increased the success rate of navigating to target points’—demonstrating how AI copilots can enhance safety by assisting with real-time decision-making.”
The New Yorker reporter Kyle Chayka spotlights a study by MIT researchers examining the impact of AI chatbot use on the brain. “The results from the analysis showed a dramatic discrepancy: subjects who used ChatGPT demonstrated less brain activity than either of the other groups,” explains Chayka.
Researchers from MIT have found that “extended use of LLMs for research and writing could have long-term behavioral effects, such as lower brain engagement and laziness,” reports Theo Burman for Newsweek. “The study found that the AI-assisted writers were engaging their deep memory processes far less than the control groups, and that their information recall skills were worse after producing work with ChatGPT,” explains Burman.
Researchers at MIT have found that “AI agents can make the workplace more productive when fine-tuned for different personality types, but human co-workers pay a price in lost socialization,” reports Kaustuv Basu for Bloomberg. The researchers “found that humans using AI raised their productivity by 60%—partly because those workers sent 23% fewer social messages,” writes Basu.
Sloan Lecturer Michael Schrage speaks with Fortune reporter Sheryl Estrada about prompt-a-thons, “structured, sprint-based sessions for developing prompts for large language models (LLMs).” The “prompt-a-thon process reframes prompting as a high-impact diagnostic and design discipline—engineered for fast, actionable insight,” explains Estrada. “It’s not just about using AI more effectively—it’s about thinking and collaborating more intelligently with it,” adds Schrage.