
Topic

Computer science and technology


New York Times

Institute Prof. Daron Acemoglu participated in a “global dialogue on artificial intelligence governance” at the United Nations, reports Steve Lohr for The New York Times. “The AI quest is currently focused on automating a lot of things, sidelining and displacing workers,” says Acemoglu. 

Forbes

Researchers from MIT and Stanford tracked 11 large language models during the 2024 presidential campaign, and found that “AI models answered differently over time… [and] they changed in response to events, prompts, and even demographic cues,” reports Ron Schmelzer for Forbes.

Boston.com

According to the U.S. News & World Report rankings for 2025-2026, MIT has been named the No. 2 best university in the United States, reports Madison Lucchesi for Boston.com.

New York Times

MIT has been named the second best university in the United States, according to the U.S. News & World Report rankings for 2025-2026, reports Alan Blinder for The New York Times.

The Boston Globe

U.S. News & World Report has named MIT the number two best university in the United States for 2025-2026, reports Emily Sweeney for The Boston Globe. The rankings “evaluated more than 1,700 colleges and universities in the United States, using up to 17 measures of academic quality and graduate success,” adds Sweeney. 

Newsweek

MIT has been named the number two college in the United States in U.S. News & World Report’s annual ranking, reports Alia Shoaib for Newsweek. “U.S. News & World Report ranks more than 1,700 colleges using a weighted formula that considers factors such as graduation and retention rates, faculty resources, academic reputation, financial resources and student selectivity,” explains Shoaib. 

Financial Times

Financial Times reporter Melissa Heikkilä spotlights how MIT researchers have uncovered evidence that increased use of AI tools by medical professionals risks “leading to worse health outcomes for women and ethnic minorities.” One study found that numerous AI models “recommended a much lower level of care for female patients,” writes Heikkilä. “A separate study by the MIT team showed that OpenAI’s GPT-4 and other models also displayed answers that had less compassion towards Black and Asian people seeking support for mental health problems.” 

CNN

Prof. Dylan Hadfield-Menell speaks with CNN reporter Hadas Gold about the need for AI safeguards and increased education on large language models. “The way these systems are trained is that they are trained in order to give responses that people judge to be good,” explains Hadfield-Menell. 

CBS

Prof. David Autor speaks with David Pogue of CBS Sunday Morning about how AI is impacting the labor market, in particular opportunities for entry-level job seekers. “My view is there is great potential and great risk,” Autor explains. “I think that it’s not nearly as imminent in either direction as most people think.” On the impacts for young job seekers, Autor emphasizes that “this is really a concern. Judgment, expertise, it’s acquired slowly. It’s possible that we could strip out so much of the supporting work, that people never get the expertise. I don’t think it’s an insurmountable concern. But we shouldn’t take for granted that it will solve itself.”

The Wall Street Journal

Prof. Andrew Lo speaks with Wall Street Journal reporter Cheryl Winokur Munk about how AI tools could be used to help people with financial planning. Winokur Munk writes that Lo recommends providing “just enough information to get relevant answers. And leave out highly personal details like your name, address, salary, employer or specific assets…as such details put people at risk should the AI be compromised.” Lo also advises “trying several AI platforms,” writes Winokur Munk. And “the advice should be run by a professional, trusted family or friends. Be a bit skeptical and double-check with humans.”

Boston Globe

Prof. Marzyeh Ghassemi speaks with Boston Globe reporter Hiawatha Bray about her work uncovering issues with bias and trustworthiness in medical AI systems. “I love developing AI systems,” says Ghassemi. “I’m a professor at MIT for a reason. But it’s clear to me that naive deployments of these systems, that do not recognize the baggage that human data comes with, will lead to harm.”

Politico

In an interview with Aaron Mak of Politico, Prof. Daniela Rus, director of CSAIL and “one of the world’s foremost thinkers on the intersection of machines and artificial intelligence,” shares her views on the promise of embodied intelligence, which would allow machines to adapt in real time; the development of AI agents; and how the US can lead on the development of AI technologies. “The U.S. government has invested in energy grids, railroads and the internet. In the AI age, it must treat high-performance compute, data stewardship and model evaluation pipelines as public infrastructure as well,” Rus explains. 

Interesting Engineering

Researchers at MIT have “developed an antenna that can adjust its frequency range by physically changing its shape,” reports Mrigakshi Dixit for Interesting Engineering. “Instead of standard, rigid metal, this antenna is made from metamaterials — special engineered materials whose properties are based on their geometric structure,” explains Dixit. “It could be suitable for applications like transferring energy to wearable devices, tracking motion for augmented reality, and enabling wireless communication.”

CBS News

Prof. Daniela Rus, director of CSAIL, speaks with CBS News reporter Tony Dokoupil about her work developing AI-powered robots. “AI and robots are tools,” says Rus. “They are tools created by the people, for the people. And like any other tools they’re not inherently good or bad; they are what we choose to do with them. And I believe we can choose to do extraordinary things.” 

Fast Company

Prof. Philip Isola speaks with Fast Company reporter Victor Dey about the impact and use of agentic AI. “In some domains we truly have automatic verification that we can trust, like theorem proving in formal systems. In other domains, human judgment is still crucial,” says Isola. “If we use an AI as the critic for self-improvement, and if the AI is wrong, the system could go off the rails.”