Topic

Technology and society

Displaying 376 - 390 of 1304 news clips related to this topic.

Axios

Researchers from MIT and elsewhere have developed a transparency index used to assess 10 key AI foundation models, reports Ryan Heath for Axios. Heath writes that the researchers emphasized that “unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.”

The World

Research scientist Nataliya Kosmyna speaks with The World host Chris Harland-Dunaway about the science behind using non-invasive brain scans and artificial intelligence to understand people’s different thoughts and mental images.  

Scientific American

Researchers at MIT have designed “a wearable ultrasound scanner that could be used at home to detect breast tumors earlier,” reports Simon Makin for Scientific American. “The researchers incorporated the scanner into a flexible, honeycombed 3-D-printed patch that can be fixed into a bra,” explains Makin. “The wearer moves the scanner among six different positions on the breast, where it snaps into place with magnets, allowing reproducible scanning of the whole breast.”

The Atlantic

Writing for The Atlantic, Prof. Deb Roy makes the case that “new kinds of social networks can be designed for constructive communication—for listening, dialogue, deliberation, and mediation—and they can actually work.” Roy adds: “We can and should create social networks designed for public discourse that prioritize inclusion, where underheard voices and perspectives can flourish, and where people take and offer disagreement in good faith.”

Forbes

Tom Davenport, a visiting scholar at the MIT Initiative on the Digital Economy, writes for Forbes about how organizations are approaching generative AI. “If organizations are to succeed with generative AI, they need to increase the focus on data preparation for it, which is a primary prerequisite for success,” writes Davenport.

CBS Boston

Graduate student Kaylee Cunningham speaks with CBS Boston about her work using social media to help educate and inform the public about nuclear energy. Cunningham, who is known as Ms. Nuclear Energy on TikTok, recalls how as a child she was involved in musical theater, a talent she has now combined with her research interests as an engineer. She adds that she also hopes her platform inspires more women to pursue STEM careers. “You don't have to look like the stereotypical engineer,” Cunningham emphasizes.

Fortune

Graduate student Sarah Gurev and her colleagues have developed a new AI system named EVEscape that can “predict alterations likely to occur to viruses as they evolve,” reports Erin Prater for Fortune. Gurev says that with the amount of data the system has amassed, it “can make surprisingly accurate predictions.”

TechCrunch

Arvid Lunnemark '22, Michael Truell '22, Sualeh Asif '22, and Aman Sanger '22 co-founded Anysphere, a startup building an “AI-native” software development environment called Cursor, reports Kyle Wiggers for TechCrunch. “In the next several years, our mission is to make programming an order of magnitude faster, more fun and creative,” says Truell. “Our platform enables all developers to build software faster.”

Forbes

Curtis Northcutt SM '17, PhD '21, Jonas Mueller PhD '18, and Anish Athalye SB '17, SM '17, PhD '23 have co-founded Cleanlab, a startup aimed at fixing data problems in AI models, reports Alex Konrad for Forbes. “The reality is that every single solution that’s data-driven — and the world has never been more data-driven — is going to be affected by the quality of the data,” says Northcutt.

Axios

Axios reporter Alison Snyder writes about how a new study by MIT researchers finds that preconceived notions about AI chatbots can impact people’s experiences with them. Prof. Pattie Maes explains that the technology's developers “always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned, but we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don't just depend on the AI and the quality of the AI. It depends on how the human responds to the AI.”

Scientific American

MIT researchers have found that user bias can drive interactions with AI chatbots, reports Nick Hilden for Scientific American. “When people think that the AI is caring, they become more positive toward it,” graduate student Pat Pataranutaporn explains. “This creates a positive reinforcement feedback loop where, at the end, the AI becomes much more positive, compared to the control condition. And when people believe that the AI was manipulative, they become more negative toward the AI—and it makes the AI become more negative toward the person as well.”

The Boston Globe

Prof. Thomas Kochan and Prof. Thomas Malone speak with Boston Globe reporter Hiawatha Bray about the recent deal between the Writers Guild of America and the Alliance of Motion Picture and Television Producers, which will “protect movie screenwriters from losing their jobs to computers that could use artificial intelligence to generate screenplays.” Kochan notes that when it comes to AI, “where workers don’t have a voice through a union, most companies are not engaging their workers on these issues, and the workers have no rights, no redress.”

Fortune

Researchers from MIT and elsewhere have identified some of the benefits and disadvantages of generative AI when used for specific tasks, report Paige McGlauflin and Joseph Abrams for Fortune. “The findings show a 40% performance boost for consultants using the chatbot for the creative product project, compared to the control group that did not use ChatGPT, but a 23% decline in performance when used for business problem-solving,” explain McGlauflin and Abrams.

Forbes

Researchers from Atlantic Quantum, an MIT startup building quantum computers, have published new research showing “the architecture of the circuits underlying its quantum computer produces far fewer errors than the industry standard,” reports Rashi Shrivastava for Forbes.

The Wall Street Journal

A study by researchers from MIT and Harvard examined the potential impact of the use of AI technologies on the field of radiology, reports Laura Landro for The Wall Street Journal. “Both AI models and radiologists have their own unique strengths and areas for improvement,” says Prof. Nikhil Agarwal.