
Topic

Technology and society


Displaying 496 - 510 of 1341 news clips related to this topic.

Axios

As part of an effort to address racism and discrimination, MIT researchers have developed a new VR role-playing project, dubbed “On the Plane,” writes Axios reporter Russell Contreras. "Our hope is that (players) move away from the experience with an understanding of how xenophobia and other forms of discrimination may play out in everyday life situations," explains CSAIL Research Scientist Caglar Yildirim.

The Wall Street Journal

Prof. Stuart Madnick speaks with Wall Street Journal reporter Seán Captain about how AI could make scamming easier and more dangerous. AI “raises the level of skepticism that we must have substantially,” notes Madnick. “Procedures will have to be put in place to validate the authenticity of who you are dealing with.”

Education Week

Prof. Cynthia Breazeal, the MIT dean of digital learning, speaks with Education Week reporter Alyson Klein about the importance of ensuring K-12 students are AI literate. “The AI genie is out of the bottle,” says Breazeal. “It’s not just in the realm of computer science and coding. It is affecting all aspects of society. It’s the machine under everything. It’s critical for all students to have AI literacy if they are going to be using computers, or really, almost any type of technology.”

New Scientist

Prof. Benedetto Marelli and his colleagues have created “packaging that can react to changes in the food it contains to better indicate when it has gone bad,” reports Karmela Padavic-Callaghan for New Scientist. The biodegradable plastic-like wrap, which is made from silk, changes color when it is exposed to rotting foods and degrades quickly in soil. 

Los Angeles Times

Writing for The Los Angeles Times, Institute Prof. Daron Acemoglu and Prof. Simon Johnson make the case that the development of artificial intelligence should be shifted “toward a focus on ‘machine usefulness,’ the idea that computers should primarily enhance human capabilities. But this needs to be combined with an explicit recognition that any resulting productivity gains must be shared with workers, in terms of higher incomes and better working conditions.”

NPR

Prof. Marzyeh Ghassemi speaks with NPR host Emily Kwong and correspondent Geoff Brumfiel about how artificial intelligence could impact medicine. “When you take state-of-the-art machine-learning methods and systems and then evaluate them on different patient groups, they do not perform equally,” says Ghassemi.

TechCrunch

Augmental, an MIT spinoff, has developed MouthPad, an assistive device that provides wearers the ability to control Bluetooth-connected devices using their tongue, reports Haje Jan Kamps for TechCrunch. “The wide variety of control options embedded into the MouthPad means that it can be used in conjunction with many different devices,” writes Kamps.

NBC Boston

Researchers from MIT and Stanford have found that “artificial intelligence tools like chatbots helped boost worker productivity at one tech company by 14%,” reports Jennifer Liu for NBC Boston. “The study is thought to be the first major real-world application of generative AI in the workplace,” writes Liu. “Researchers measured productivity of more than 5,000 customer support agents, based primarily in the Philippines, at a Fortune 500 enterprise software firm over the course of a year.”

WHDH 7

Augmental, a startup co-founded by MIT graduates, has developed a Bluetooth mouthpiece that makes it easier for individuals with mobility issues to use computers, reports WHDH. “People with severe hand impairment are isolated in this world and it’s just not fair,” says co-founder Tomás Vega SM ’19. “So, our interface seeks to help those people and enable them to access and to share with the world.”

Gizmodo

Researchers from MIT and elsewhere have found that experienced workers might be more impacted by ChatGPT, reports Mack DeGeurin for Gizmodo. “Customer support agents using a generative AI conversation assistant in a new study saw a 14% uptick in productivity compared to others who didn’t use the tool,” writes DeGeurin.

The New York Times

New York Times reporter John Markoff spotlights Ivan Sutherland PhD ’63 and his contributions to the development of modern computing. Markoff notes that while working on his PhD thesis at MIT, Sutherland “created Sketchpad on a Lincoln TX-2 computer and started a revolution in computer graphics.”

Wired

Researchers from MIT and elsewhere published a paper exploring the abilities of language models and how they differ from those of humans, reports Will Knight for Wired. “GPT-4 is remarkable but quite different from human intelligence in a number of ways,” says Prof. Josh Tenenbaum. “It lacks the kind of motivation that is crucial to the human mind.”

NPR

Prof. Danielle Wood speaks with NPR Shortwave co-host Aaron Scott about the future of space sustainability. “I hope that humans pause and note that the actions we're taking now and in the next 10 years really are going to be decisive in the relationship between humans and our planet, and humans and other locations, like the Moon,” says Wood.

Financial Times

Financial Times correspondent Rana Foroohar spotlights Prof. Daron Acemoglu and Prof. Simon Johnson’s new book, “Power and Progress,” which “explores several moments over the last millennium when technology led to the opposite of shared prosperity.” In the book, Acemoglu and Johnson “take a different approach to the productivity gains of technology and how they get distributed compared with most of their peers.”

Scientific American

Prof. Marzyeh Ghassemi speaks with Scientific American reporter Sara Reardon about the impact of AI chatbots on medical care. “Ghassemi is particularly concerned that chatbots will perpetuate the racism, sexism and other types of prejudice that persist in medicine—and across the Internet,” writes Reardon. “Scrubbing racism from the Internet is impossible, but Ghassemi says developers may be able to do preemptive audits to see where a chatbot gives biased answers and tell it to stop or to identify common biases that pop up in its conversations with users.”