
Topic

Human-computer interaction


Displaying 1 - 15 of 47 news clips related to this topic.

The Daily Beast

MIT researchers have developed a new technique “that could allow most large language models (LLMs) like ChatGPT to retain memory and boost performance,” reports Tony Ho Tran for the Daily Beast. “The process is called StreamingLLM and it allows for chatbots to perform optimally even after a conversation goes on for more than 4 million words,” explains Tran.

Financial Times

Writing for Financial Times, economist Ann Harrison spotlights research by Prof. Daron Acemoglu, Pascual Restrepo PhD '16 and Prof. David Autor, that explores the impact of automation on jobs in the United States. Acemoglu and Restrepo have “calculated that each additional robot in the US eliminates 3.3 workers” and that “most of the increase in inequality is due to workers who perform routine tasks being hit by automation,” writes Harrison.

Axios

Axios reporter Alison Snyder writes about how a new study by MIT researchers finds that preconceived notions about AI chatbots can impact people’s experiences with them. Prof. Pattie Maes explains that the technology's developers “always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned, but we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don't just depend on the AI and the quality of the AI. It depends on how the human responds to the AI.”

Scientific American

MIT researchers have found that user bias can drive interactions with AI chatbots, reports Nick Hilden for Scientific American. “When people think that the AI is caring, they become more positive toward it,” graduate student Pat Pataranutaporn explains. “This creates a positive reinforcement feedback loop where, at the end, the AI becomes much more positive, compared to the control condition. And when people believe that the AI was manipulative, they become more negative toward the AI—and it makes the AI become more negative toward the person as well.”

Scientific American

A new study by MIT researchers demonstrates how “machine-learning systems designed to spot someone breaking a policy rule—a dress code, for example—will be harsher or more lenient depending on minuscule-seeming differences in how humans annotated data that were used to train the system,” reports Ananya for Scientific American. “This is an important warning for a field where datasets are often used without close examination of labeling practices, and [it] underscores the need for caution in automated decision systems—particularly in contexts where compliance with societal rules is essential,” says Prof. Marzyeh Ghassemi.

Forbes

Prof. Jacob Andreas explored the concept of language guided program synthesis at CSAIL’s Imagination in Action event, reports research affiliate John Werner for Forbes. “Language is a tool,” said Andreas during his talk. “Not just for training models, but actually interpreting them and sometimes improving them directly, again, in domains, not just involving languages (or) inputs, but also these kinds of visual domains as well.”

Science

In conversation with Matthew Hutson at Science, Prof. John Horton discusses the possibility of using chatbots in research instead of humans. As he explains, a change like that would be similar to the transition from in-person to online surveys: “People were like, ‘How can you run experiments online? Who are these people?’ And now it’s like, ‘Oh, yeah, of course you do that.’”

Forbes

Researchers from MIT have found that using generative AI chatbots can improve the speed and quality of simple writing tasks, but that the chatbots often lack factual accuracy, reports Richard Nieva for Forbes. “When we first started playing with ChatGPT, it was clear that it was a new breakthrough unlike anything we've seen before,” says graduate student Shakked Noy. “And it was pretty clear that it was going to have some kind of labor market impact.”

The Conversation

Writing for The Conversation, postdoc Ziv Epstein SM ’19, PhD ’23, graduate student Robert Mahari and Jessica Fjeld of Harvard Law School explore how the use of generative AI will impact creative work. “The ways in which existing laws are interpreted or reformed – and whether generative AI is appropriately treated as the tool it is – will have real consequences for the future of creative expression,” the authors note.

Mashable

Researchers at MIT have developed a drone that can be controlled using hand gestures, reports Mashable. “I think it’s important to think carefully about how machine learning and robotics can help people to have a higher quality of life and be more productive,” says postdoc Joseph DelPreto. “So we want to combine what robots do well and what people do well so that they can be more effective teams.”

National Geographic

National Geographic reporter Maya Wei-Haas explores how the ancient art of origami is being applied to fields such as robotics, medicine and space exploration. Wei-Haas notes that Prof. Daniela Rus and her team developed a robot that can fold to fit inside a pill capsule, while Prof. Erik Demaine has designed complex, curving fold patterns. “You get these really impressive 3D forms with very simple creasing,” says Demaine.

Fortune

MIT researchers have found that “automation is the primary reason the income gap between more and less educated workers has continued to widen,” reports Ellen McGirt for Fortune. “This single one variable…explains 50 to 70% of the changes or variation between group inequality from 1980 to about 2016,” says Prof. Daron Acemoglu.

Politico

Prof. Daron Acemoglu speaks with Politico reporter Derek Robertson about his new study examining the impacts of automation on the workforce and economy. “This discussion gets framed around ‘Will robots and AI destroy jobs, and lead to a jobless future,’ and I think that's the wrong framing,” says Acemoglu. “Industrial robots may have reduced U.S. employment by half a percent, which is not trivial, but nothing on that scale [of a “jobless future”] has happened — but if you look at the inequality implications, it's been massive.”

TechCrunch

TechCrunch reporter Brian Heater spotlights a new study by Prof. Daron Acemoglu that examines the impact of automation on the workforce. “We’re starting with a very clear premise here: in 21st-century America, the wealth gap is big and only getting bigger,” writes Heater. “The paper, ‘Tasks, Automation, and the Rise in U.S. Wage Inequality,’ attempts to explore the correlation between the growing income gap and automation.”

Popular Science

Popular Science reporter Andrew Paul writes that a study co-authored by Institute Prof. Daron Acemoglu examines the impact of automation on the workforce over the past four decades and finds that “‘so-so automation’ exacerbates wage gaps between white and blue collar workers more than almost any other factor.”