
Topic: Human-computer interaction



Popular Science

Tomás Vega SM '19 is CEO and co-founder of Augmental, a startup helping people with movement impairments interact with their computing devices, reports Popular Science’s Andrew Paul. Seeking to overcome the limitations of most brain-computer interfaces, the company designed its first product, the MouthPad, to be controlled by the tongue muscles. “Our hope is to create an interface that is multimodal, so you can choose what works for you,” said Vega. “We want to be accommodating to every condition.”

Popular Mechanics

Researchers at CSAIL have created three “libraries of abstraction,” collections of natural-language abstractions that highlight the importance of everyday words in providing context and better reasoning for large language models, reports Darren Orf for Popular Mechanics. “The researchers focused on household tasks and command-based video games, and developed a language model that proposes abstractions from a dataset,” explains Orf. “When implemented with existing LLM platforms, such as GPT-4, AI actions like ‘placing chilled wine in a cabinet’ or ‘craft a bed’ (in the Minecraft sense) saw a big increase in task accuracy at 59 to 89 percent, respectively.”

Quanta Magazine

MIT researchers have developed a new procedure that uses game theory to improve the accuracy and consistency of large language models (LLMs), reports Steve Nadis for Quanta Magazine. “The new work, which uses games to improve AI, stands in contrast to past approaches, which measured an AI program’s success via its mastery of games,” explains Nadis. 

TechCrunch

Researchers at MIT have found that large language models often retrieve stored knowledge using simple linear functions, reports Kyle Wiggers for TechCrunch. “Even though these models are really complicated, nonlinear functions that are trained on lots of data and are very hard to understand, there are sometimes really simple mechanisms working inside them,” writes Wiggers.

Politico

MIT researchers have found that “when an AI tool for radiologists produced a wrong answer, doctors were more likely to come to the wrong conclusion in their diagnoses,” report Daniel Payne, Carmen Paun, Ruth Reader and Erin Schumaker for Politico. “The study explored the findings of 140 radiologists using AI to make diagnoses based on chest X-rays,” they write. “How AI affected care wasn’t dependent on the doctors’ levels of experience, specialty or performance. And lower-performing radiologists didn’t benefit more from AI assistance than their peers.”

The Daily Beast

MIT researchers have developed a new technique “that could allow most large language models (LLMs) like ChatGPT to retain memory and boost performance,” reports Tony Ho Tran for the Daily Beast. “The process is called StreamingLLM and it allows for chatbots to perform optimally even after a conversation goes on for more than 4 million words,” explains Tran.

Financial Times

Writing for the Financial Times, economist Ann Harrison spotlights research by Prof. Daron Acemoglu, Pascual Restrepo PhD '16 and Prof. David Autor exploring the impact of automation on jobs in the United States. Acemoglu and Restrepo have “calculated that each additional robot in the US eliminates 3.3 workers” and that “most of the increase in inequality is due to workers who perform routine tasks being hit by automation,” writes Harrison.

Tech Times

MIT CSAIL researchers have developed a new air safety system, called Air-Guardian, designed to serve as a “proactive co-pilot, enhancing safety during critical moments of flight,” reports Jace Dela Cruz for Tech Times.

Axios

Axios reporter Alison Snyder writes about how a new study by MIT researchers finds that preconceived notions about AI chatbots can shape people’s experiences with them. As Prof. Pattie Maes explains, the technology's developers “always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned, but we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don't just depend on the AI and the quality of the AI. It depends on how the human responds to the AI.”

Scientific American

MIT researchers have found that user bias can drive interactions with AI chatbots, reports Nick Hilden for Scientific American. “When people think that the AI is caring, they become more positive toward it,” graduate student Pat Pataranutaporn explains. “This creates a positive reinforcement feedback loop where, at the end, the AI becomes much more positive, compared to the control condition. And when people believe that the AI was manipulative, they become more negative toward the AI—and it makes the AI become more negative toward the person as well.”

Scientific American

A new study by MIT researchers demonstrates how “machine-learning systems designed to spot someone breaking a policy rule—a dress code, for example—will be harsher or more lenient depending on minuscule-seeming differences in how humans annotated data that were used to train the system,” reports Ananya for Scientific American. “This is an important warning for a field where datasets are often used without close examination of labeling practices, and [it] underscores the need for caution in automated decision systems—particularly in contexts where compliance with societal rules is essential,” says Prof. Marzyeh Ghassemi.

Forbes

Prof. Jacob Andreas explored the concept of language-guided program synthesis at CSAIL’s Imagination in Action event, reports research affiliate John Werner for Forbes. “Language is a tool,” said Andreas during his talk. “Not just for training models, but actually interpreting them and sometimes improving them directly, again, in domains, not just involving languages (or) inputs, but also these kinds of visual domains as well.”

Science

In conversation with Matthew Hutson at Science, Prof. John Horton discusses the possibility of using chatbots in research in place of human participants. As he explains, such a change would be similar to the transition from in-person to online surveys: “People were like, ‘How can you run experiments online? Who are these people?’ And now it’s like, ‘Oh, yeah, of course you do that.’”

Forbes

Researchers from MIT have found that generative AI chatbots can improve the speed and quality of simple writing tasks, though they often lack factual accuracy, reports Richard Nieva for Forbes. “When we first started playing with ChatGPT, it was clear that it was a new breakthrough unlike anything we've seen before,” says graduate student Shakked Noy. “And it was pretty clear that it was going to have some kind of labor market impact.”