
Topic

Natural language processing


Displaying 1 - 11 of 11 news clips related to this topic.

Popular Mechanics

Researchers at CSAIL have created three “libraries of abstraction” – collections of abstractions within natural language that highlight the importance of everyday words in providing context and better reasoning for large language models, reports Darren Orf for Popular Mechanics. “The researchers focused on household tasks and command-based video games, and developed a language model that proposes abstractions from a dataset,” explains Orf. “When implemented with existing LLM platforms, such as GPT-4, AI actions like ‘placing chilled wine in a cabinet’ or ‘craft a bed’ (in the Minecraft sense) saw a big increase in task accuracy at 59 to 89 percent, respectively.”

Quanta Magazine

MIT researchers have developed a new procedure that uses game theory to improve the accuracy and consistency of large language models (LLMs), reports Steve Nadis for Quanta Magazine. “The new work, which uses games to improve AI, stands in contrast to past approaches, which measured an AI program’s success via its mastery of games,” explains Nadis. 

Forbes

Prof. Jacob Andreas explored the concept of language-guided program synthesis at CSAIL’s Imagination in Action event, reports research affiliate John Werner for Forbes. “Language is a tool,” said Andreas during his talk. “Not just for training models, but actually interpreting them and sometimes improving them directly, again, in domains, not just involving languages (or) inputs, but also these kinds of visual domains as well.”

Wired

Wired reporter Will Knight spotlights a study by researchers from MIT and other universities that finds judges are turning to Wikipedia for guidance when making legal decisions. “The researchers also found evidence that the use of Wikipedia reflects an already stretched system,” writes Knight. “The legal decisions that included Wikipedia-influenced citations were most often seen in the lower courts, which they suspect reflects how overworked the judges are.”

Independent

Researchers from CSAIL and elsewhere have found that Irish judges are using Wikipedia articles as a source in their rulings, reports Shane Phelan for the Independent. “This work shows that Wikipedia reaches even farther than that, into high-stakes, formalized processes like legal judgments,” says research scientist Neil Thompson. “The worst outcome would be for a judge’s reliance on Wikipedia to lead them to decide a case differently than they would have if they had read either an expert secondary source or the cited precedent itself.”

Popular Science

Researchers from CSAIL, Cornell University, and Maynooth University have released a study concluding that judges in Ireland are utilizing Wikipedia articles to help inform their decisions, reports Colleen Hagerty for Popular Science. Based on their findings, the researchers suggest “the legal community increases its efforts to monitor and fact-check legal information posted on Wikipedia.” 

STAT

A study co-authored by MIT researchers finds that algorithms based on clinical medical notes can predict the self-identified race of a patient, reports Katie Palmer for STAT. “We’re not ready for AI — no sector really is ready for AI — until they’ve figured out that the computers are learning things that they’re not supposed to learn,” says Principal Research Scientist Leo Anthony Celi.

The Daily Beast

Researchers at MIT and Harvard Medical School have created an artificial intelligence program that can accurately identify a patient’s race based on medical images, reports Tony Ho Tran for The Daily Beast. “The reason we decided to release this paper is to draw attention to the importance of evaluating, auditing, and regulating medical AI,” explains Principal Research Scientist Leo Anthony Celi.

Forbes

Recent MIT research has found a high number of errors in public datasets often used for training models, reports David Talby for Forbes. “An average of 3.3% errors were found in the test sets of 10 of the most widely used computer vision, natural language processing (NLP) and audio datasets,” writes Talby.

Quartz

MIT researchers are applying machine learning algorithms typically used for natural language processing to identify coronavirus variants, reports Brian Browdie for Quartz. “Besides being able to quantify the potential for mutations to escape, the research may pave the way for vaccines that broaden the body’s defenses against variants or that protect recipients against more than one virus, such as flu and the novel coronavirus, in a single shot,” writes Browdie. 

NPR

Shankar Vedantam of NPR reports on Dr. Boris Katz’s new research examining how errors in written English can reveal clues about other languages. “By analyzing the patterns of mistakes that native speakers of two languages make in English, the computer can say, look, these two languages might actually be related to one another,” Vedantam explains.