
In the Media

Media Outlet:
Motherboard
Description:

Motherboard reporter Tatyana Woodall writes that a new study co-authored by MIT researchers finds that AI models capable of learning new tasks from just a few examples build smaller models inside themselves to carry out those tasks. “Learning is entangled with [existing] knowledge,” graduate student Ekin Akyürek explains. “We show that it is possible for these models to learn from examples on the fly without any parameter update we apply to the model.”
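The phenomenon described here is often called in-context (or "few-shot") learning: the model's weights are never updated, and the new task is specified entirely by example input-output pairs placed in the prompt. The sketch below illustrates the idea under that assumption; `build_few_shot_prompt` is a hypothetical helper, not code from the study, and a real system would pass the resulting prompt to a large language model.

```python
# A minimal sketch of in-context ("few-shot") learning: no parameter update
# occurs; the task is conveyed purely through demonstrations in the prompt.

def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples and an unlabeled query into one prompt."""
    lines = [f"Input: {x} -> Output: {y}" for x, y in examples]
    lines.append(f"Input: {query} -> Output:")
    return "\n".join(lines)

# "Teach" a new task (reversing a word) using only a few demonstrations.
examples = [("cat", "tac"), ("dog", "god"), ("bird", "drib")]
prompt = build_few_shot_prompt(examples, "fish")
print(prompt)
# A capable language model would complete this prompt with "hsif",
# having inferred the task from the examples alone.
```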

Related News


Solving a machine-learning mystery

A new study shows how large language models like GPT-3 can learn a new task from just a few examples, without the need for any new training data.