Media Outlet: New Scientist
Publication Date:
Description:
FutureTech researcher Tamay Besiroglu speaks with New Scientist reporter Chris Stokel-Walker about the rapid rate at which large language models (LLMs) are improving. “While Besiroglu believes that this increase in LLM performance is partly due to more efficient software coding, the researchers were unable to pinpoint precisely how those efficiencies were gained – in part because AI algorithms are often impenetrable black boxes,” writes Stokel-Walker. “He also points out that hardware improvements still play a big role in increased performance.”