Wired
A new study by researchers at MIT suggests that “the biggest and most computationally intensive AI models may soon offer diminishing returns compared to smaller models,” reports Will Knight for Wired. “By mapping scaling laws against continued improvements in model efficiency, the researchers found that it could become harder to wring leaps in performance from giant models, whereas efficiency gains could make models running on more modest hardware increasingly capable over the next decade.”
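Roughly speaking, scaling laws describe how a model's loss falls as a power law in training compute, while algorithmic efficiency gains act like extra effective compute for a given hardware budget. The toy sketch below illustrates that interplay; it is not the MIT study's model or code, and the power-law constants, compute budgets, and 2x-per-year efficiency multiplier are all assumed for illustration.

```python
# Illustrative sketch only -- assumed constants, not the study's methodology.
# Assumes a generic power-law scaling curve, loss(C) = E + k * C**(-gamma),
# and a hypothetical 2x-per-year algorithmic-efficiency gain.

E, k, gamma = 1.7, 50.0, 0.3      # assumed irreducible loss, scale, exponent
efficiency_gain_per_year = 2.0    # hypothetical efficiency multiplier

def loss(effective_compute: float) -> float:
    """Loss predicted by the assumed power-law scaling curve."""
    return E + k * effective_compute ** (-gamma)

frontier_compute = 1e9   # arbitrary units: a giant model trained today
modest_compute = 1e7     # arbitrary units: a smaller model on modest hardware

for year in range(11):
    # Efficiency gains behave like extra compute for the modest-hardware model.
    effective = modest_compute * efficiency_gain_per_year ** year
    print(f"year {year:2d}: modest-model loss {loss(effective):.3f} "
          f"vs. frontier loss {loss(frontier_compute):.3f}")
```

Under these assumed numbers, the smaller model's effective compute overtakes the static frontier budget within about a decade, which is the qualitative pattern the article describes.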