The Scientist
In an effort to better understand how protein language models (PLMs) "think" and to judge their reliability, MIT researchers applied sparse autoencoders, a tool used to make large language models more interpretable. The findings "may help scientists better understand how PLMs come to certain conclusions and increase researchers' trust in them," writes Andrea Luis for The Scientist.