Explained: Generative AI’s environmental impact
Rapid development and deployment of powerful generative AI models come with environmental consequences, including increased electricity demand and water consumption.
Assistant Professor Manish Raghavan wants computational techniques to help solve societal problems.
A professor who develops technologies that push the envelope of what is possible in photonics and electronic devices succeeds Joel Voldman.
The advance holds the promise of reducing error-correction resource overhead.
With their recently developed neural network architecture, MIT researchers can wring more information out of electronic structure calculations.
As the use of generative AI continues to grow, Lincoln Laboratory's Vijay Gadepally describes what researchers and consumers can do to help mitigate its environmental impact.
Inspired by the human vocal tract, a new AI model can produce and understand vocal imitations of everyday sounds. The method could help build new sonic interfaces for entertainment and education.
The Thermochromorph printmaking technique developed by CSAIL researchers allows images to transition into each other through changes in temperature.
Biodiversity researchers tested vision systems on how well they could retrieve relevant nature images. More advanced models performed well on simple queries but struggled with more research-specific prompts.
Five MIT faculty and staff, along with 19 additional alumni, are honored for electrical engineering and computer science advances.
MIT engineers developed AI frameworks to identify evidence-driven hypotheses that could advance the design of biologically inspired materials.
With models like AlphaFold3 limited to academic research, the team built an equivalent alternative to encourage innovation more broadly.
Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.
The “PRoC3S” method helps an LLM create a viable action plan by testing each step in a simulation. This strategy could eventually help in-home robots complete more ambiguous chore requests.
In a recent commentary, a team from MIT, Equality AI, and Boston University highlights the gaps in regulation for AI models and non-AI algorithms in health care.