If art is how we express our humanity, where does AI fit in?
MIT postdoc Ziv Epstein SM ’19, PhD ’23 discusses issues arising from the use of generative AI to make art and other media.
Selecting the right method gives users a more accurate picture of how their model is behaving, so they are better equipped to correctly interpret its predictions.
A new study finds human supervisors have the potential to reduce barriers to deploying autonomous vehicles.
MIT student creates Tim the Beaver in virtual reality using the MIT.nano Immersion Lab.
The CSAIL scientist describes natural language processing research using state-of-the-art machine-learning models, and investigates how language can enhance other types of artificial intelligence.
Models trained using common data-collection techniques judge rule violations more harshly than humans would, researchers report.
Experts convene to peek under the hood of AI-generated code, language, and images, as well as the technology's capabilities, limitations, and future impact.
The method enables a model to determine its confidence in a prediction, while using no additional data and far fewer computing resources than other methods.
New fellows are working on health records, robot control, pandemic preparedness, brain injuries, and more.
This year's fellows will work across research areas including telemonitoring, human-computer interactions, operations research, AI-mediated socialization, and chemical transformations.
New data suggest most of the growth in the wage gap since 1980 comes from automation displacing less-educated workers.
Researchers make headway in solving a longstanding problem of balancing curious “exploration” versus “exploitation” of known pathways in reinforcement learning.
Mary Ellen Zurko pioneered user-centered security in the 1990s. Now she’s using those insights to help the nation thwart influence operations.
The faculty members will work together to advance the cross-cutting initiative of the MIT Schwarzman College of Computing.
“Interpretability methods” seek to shed light on how machine-learning models make predictions, but researchers say to proceed with caution.