Enabling privacy-preserving AI training on everyday devices
A new method could bring more accurate and efficient AI models to high-stakes applications like health care and finance, even in under-resourced settings.
Ultra-efficient chip design enables extremely strong cryptography algorithms to run on energy-constrained edge devices.
By enabling two chips to authenticate each other using a shared fingerprint, this technique can improve privacy and energy efficiency.
New research demonstrates how AI models can be tested to ensure they don’t cause harm by revealing anonymized patient health data.
A new study shows public views on data privacy vary according to how the data are used, who benefits, and other conditions.
The approach maintains an AI model’s accuracy while ensuring attackers can’t extract secret information.
A first history of the document security technology, co-authored by MIT Libraries’ Jana Dambrogio, provides new tools for interdisciplinary research.
In a new MIT course co-taught by EECS and philosophy professors, students tackle moral dilemmas of the digital age.
New “Oreo” method from MIT CSAIL researchers removes footprints that reveal where code is stored before a hacker can see them.
The consortium will bring researchers and industry together to focus on impact.
The technique leverages quantum properties of light to guarantee security while preserving the accuracy of a deep-learning model.
PhD student Mariel García-Montes researches the internet’s far-reaching impact on society, especially regarding privacy and young people.
Researchers developed an easy-to-use tool that enables an AI practitioner to find data that suits the purpose of their model, which could improve accuracy and reduce bias.
AI agents could soon become indistinguishable from humans online. Could “personhood credentials” protect people against digital imposters?
Researchers have developed a security solution for power-hungry AI models that offers protection against two common attacks.