Topic

Electrical engineering and computer science (EECS)

Displaying 181 - 195 of 1057 news clips related to this topic.
Scientific American

A new study by MIT researchers demonstrates how “machine-learning systems designed to spot someone breaking a policy rule—a dress code, for example—will be harsher or more lenient depending on minuscule-seeming differences in how humans annotated data that were used to train the system,” reports Ananya for Scientific American. “This is an important warning for a field where datasets are often used without close examination of labeling practices, and [it] underscores the need for caution in automated decision systems—particularly in contexts where compliance with societal rules is essential,” says Prof. Marzyeh Ghassemi.

Popular Science

Popular Science reporter Andrew Paul writes that MIT researchers have developed a new long-range, low-power underwater communication system. Installing underwater communication networks “could help continuously measure a variety of oceanic datasets such as pressure, CO2, and temperature to refine climate change modeling,” writes Paul, “as well as analyze the efficacy of certain carbon capture technologies.”

Forbes

Forbes reporter Rob Toews spotlights Prof. Daniela Rus, director of CSAIL, and research affiliate Ramin Hasani and their work with liquid neural networks. “The ‘liquid’ in the name refers to the fact that the model’s weights are probabilistic rather than constant, allowing them to vary fluidly depending on the inputs the model is exposed to,” writes Toews.
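The idea Toews describes — model weights that vary with the input rather than staying fixed after training — can be illustrated with a toy single-unit sketch. This is only a hedged illustration of input-dependent dynamics, not the actual liquid time-constant network equations from the CSAIL work; the function name and parameters here are invented for the example.

```python
import math

def liquid_unit(x_seq, w=0.8, tau_base=1.0, dt=0.1):
    """Toy 'liquid' unit: integrate dh/dt = -h/tau(x) + w(x)*x, where both
    the decay rate and the effective input weight are modulated by the
    current input, instead of being constants fixed at training time."""
    h = 0.0
    for x in x_seq:
        gate = 1.0 / (1.0 + math.exp(-x))      # input-dependent modulation
        tau = tau_base / (1.0 + gate)          # time constant shifts with input
        h += dt * (-h / tau + (w * gate) * x)  # Euler step of the dynamics
    return h
```

The contrast with a conventional neuron is the `gate` term: a fixed-weight unit would apply `w * x` directly, while here the effective weight `w * gate` changes fluidly with each input, which is the property the article attributes to liquid networks.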

Forbes

Venti Technologies, which was co-founded by MIT researchers and alumni, is working to build autonomous vehicles for industrial and global supply chain hubs, reports Bruce Rogers for Forbes. “Working with the world's leading port operator provides Venti the opportunity to bring the economics of autonomous vehicles to over 60 ports globally,” writes Rogers. “These ports operate 24/7 requiring 2-3 shifts of human drivers.”

Nature

Nature contributor David Chandler writes about the late Prof. Edward Fredkin and his impact on computer science and physics. “Fredkin took things even further, concluding that the whole Universe could actually be seen as a kind of computer,” explains Chandler. “In his view, it was a ‘cellular automaton’: a collection of computational bits, or cells, that can flip states according to a defined set of rules determined by the states of the cells around them. Over time, these simple rules can give rise to all the complexities of the cosmos — even life.”
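The cellular automaton Chandler describes — cells that flip states according to a fixed rule determined by their neighbors — can be sketched in a few lines. This is a generic one-dimensional elementary automaton (Rule 110) for illustration only, not one of Fredkin's own models; the function name is invented for the example.

```python
def step(cells, rule=110):
    """Advance one generation of a 1-D cellular automaton: each cell's next
    state depends only on its left neighbor, itself, and its right neighbor
    (wrapping around), looked up in the 8-bit rule table."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right   # neighborhood code 0..7
        out.append((rule >> index) & 1)               # rule bit = next state
    return out

# Start from a single live cell and watch structure emerge over generations.
cells = [0] * 31
cells[15] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Despite the rule fitting in a single byte, iterating it produces intricate, non-repeating patterns — a small-scale version of the "simple rules giving rise to complexity" that Fredkin generalized to the Universe itself.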

The Boston Globe

Prof. Jessika Trancik speaks with Boston Globe reporter Aruni Soni about her new study, which finds that further reductions in the cost of solar energy will be accelerated by improvements in soft technology. “We found that the soft technology involved in solar energy really has not changed and hasn’t improved nearly as quickly as the hardware,” says Trancik. “These soft costs, in many systems, can be 50 percent or even more of the total cost of solar electricity.”

Popular Science

Using techniques inspired by kirigami, a Japanese paper-cutting art, MIT researchers have developed a “novel method to manufacture plate lattices – high performance materials useful in automotive and aerospace designs,” reports Andrew Paul for Popular Science. “The kirigami-augmented plate lattices withstood three times as much force as standard aluminum corrugation designs,” writes Paul. “Such variations show immense promise for lightweight, shock-absorbing sections needed within cars, planes, and spacecraft.”

MSNBC

Graduate students Martin Nisser and Marisa Gaetz co-founded Brave Behind Bars, a program designed to provide incarcerated individuals with coding and digital literacy skills to better prepare them for life after prison, reports Morgan Radford for MSNBC. Computers and coding skills “are really kind of paramount for fostering success in the modern workplace,” says Nisser.

The Guardian

Prof. D. Fox Harrell writes for The Guardian about the importance of ensuring AI systems are designed to “reflect the ethically positive culture we truly want.” Harrell emphasizes: “We need to be aware of, and thoughtfully design, the cultural values that AI is based on. With care, we can build systems based on multiple worldviews – and address key ethical issues in design such as transparency and intelligibility.”

Wired

Undergraduate student Isabella Struckman and Sofie Kupiec ’23 reached out to the first hundred signatories of the Future of Life Institute’s open letter calling for a pause on AI development to learn more about their motivations and concerns, reports Will Knight for Wired. “The duo’s write-up of their findings reveals a broad array of perspectives among those who put their name to the document,” writes Knight. “Despite the letter’s public reception, relatively few were actually worried about AI posing a looming threat to humanity.”

TechCrunch

Prof. Daniela Rus, director of CSAIL, speaks with TechCrunch reporter Brian Heater about liquid neural networks and how this emerging technology could impact robotics. “The reason we started thinking about liquid networks has to do with some of the limitations of today’s AI systems,” says Rus, “which prevent them from being very effective for safety-critical systems and robotics. Most of the robotics applications are safety critical.”

TechCrunch

Vaikkunth Mugunthan MS ’19, PhD ’22 and Christian Lau MS ’20, PhD ’22 co-founded DynamoFL – a software company that “offers software to bring large language models (LLMs) to enterprise and fine-tune those models on sensitive data,” reports Kyle Wiggers for TechCrunch. “Generative AI has brought to the fore new risks, including the ability for LLMs to ‘memorize’ sensitive training data and leak this data to malicious actors,” says Mugunthan. “Enterprises have been ill-equipped to address these risks, as properly addressing these LLM vulnerabilities would require recruiting teams of highly specialized privacy machine learning researchers to create a streamlined infrastructure for continuously testing their LLMs against emerging data security vulnerabilities.”

Boston.com

MIT researchers have developed a new tool called “PhotoGuard” that can help protect images from AI manipulation, reports Ross Cristantiello for Boston.com. The tool “is designed to make real images resistant to advanced models that can generate new images, such as DALL-E and Midjourney,” writes Cristantiello.

The Boston Globe

Ivan Sutherland PhD ’63, whose work “laid some of the foundations of the digital world that surrounds us today,” speaks with Boston Globe columnist Scott Kirsner about the importance of fun and play in advancing technological research. “You’re no good at things you think aren’t fun,” Sutherland said. If you want to expand the scope of what’s possible today, he noted, “you need to play around with stuff to understand what it will do, and what it won’t do.”