
Electrical engineering and computer science (EECS)



TechCrunch

Anna Sun '23 co-founded Nowadays, an AI-powered event planner, reports Julie Bort for TechCrunch. Nowadays “emails venues, caterers, and the like to gather bids,” explains Bort. “It will even make phone calls to nudge a response to unanswered emails. It then organizes the information and presents it to the event planner, who can make decisions and sign contracts.” 

Wired

Using a new technique developed to examine the risks of multimodal large language models used in robots, MIT researchers were able to have a “simulated robot arm do unsafe things like knocking items off a table or throwing them by describing actions in ways that the LLM did not recognize as harmful and reject,” writes Will Knight for Wired. “With LLMs a few wrong words don’t matter as much,” explains Prof. Pulkit Agrawal. “In robotics a few wrong actions can compound and result in task failure more easily.”

New York Times

Prof. Armando Solar-Lezama speaks with New York Times reporter Sarah Kessler about the future of coding jobs, noting that AI systems still lack many essential skills. “When you’re talking about more foundational skills, knowing how to reason about a piece of code, knowing how to track down a bug across a large system, those are things that the current models really don’t know how to do,” says Solar-Lezama.

Financial Times

Prof. Daniela Rus, director of CSAIL, and Prof. Russ Tedrake speak with the Financial Times about how advances in AI have made it possible for robots to learn new skills and perform complex tasks. “All these cool things that we only dreamed of, we can now begin to realize,” says Rus. “Now we have to make sure that what we do with all these superpowers is good.”

Forbes

Forbes contributor Michael T. Nietzel spotlights the newest cohort of Rhodes Scholars, which includes Yiming Chen '24, Wilhem Hector, Anushka Nair, and David Oluigbo from MIT. Nietzel notes that Oluigbo has “published numerous peer-reviewed articles and conducts research on applying artificial intelligence to complex medical problems and systemic healthcare challenges.” 

Associated Press

Yiming Chen '24, Wilhem Hector, Anushka Nair, and David Oluigbo have been named 2025 Rhodes Scholars, report Brian P. D. Hannon and John Hanna for the Associated Press. Undergraduate student David Oluigbo, one of the four honorees, has “volunteered at a brain research institute and the National Institutes of Health, researching artificial intelligence in health care while also serving as an emergency medical technician,” write Hannon and Hanna.

Forbes

Researchers at MIT have developed a new AI model capable of assessing a patient’s risk of pancreatic cancer, reports Erez Meltzer for Forbes. “The model could potentially expand the group of patients who can benefit from early pancreatic cancer screening from 10% to 35%,” explains Meltzer. “These kinds of predictive capabilities open new avenues for preventive care.” 

Craft in America

Craft in America visits Prof. Erik Demaine and Martin Demaine of CSAIL to learn more about their work with computational origami. “Computational origami is quite useful for the mathematical problems we are trying to solve,” Prof. Erik Demaine explains. “We try to integrate the math and the art together.”

TechCrunch

Neural Magic, an AI optimization startup co-founded by Prof. Nir Shavit and former Research Scientist Alex Matveev, aims to “process AI workloads on processors and GPUs at speeds equivalent to specialized AI chips,” reports Kyle Wiggers for TechCrunch. “By running models on off-the-shelf processors, which usually have more available memory, the company’s software can realize these performance gains,” explains Wiggers. 

New Scientist

Researchers at MIT have developed a robot capable of assembling “building blocks called voxels to build an object with almost any shape,” reports Alex Wilkins for New Scientist. “You can get furniture-scale objects really fast in a very sustainable way, because you can reuse these modular components and ask a robot to reassemble them into different large-scale objects,” says graduate student Alexander Htet Kyaw.

TechCrunch

Michael Truell '21, Sualeh Asif '22, Arvid Lunnemar '22, and Aman Sanger '22 co-founded Anysphere, an AI startup working on developing Cursor, an AI-powered coding assistant, reports Marina Temkin for TechCrunch.

Forbes

Researchers at MIT have developed a “new type of transistor using semiconductor nanowires made up of gallium antimonide and indium arsenide,” reports Alex Knapp for Forbes. “The transistors were designed to take advantage of a property called quantum tunneling to move electricity through transistors,” explains Knapp.

New Scientist

Researchers at MIT have developed a new virtual training program for four-legged robots by taking “popular computer simulation software that follows the principles of real-world physics and inserting a generative AI model to produce artificial environments,” reports Jeremy Hsu for New Scientist. “Despite never being able to ‘see’ the real world during training, the robot successfully chased real-world balls and climbed over objects 88 per cent of the time after the AI-enhanced training,” writes Hsu. “When the robot relied solely on training by a human teacher, it only succeeded 15 per cent of the time.”

Mashable

Graduate student Aruna Sankaranarayanan speaks with Mashable reporter Cecily Mauran about the impact of political deepfakes and the importance of AI literacy, noting that the fabrication of important figures who aren’t as well known is one of her biggest concerns. “Fabrication coming from them, distorting certain facts, when you don’t know what they look like or sound like most of the time, that’s really hard to disprove,” says Sankaranarayanan.

TechCrunch

Researchers at MIT have developed a new model for training robots dubbed Heterogeneous Pretrained Transformers (HPT), reports Brian Heater for TechCrunch. The new model “pulls together information from different sensors and different environments,” explains Heater. “A transformer was then used to pull together the data into training models. The larger the transformer, the better the output. Users then input the robot design, configuration, and the job they want done.”