
Topic

MIT-IBM Watson AI Lab



The Daily Beast

MIT researchers have developed a new technique “that could allow most large language models (LLMs) like ChatGPT to retain memory and boost performance,” reports Tony Ho Tran for the Daily Beast. “The process is called StreamingLLM and it allows for chatbots to perform optimally even after a conversation goes on for more than 4 million words,” explains Tran.
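The core idea reported for StreamingLLM is that a model can keep performing over very long conversations by retaining a few initial "attention sink" tokens plus a sliding window of recent context, rather than the full history. A minimal sketch of that cache-eviction policy, assuming the cache is just a list (names like `n_sink` and `window` are illustrative, not taken from the paper's code):

```python
def evict(cache, n_sink=4, window=1024):
    """Keep the first n_sink 'attention sink' entries plus the most
    recent `window` entries; drop everything in between."""
    if len(cache) <= n_sink + window:
        return cache
    return cache[:n_sink] + cache[-window:]

# A cache of 10,000 entries shrinks to 4 sinks + a 1,024-entry window.
cache = list(range(10_000))
trimmed = evict(cache)
assert trimmed[:4] == [0, 1, 2, 3]                    # sinks preserved
assert trimmed[4:] == list(range(8_976, 10_000))      # recent window preserved
assert len(trimmed) == 1_028
```

The point of the design is that memory use stays bounded no matter how many words the conversation accumulates.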

TechCrunch

MIT researchers have used machine learning to uncover the kinds of sentences that are most likely to activate the brain’s key language processing centers, report Kyle Wiggers and Devin Coldewey for TechCrunch. The model “was able to predict for novel sentences whether they would be taxing on human cognition or not,” they explain.

Scientific American

Researchers from MIT and elsewhere have developed a new AI technique for teaching robots to pack items into a limited space while adhering to a range of constraints, reports Nick Hilden for Scientific American. “We want to have a learning-based method to solve constraints quickly because learning-based [AI] will solve faster, compared to traditional methods,” says graduate student Zhutian “Skye” Yang.

TechCrunch

Researchers from MIT and Harvard have explored astrocytes, a group of brain cells, from a computational perspective and developed a mathematical model that shows how they can be used to build a biological transformer, reports Kyle Wiggers for TechCrunch. “The brain is far superior to even the best artificial neural networks that we have developed, but we don’t really know exactly how the brain works,” says research staff member Dmitry Krotov. “There is scientific value in thinking about connections between biological hardware and large-scale artificial intelligence networks. This is neuroscience for AI and AI for neuroscience.”

Popular Science

MIT researchers have developed SoftZoo, “an open framework platform that simulated a variety of 3D model animals performing specific tasks in multiple environmental settings,” reports Andrew Paul for Popular Science. “This computational approach to co-designing the soft robot bodies and their brains (that is, their controllers) opens the door to rapidly creating customized machines that are designed for a specific task,” says CSAIL director, Prof. Daniela Rus.

TechCrunch

Researchers at MIT have developed “SoftZoo,” a platform designed to “study the physics, look and locomotion and other aspects of different soft robot models,” reports Brian Heater for TechCrunch. “Dragonflies can perform very agile maneuvers that other flying creatures cannot complete because they have special structures on their wings that change their center of mass when they fly,” says graduate student Tsun-Hsuan Wang. “Our platform optimizes locomotion the same way a dragonfly is naturally more adept at working through its surroundings.”

WHDH 7

Researchers at MIT have created a four-legged robot called DribbleBot, reports Caroline Goggin for WHDH. The robot “can dribble a soccer ball under the same conditions as humans, using onboard sensors to travel across different types of terrain.”

Popular Science

Popular Science reporter Andrew Paul spotlights how researchers from MIT CSAIL have developed a soccer-playing robot, dubbed DribbleBot, that can handle a variety of real-world terrains. “DribbleBot showcases extremely impressive strides in articulation and real-time environmental analysis. Using a combination of onboarding computing and sensing, the team’s four-legged athlete can reportedly handle gravel, grass, sand, snow, and pavement, as well as pick itself up if it falls.”

TechCrunch

MIT researchers have created “Dribblebot,” a four-legged robot capable of playing soccer across varying terrain, reports Brian Heater for TechCrunch.

Boston.com

Researchers at MIT have created a four-legged robot capable of dribbling a soccer ball and running across a variety of terrains, reports Ross Cristantiello for Boston.com. “Researchers hope that they will be able to teach the robot how to lift a ball over a step in the future,” writes Cristantiello. “They will also explore how the technology behind DribbleBot can be applied to other robots, allowing machines to quickly transport a range of objects around outside using legs and arms.”

TechCrunch

MIT researchers have developed Robust MADER, an updated version of a previous system developed in 2020 to help drones avoid in-air collisions, reports Brian Heater for TechCrunch. “The new version adds in a delay before setting out on a new trajectory,” explains Heater. “That added time will allow it to receive and process information from fellow drones and adjust as needed.”
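The mechanism Heater describes — plan a trajectory, then wait through a short delay to hear peers' broadcasts before committing — can be sketched as a check loop. All names here are illustrative toys, not the actual MADER implementation:

```python
import time
from collections import deque

class Inbox:
    """Toy mailbox for trajectories broadcast by other drones."""
    def __init__(self):
        self._msgs = deque()
    def post(self, msg):
        self._msgs.append(msg)
    def drain(self):
        while self._msgs:
            yield self._msgs.popleft()

def conflicts(a, b):
    """Toy conflict test: trajectories clash if they share a waypoint."""
    return bool(set(a) & set(b))

def commit_trajectory(plan, inbox, delay=0.05):
    """Propose `plan`, then spend `delay` seconds processing peer
    trajectories as they arrive; commit only if none conflict."""
    deadline = time.monotonic() + delay
    while time.monotonic() < deadline:
        for peer in inbox.drain():
            if conflicts(plan, peer):
                return None  # conflict heard in time: abort and replan
        time.sleep(0.005)
    return plan

inbox = Inbox()
inbox.post([(5, 5), (6, 6)])  # a peer claims waypoints far from ours
assert commit_trajectory([(1, 1), (2, 2)], inbox) == [(1, 1), (2, 2)]
inbox.post([(2, 2), (3, 3)])  # this peer's path overlaps ours
assert commit_trajectory([(1, 1), (2, 2)], inbox) is None
```

The delay trades a little latency for the chance to cancel a trajectory before two drones commit to conflicting paths.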

Fast Company

Researchers from the MIT-IBM Watson AI Lab and the Harvard Natural Language Processing Group developed the Giant Language model Test Room (GLTR), an algorithm that attempts to detect if text was written by a bot, reports Megan Morrone for Fast Company. “Using the ‘it takes one to know one’ method, if the GLTR algorithm can predict the next word in a sentence, then it will assume that sentence has been written by a bot,” explains Morrone.
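The "it takes one to know one" heuristic Morrone describes can be sketched as: rank each observed word among a language model's predictions, and if most words fall within the model's top choices, flag the text as likely machine-generated. A toy illustration with stand-in predictors (the real GLTR uses ranks from an actual language model; `top_k` and the rank values here are made up):

```python
def suspicion_score(tokens, predict_ranks, top_k=10):
    """Fraction of tokens whose rank under the model is within top_k.
    A high fraction means the text looks like the model's own output."""
    ranks = predict_ranks(tokens)
    in_top = sum(1 for r in ranks if r < top_k)
    return in_top / len(ranks)

# Stand-in for a real LM: pretend every token was highly predictable...
bot_ranks = lambda toks: [0] * len(toks)
# ...versus a text where half the word choices were surprising.
human_ranks = lambda toks: [0, 500] * (len(toks) // 2)

tokens = "the cat sat on the mat".split()
assert suspicion_score(tokens, bot_ranks) == 1.0
assert suspicion_score(tokens, human_ranks) == 0.5
```

Human writing tends to include low-probability word choices, which is what pulls the score down in the second case.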

TechCrunch

MIT researchers have developed new hardware that offers faster computation for artificial intelligence with less energy, reports Kyle Wiggers for TechCrunch. “The researchers’ processor uses ‘protonic programmable resistors’ arranged in an array to ‘learn’ skills,” explains Wiggers.

New Scientist

Postdoctoral researcher Murat Onen and his colleagues have created “a nanoscale resistor that transmits protons from one terminal to another,” reports Alex Wilkins for New Scientist. “The resistor uses powerful electric fields to transport protons at very high speeds without damaging or breaking the resistor itself, a problem previous solid-state proton resistors had suffered from,” explains Wilkins.

The New York Times

In an article for The New York Times exploring whether humans are the only species able to comprehend geometry, Siobhan Roberts spotlights Prof. Josh Tenenbaum’s approach to exploring how humans can extract so much information from minimal data, time, and energy. “Instead of being inspired by simple mathematical ideas of what a neuron does, it’s inspired by simple mathematical ideas of what thinking is,” says Tenenbaum.