
Topic: Computer science and technology



ShareAmerica

ShareAmerica reporter Lauren Monsen spotlights Prof. Dina Katabi for her work in advancing medicine with artificial intelligence. “Katabi develops AI tools to monitor patients’ breathing patterns, heart rate, sleep quality, and movements,” writes Monsen. “This data informs treatment for patients with diseases such as Parkinson’s, Alzheimer’s, Crohn’s, and ALS (amyotrophic lateral sclerosis), as well as Rett syndrome, a rare neurological disorder.”

Interesting Engineering

MIT researchers have developed a machine-learning accelerator chip to make health-monitoring apps more secure, reports Aman Tripathi for Interesting Engineering. “The researchers subjected this new chip to intensive testing, simulating real-world hacking attempts, and the results were impressive,” explains Tripathi. “Even after millions of attempts, they were unable to recover any private information. In contrast, stealing data from an unprotected chip took only a few thousand samples.”

Bloomberg

Bloomberg Opinion columnist Parmy Olson spotlights a new study by MIT researchers that finds AI chatbots can be highly persuasive when reinforced with facts and could potentially be used to help tackle conspiracy theories. “The scientists invited more than 2,000 people who believed in different conspiracy theories to summarize their positions to a chatbot — powered by OpenAI’s latest publicly available language model — and briefly debate them with the bot,” Olson writes. “On average, participants subsequently described themselves as 20% less confident in the conspiracy theory; their views remained softened even two months later.” 

Scientific American

Scientific American reporter Riis Williams explores how MIT researchers created “smart gloves” that have tactile sensors woven into the fabric to help teach piano and make other hands-on activities easier. “Hand-based movements like piano playing are normally really subjective and difficult to record and transfer,” explains graduate student Yiyue Luo. “But with these gloves we are actually able to track one person’s touch experience and share it with another person to improve their tactile learning process.”

Forbes

MIT and Google are offering a free Generative AI for Educators course “designed to help middle and high school teachers learn how to use generative AI tools to personalize instruction, develop creative lessons and save time on administrative tasks,” reports Jack Kelly for Forbes.

Nature

Nature reporter Amanda Heidt speaks with postdoctoral researcher Tigist Tamir about her experience using generative AI with attention-deficit hyperactivity disorder. “Whether I’m reading, writing or just making to-do lists, it’s very difficult for me to figure out what I want to say. One thing that helps is to just do a brain dump and use AI to create a boiled-down version,” Tamir explains. She adds, “I feel fortunate that I’m in this era where these tools exist.”

New Scientist

Postdoc Xuhai Xu and his colleagues have developed an AI program that delivers personalized pop-up reminders to help limit smartphone screen time, reports Jeremy Hsu for New Scientist. “A random notification to stop doomscrolling won’t always tear someone away from their phone,” writes Hsu. “But machine learning can personalize that intervention so it arrives at the moment when it is most likely to work.”

TechCrunch

Researchers at MIT have found that large language models often retrieve stored facts using simple linear functions, reports Kyle Wiggers for TechCrunch. “Even though these models are really complicated, nonlinear functions that are trained on lots of data and are very hard to understand, there are sometimes really simple mechanisms working inside them,” writes Wiggers.

Fortune

A new report by researchers from MIT and Boston Consulting Group (BCG) has uncovered “how AI-based machine learning and predictive analytics are super-powering key performance indicators (KPIs),” reports Sheryl Estrada for Fortune. “I definitely see marketing, manufacturing, supply chain, and financial folks using these value-added formats to upgrade their existing KPIs and imagine new ones,” says visiting scholar Michael Schrage.

New Scientist

FutureTech researcher Tamay Besiroglu speaks with New Scientist reporter Chris Stokel-Walker about the rapid rate at which large language models (LLMs) are improving. “While Besiroglu believes that this increase in LLM performance is partly due to more efficient software coding, the researchers were unable to pinpoint precisely how those efficiencies were gained – in part because AI algorithms are often impenetrable black boxes,” writes Stokel-Walker. “He also points out that hardware improvements still play a big role in increased performance.”

Boston Magazine

A number of MIT faculty and alumni – including Prof. Daniela Rus, Prof. Regina Barzilay, Research Affiliate Habib Haddad, Research Scientist Lex Fridman, Marc Raibert PhD '77, former Postdoc Rana El Kaliouby and Ray Kurzweil '70 – have been named key figures “at the forefront of Boston’s AI revolution,” reports Wyndham Lewis for Boston Magazine. These researchers are “driving progress and reshaping the way we live,” writes Lewis.

Forbes

In an article for Forbes, Sloan Research Scientist Ranjan Pal and Prof. Bodhibrata Nag of the Indian Institute of Management Calcutta highlight the risks associated with the rise of Internet of Things-driven smart cities and homes. “Unlike traditional catastrophic bond markets, where the (natural) catastrophe does not affect financial stability, a cyber-catastrophe can affect financial stability,” they write. “Hence, more information is needed by bond writing parties to screen cyber-risk exposure to guarantee no threat to financial stability.”

Bloomberg

Prof. David Autor speaks with Bloomberg’s Odd Lots podcast hosts Joe Weisenthal and Tracy Alloway about how AI could be leveraged to address inequality, emphasizing the policy choices governments will need to make to ensure the technology is beneficial to humans. “Automation is not the primary source of how innovation improves our lives,” says Autor. “Many of the things we do with new tools is create new capabilities that we didn’t previously have.”