
Topic

Human-computer interaction


Displaying 1 - 15 of 85 news clips related to this topic.

Financial Times

Financial Times reporter Melissa Heikkilä spotlights how MIT researchers have uncovered evidence that increased use of AI tools by medical professionals risks “leading to worse health outcomes for women and ethnic minorities.” One study found that numerous AI models “recommended a much lower level of care for female patients,” writes Heikkilä. “A separate study by the MIT team showed that OpenAI’s GPT-4 and other models also displayed answers that had less compassion towards Black and Asian people seeking support for mental health problems.” 

Interesting Engineering

Researchers at MIT have “developed an antenna that can adjust its frequency range by physically changing its shape,” reports Mrigakshi Dixit for Interesting Engineering. “Instead of standard, rigid metal, this antenna is made from metamaterials — special engineered materials whose properties are based on their geometric structure,” explains Dixit. “It could be suitable for applications like transferring energy to wearable devices, tracking motion for augmented reality, and enabling wireless communication.”

NBC Boston

Prof. Andrew Lo speaks with NBC Boston reporter Daniela Gonzalez about how AI tools could be used as a starting point to help people manage their monthly expenses and improve their savings strategies. Lo notes that AI tools “can tell you, given the kind of things you're looking to purchase, where the various deals might be.” He added that “once you get the feedback, you have to make sure that what you're getting is legit, versus what they call hallucinations that large language models are likely to do on occasion.”

Bloomberg

Prof. Andrew Lo speaks with Bloomberg reporter Lu Wang about how AI tools could be applied to the financial services industry, working alongside humans to help manage money, balance risk, tailor strategies and possibly even act in a client’s best interest. “I believe that within the next five years we’re going to see a revolution in how humans interact with AI,” says Lo. He adds that “the financial services industry has extra layers of protection that needs to be built before these tools can be useful.”

NBC News

Researchers at MIT have identified a variety of obstacles to using AI in software development, reports Rob Wile for NBC News. They have found “the main obstacles come when AI programs are asked to develop code at scale, or with more complex logic,” writes Wile.

Forbes

Forbes contributor Tanya Fileva spotlights how MIT CSAIL researchers have developed a system called Air-Guardian, an “AI-enabled copilot that monitors a pilot’s gaze and intervenes when their attention is lacking.” Fileva notes that “in tests, the system ‘reduced the risk level of flights and increased the success rate of navigating to target points’—demonstrating how AI copilots can enhance safety by assisting with real-time decision-making.”

TN Tecno

[Originally in Spanish] MIT researchers have developed a new technique for teaching robots that incorporates more human input, reports Uriel Bederman for TN Tecno. “We can’t expect non-technical people to collect data and fine-tune a neural network model," explains graduate student Felix Yanwei Wang. "Consumers will expect the robot to work right out of the box, and if it doesn’t, they’ll want an intuitive way to customize it. That’s the challenge we’re addressing in this work."

The Wall Street Journal

Postdoctoral Associate Pat Pataranutaporn speaks with Wall Street Journal reporter Heidi Mitchell about his work developing Future You, an online interactive AI platform that “allows users to create a virtual older self—a chatbot that looks like an aged version of the person and is based on an AI text system known as a large language model, then personalized with information that the user puts in.” Pataranutaporn explains: “I want to encourage people to think in the long term, to be less anxious about an unknown future so they can live more authentically today.” 

Knowable Magazine

Knowable Magazine reporter Katherine Ellison spotlights Future You, a new program developed by researchers at MIT that “offers young people a chance to chat with an online, AI-generated simulation of themselves at age 60.” 

VICE

Researchers at MIT and elsewhere have developed “Future You” – a generative AI platform that allows users to talk with an AI-generated simulation of a potential future self, reports Sammi Caramela for Vice. The research team hopes “talking to a relatable, virtual version of your future self about your current stressors, future goals, and your beliefs can improve anxiety, quell any obsessive thoughts, and help you make better decisions,” writes Caramela.

TechCrunch

Researchers at MIT have found that commercially available AI models “were more likely to recommend calling police when shown Ring videos captured in minority communities,” reports Kyle Wiggers for TechCrunch. “The study also found that, when analyzing footage from majority-white neighborhoods, the models were less likely to describe scenes using terms like ‘casing the property’ or ‘burglary tools,’” writes Wiggers.

Fortune

Researchers at MIT have developed “Future You,” a generative AI chatbot that enables users to speak with potential older versions of themselves, reports Sharon Goldman for Fortune. The tool “uses a large language model and information provided by the user to help young people ‘improve their sense of future self-continuity, a psychological concept that describes how connected a person feels with their future self,’” writes Goldman. The researchers explained that the tool cautions users that its results are only one potential version of their future self, and that they can still change their lives.

Forbes

In an article for Forbes, Robert Clark spotlights how MIT researchers developed a new model that predicts the irrational behavior of humans and AI agents operating under suboptimal conditions. “The goal of the study was to better understand human behavior to improve collaboration with AI,” Clark writes.

TechCrunch

TechCrunch reporter Kyle Wiggers writes that MIT researchers have developed a new tool, called SigLLM, that uses large language models to flag problems in complex systems. In the future, SigLLM could be used to “help technicians flag potential problems in equipment like heavy machinery before they occur.” 

NPR

Prof. Sherry Turkle joins Manoush Zomorodi of NPR’s "Body Electric" to discuss her latest research on human relationships with AI chatbots, which she says can be beneficial but come with drawbacks since artificial relationships could set unrealistic expectations for real ones. "What AI can offer is a space away from the friction of companionship and friendship,” explains Turkle. “It offers the illusion of intimacy without the demands. And that is the particular challenge of this technology."