
Topic: Machine learning


Displaying 61–75 of 640 news clips related to this topic.

TechCrunch

Birago Jones SM '12 and Karthik Dinakar SM '12, PhD '17 co-founded Pienso – an AI platform that “lets users build and deploy models without having to write code,” reports Kyle Wiggers for TechCrunch. “Pienso’s flexible, no-code interface allows teams to train models directly using their own company’s data,” says Jones. “This alleviates the privacy concerns of using … models, and also is more accurate, capturing the nuances of each individual company.”

Fast Company

Research Scientist Eva Ponce speaks with Fast Company to explain how AI will impact supply chains. “One of the most common reasons I have seen companies fail when implementing disruptive technologies like AI is when they are rushing, with a lack of clear vision,” says Ponce.

Mashable

Mashable reporter Adele Walton spotlights Joy Buolamwini PhD '22 and her work in uncovering racial bias in digital technology. “Buolamwini created what she called the Aspire Mirror, which used face-tracking software to register the movements of the user and overlay them onto an aspirational figure,” explains Walton. “When she realised the facial recognition wouldn’t detect her until she was holding a white mask over her face, she was confronted face on with what she termed the ‘coded gaze.’ She soon founded the Algorithmic Justice League, which exists to prevent AI harms and increase accountability.”

Fast Company

Writing for Fast Company, Senior Lecturer Guadalupe Hayes-Mota '08, SM '16, MBA '16 shares methods to address the influence of AI in healthcare. “Despite these advances [of AI in healthcare], the full spectrum of AI’s potential remains largely untapped,” explains Hayes-Mota. “Systemic hurdles such as data privacy concerns, the absence of standardized data protocols, regulatory complexities, and ethical dilemmas are compounded by an inherent resistance to change within the healthcare profession. These barriers underscore the urgent need for transformative action from all stakeholders to fully harness AI’s capabilities.”

Fast Company

A new study conducted by researchers at MIT and elsewhere has found that large language models (LLMs) can forecast future events as accurately as humans can, reports Chris Stokel-Walker for Fast Company. “Accurate forecasting of future events is very important to many aspects of human economic activity, especially within white collar occupations, such as those of law, business and policy,” says postdoctoral fellow Peter S. Park.

The Economist

Prof. Daniela Rus, director of CSAIL, speaks with The Economist’s Babbage podcast about the history and future of artificial neural networks and their role in large language models. “The early artificial neuron was a very simple mathematical model,” says Rus. “The computation was discrete and very simple, essentially a step function. You’re either above or below a value.”  
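The step-function neuron Rus describes, which fires only when its weighted input sum crosses a threshold, can be sketched in a few lines of Python (a minimal illustration; the weights and threshold below are arbitrary, not from any particular model):

```python
def step_neuron(inputs, weights, threshold):
    """Early-style artificial neuron: output 1 if the weighted sum
    of inputs meets the threshold, otherwise output 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A logical AND built from a single step neuron (illustrative parameters):
print(step_neuron([1, 1], [1, 1], threshold=2))  # above the threshold: 1
print(step_neuron([1, 0], [1, 1], threshold=2))  # below the threshold: 0
```

The discrete "above or below a value" behavior Rus mentions is exactly this threshold comparison; modern networks replace it with smooth activation functions so the model can be trained by gradient descent.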

Bloomberg

Wardah Inam SM '12, PhD '16 founded Overjet, an AI platform that helps dentists “diagnose diseases from scans and other data,” reports Saritha Rai for Bloomberg. “Dentistry was more art than science, and I wanted to bring technology and AI to help dentists make objective decisions,” says Inam. “We began building and then improving our AI systems with tens of millions of pieces of data, including X-rays, historical information, dentist notes, and periodontal charts.”

Government Technology

Senior Lecturer Luis Videgaray speaks with Government Technology reporter Nikki Davidson about concerns facing emerging AI programs and initiatives. Videgaray underscores the importance of finding vendors “who are willing to protect the data in a way that is appropriate and also provides the state or local government agency with the required degree of transparency about the workings of the model, the data that was used for training and how that data will interact with the data supplied by the customer.”

Associated Press

Prof. Philip Isola and Prof. Daniela Rus, director of CSAIL, speak with Associated Press reporter Matt O’Brien about AI-generated images and videos. Rus says the computing resources required for AI video generation are “significantly higher than for still image generation” because “it involves processing and generating multiple frames for each second of video.”

The Boston Globe

Prof. Daniela Rus, director of CSAIL, speaks with Boston Globe reporter Evan Sellinger about her new book, “The Heart and the Chip: Our Bright Future With Robots,” in which she makes the case that in the future robots and humans will be able to team up to create a better world. “I want to highlight that machines don’t have to compete with humans, because we each have different strengths. Humans have wisdom. Machines have speed, can process large numbers, and can do many dull, dirty, and dangerous tasks,” Rus explains. “I see robots as helpers for our jobs. They’ll take on the routine, repetitive tasks, ensuring human workers focus on more complex and meaningful work.”

Politico

Researchers at MIT and elsewhere have developed a machine-learning model that can identify which drugs should not be taken together, reports Politico. “The researchers built a model to measure how intestinal tissue absorbed certain commonly used drugs,” they write. “They then trained a machine-learning algorithm based on their new data and existing drug databases, teaching the new algorithm to predict which drugs would interact with which transporter proteins.”
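The final step described above, flagging drug pairs that are predicted to compete for the same transporter protein, can be sketched with toy data (all drug names and predicted transporter assignments below are hypothetical placeholders, not results from the study):

```python
# Hypothetical output of a trained model: each drug mapped to the
# intestinal transporter proteins it is predicted to interact with.
predicted_transporters = {
    "drug_a": {"BCRP"},
    "drug_b": {"BCRP", "MRP2"},
    "drug_c": {"PEPT1"},
}

def potential_interactions(transporters):
    """Return drug pairs that share at least one predicted transporter,
    i.e. candidates that may compete for absorption."""
    drugs = sorted(transporters)
    pairs = []
    for i, d1 in enumerate(drugs):
        for d2 in drugs[i + 1:]:
            if transporters[d1] & transporters[d2]:
                pairs.append((d1, d2))
    return pairs

print(potential_interactions(predicted_transporters))
# [('drug_a', 'drug_b')]  (both are predicted substrates of the same transporter)
```

This is only the downstream screening logic; the study's contribution is the upstream model that predicts the drug-to-transporter map from tissue absorption data.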

The Daily Beast

MIT researchers have developed a new technique “that could allow most large language models (LLMs) like ChatGPT to retain memory and boost performance,” reports Tony Ho Tran for the Daily Beast. “The process is called StreamingLLM and it allows for chatbots to perform optimally even after a conversation goes on for more than 4 million words,” explains Tran.
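StreamingLLM's central idea is to keep the first few tokens of the conversation (so-called "attention sinks") in the key-value cache alongside a rolling window of the most recent tokens, evicting everything in between. A minimal sketch of that eviction policy (cache sizes here are illustrative, far smaller than real configurations):

```python
def streaming_cache(tokens, n_sinks=4, window=8):
    """StreamingLLM-style cache eviction: retain the first n_sinks
    tokens plus the most recent `window` tokens, dropping the middle."""
    if len(tokens) <= n_sinks + window:
        return list(tokens)
    return list(tokens[:n_sinks]) + list(tokens[-window:])

tokens = list(range(20))
print(streaming_cache(tokens))
# [0, 1, 2, 3, 12, 13, 14, 15, 16, 17, 18, 19]
```

Because the cache size stays bounded no matter how long the conversation runs, memory use and per-token cost stay flat, which is what lets the chatbot keep performing after millions of words.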

The Economist

In an article co-authored for The Economist, Senior Lecturer Donald Sull explores the impact of artificial intelligence and large language models (LLMs) on corporate culture. “Leaders who do adopt AI for cultural insights can use these to make their employees happier, lower the odds of reputational disasters and, ultimately, boost their profits,” writes Sull. “Measurement is not the only piece of the ‘successful culture’ puzzle, but it is a crucial one. Culture has always been an enigma at the heart of organizational performance: undoubtedly important, but inscrutable. With AI, meaningful progress can be made in deciphering it.”

TechCrunch

Corey Jaskolski SM '02 founded Synthetaic, a software company that uses AI to “automate the analysis of large datasets, namely satellite imagery and video, not containing labels,” reports Kyle Wiggers for TechCrunch. “Synthetaic’s technology offers a transformative approach to AI model training and creation, addressing the critical needs of technical decision makers,” says Jaskolski.

MSNBC

Joy Buolamwini PhD '22 speaks with MSNBC reporter Daniela Pierre-Bravo about her new book, “Unmasking AI: My Mission to Protect What Is Human in a World of Machines,” which explores the intersection of AI development and the “dangers of bias in its algorithmic systems.” Buolamwini emphasizes: “We need legislation — at the federal level — because the legislation then puts in the guard rails for the tech companies. And also, we need to think about AI governance globally. I do think that all of our stories matter. When you share your experience with AI or your questions about it, you encourage other people to share their stories.”