Video

Displaying 1–15 of 19 news clips related to this topic.

Associated Press

Prof. Philip Isola and Prof. Daniela Rus, director of CSAIL, speak with Associated Press reporter Matt O’Brien about AI generated images and videos. Rus says the computing resources required for AI video generation are “significantly higher than for still image generation” because “it involves processing and generating multiple frames for each second of video.”

The Boston Globe

Boston Globe reporter Peter Keough spotlights artist JR’s new documentary “Paper and Glue,” which will be screened at the MIT List Visual Arts Center on Jan. 20. “JR takes on trouble spots around the globe, where he involves oppressed communities in creating the blown-up, immersive photo installations that are his oeuvre and which make a strong case that art can” change the world, writes Keough.

The Boston Globe

The MIT List Visual Arts Center has reopened with three new exhibitions, reports Riana Buchman for The Boston Globe. The new installation includes “Andrew Norman Wilson’s two video pieces ‘Impersonator’ (2021) and ‘Kodak’ (2019); Sreshta Rit Premnath’s sculpture show ‘Grave/Grove’; and, in this era of stops and starts as we lurch from lockdown to reopening, the serendipitously named ‘Begin Again, Again,’ by the pioneering video artist Leslie Thornton.”

Fast Company

Fast Company reporter Mark Wilson spotlights Strolling Cities, a new AI video project developed by researchers from the MIT-IBM Watson AI Lab, which recreates the streets of Italy based on millions of photos and words. “I decided that the beauty and sentiment, the social, historical, and psychological contents of my memories of Italy could become an artistic project, probably a form of emotional consolation,” says Mauro Martino of the MIT-IBM Watson AI Lab. “Something beautiful always comes out of nostalgia.”

VICE

A study by researchers from MIT, Yale and Purdue finds that leaving the camera on during video meetings is a contributor to carbon dioxide emissions, reports Hannah Smothers for Vice. “Just one hour of videoconferencing or streaming, for example, emits 150-1,000 grams of carbon dioxide (a gallon of gasoline burned from a car emits about 8,887 grams), requires 2-12 liters of water and demands a land area adding up to about the size of an iPad Mini,” the researchers write.

HuffPost

A new study co-authored by MIT researchers examines the carbon emissions associated with video conferencing, reports Rachel Moss for HuffPost. The researchers found “just one hour of video conferencing or streaming emits 150-1,000 grams of carbon dioxide.”

Fast Company

Fast Company reporter Steven Melendez spotlights Minglr, an open-source tool that is aimed at connecting attendees at virtual conferences for brief video conversations. “If you’re in a group like a conference, one thing people might do is contact people they know,” says Prof. Thomas Malone. “Other people could contact people they have heard of but not met.”

Fortune

A team of MIT researchers tested several different techniques for action labeling in videos and found that “older, simple, two-dimensional convolutional neural networks—a type of neural network architecture that has been around for quite a while now—worked better than much more complex models that try to analyze the videos in three dimensions,” reports Jeremy Kahn for Fortune.

Forbes

CSAIL researchers have developed a technique that makes it possible to create 3-D motion sculptures from 2-D video, reports Jennifer Kite-Powell for Forbes. The new technique could “open up the possibility to study social disorders, interpersonal interactions and team dynamics,” Kite-Powell explains.

BBC News

BBC Click reports on a system developed by CSAIL researchers that creates 3-D motion sculptures based on 2-D video. The technique, say the researchers, “could help dancers and athletes learn more about how they move.”

Forbes

CSAIL researchers have developed an artificial intelligence system that can reduce video buffering, writes Kevin Murnane for Forbes. The system “adapts on the fly to current network and buffer conditions,” enabling smoother streaming than other methods.

HuffPost

Oscar Williams writes for The Huffington Post about a new prototype for a glasses-free, 3-D movie screen developed by CSAIL researchers. The prototype “harnesses a blend of lenses and mirrors to enable viewers to watch the film from any seat in the house.”

CBS News

In this CBS News article, Michelle Star writes that CSAIL researchers have developed a method that allows moviegoers to see 3-D movies without wearing glasses. Star notes that the prototype “has been demonstrated in an auditorium, where all viewers saw 3-D images of a consistently high resolution.”

CNN Money

By projecting images through multiple lenses and mirrors, CSAIL researchers have developed a new prototype movie screen that allows viewers to see 3-D images without glasses, reports Aaron Smith for CNN Money. 

Popular Science

MIT researchers have developed a prototype for a cinema-sized 3-D movie screen that would allow users to watch 3-D movies without glasses, reports Mary Beth Griggs for Popular Science. As people generally sit in fixed seats in a cinema, the researchers developed a prototype that “can tailor a set of images for each individual seat in the theater.”