The creative future of generative AI

An MIT panel charts how artificial intelligence will impact art and design.
Photo caption: Left to right: Panelist Ziv Epstein SM ’19, PhD ’23, multimedia artist and social science researcher; panelist Alex Reben MAS ’10, artist and roboticist; moderator Onur Yüce Gün SM ’06, PhD ’16, director of computational design at New Balance; and panelist Ana Miljački, MIT professor of architecture and director of SMArchS Programs and the SMArchS AD Program. (Photo: H. Erickson/Arts at MIT)

Image caption: The Pilgrimage - Pionorsko Hodočašće. This grid of stills is from six separate video interpolations, produced with a StyleGAN3 artificial intelligence system trained on the archival and individual photo documentation of six Yugoslavian memorial monuments. The output is from a post-production experiment with color channels. (Image courtesy of Ana Miljački and the artists, using the StyleGAN3 AI image generator.)

Few technologies have shown as much potential to shape our future as artificial intelligence. Specialists in fields ranging from medicine to microfinance to the military are evaluating AI tools, exploring how these might transform their work and worlds. For creative professionals, AI poses a unique set of challenges and opportunities — particularly generative AI, the use of algorithms to transform vast amounts of data into new content.

The future of generative AI and its impact on art and design was the subject of a sold-out panel discussion on Oct. 26 at the MIT Bartos Theater. It was part of the annual meeting for the Council for the Arts at MIT (CAMIT), a group of alumni and other supporters of the arts at MIT, and was co-presented by the MIT Center for Art, Science, and Technology (CAST), a cross-school initiative for artist residencies and cross-disciplinary projects.

Introduced by Andrea Volpe, director of CAMIT, and moderated by Onur Yüce Gün SM ’06, PhD ’16, the panel featured multimedia artist and social science researcher Ziv Epstein SM ’19, PhD ’23; Ana Miljački, MIT professor of architecture and director of the SMArchS and SMArchS AD programs; and artist and roboticist Alex Reben MAS ’10.

Video: Panel Discussion: How Is Generative AI Transforming Art and Design? (Arts at MIT; thumbnail image created using the Google DeepMind AI image generator.)

The discussion centered on three themes: emergence, embodiment, and expectations.

Emergence

Moderator Onur Yüce Gün: In much of your work, what emerges is usually a question — an ambiguity — and that ambiguity is inherent in the creative process in art and design. Does generative AI help you reach those ambiguities?

Ana Miljački: In the summer of 2022, the Memorial Cemetery in Mostar [in Bosnia and Herzegovina] was destroyed. It was a post-World War II Yugoslav memorial, and we wanted to figure out a way to uphold the values the memorial had stood for. We compiled video material from six different monuments and, with AI, created a nonlinear documentary, a triptych playing on three video screens, accompanied by a soundscape. With this project we fabricated a synthetic memory, a way to seed those memories and values into the minds of people who never lived those memories or values. This is the type of ambiguity that would be problematic in science, and one that is fascinating for artists and designers and architects. It is also a bit scary.

Ziv Epstein: There is some debate whether generative AI is a tool or an agent. But even if we call it a tool, we need to remember that tools are not neutral. Think about photography. When photography emerged, a lot of painters were worried that it meant the end of art. But it turned out that photography freed up painters to do other things. Generative AI is, of course, a different type of tool because it draws on a huge quantity of other people’s work. There is already artistic and creative agency embedded in these systems. There are already ambiguities in how these existing works will be represented, and which cycles and ambiguities we will perpetuate.

Alex Reben: I’m often asked whether these systems are actually creative, in the way that we are creative. In my own experience, I’ve often been surprised by the outputs I create using AI. I find I can steer things in a direction that parallels what I might have done on my own, yet is different enough: amplified, altered, or changed. So there are ambiguities. But we need to remember that the term AI is also ambiguous. It’s actually many different things.

Embodiment

Moderator: Most of us use computers on a daily basis, but we experience the world through our senses, through our bodies. Art and design create tangible experiences. We hear them, see them, touch them. Have we attained the same sensory interaction with AI systems? 

Miljački: So long as we are working in images, we are working in two dimensions. But for me, at least in the project we did around the Mostar memorial, we were able to produce affect on a variety of levels, levels that together produce something that is greater than a two-dimensional image moving in time. Through images and a soundscape we created a spatial experience in time, a rich sensory experience that goes beyond the two dimensions of the screen.

Reben: I guess embodiment for me means being able to interface and interact with the world and modify it. In one of my projects, we used AI to generate a “Dalí-like” image, and then turned it into a three-dimensional object, first with 3D printing and then by casting it in bronze at a foundry. There was even a patina artist to finish the surface. I cite this example to show just how many humans were involved in the creation of this artwork at the end of the day. There were human fingerprints at every step.

Epstein: The question is, how do we embed meaningful human control into these systems, so they could be more like, for example, a violin. A violin player has all sorts of causal inputs — physical gestures they can use to transform their artistic intention into outputs, into notes and sounds. Right now we’re far from that with generative AI. Our interaction is basically typing a bit of text and getting something back. We’re basically yelling at a black box.

Expectations

Moderator: These new technologies are spreading so rapidly, almost like an explosion. And there are enormous expectations around what they are going to do. Instead of stepping on the gas here, I’d like to test the brakes and ask what these technologies are not going to do. Are there promises they won’t be able to fulfill?

Miljački: I am hoping that we don’t go to “Westworld.” I understand we do need AI to solve complex computational problems. But I hope it won’t be used to replace thinking. Because as a tool AI is actually nostalgic. It can only work with what already exists and then produce probable outcomes. And that means it reproduces all the biases and gaps in the archive it has been fed. In architecture, for example, that archive is made up of works by white male European architects. We have to figure out how not to perpetuate that type of bias, but to question it.

Epstein: In a way, using AI now is like putting on a jetpack and a blindfold. You’re going really fast, but you don’t really know where you’re going. Now that this technology seems to be capable of doing human-like things, I think it’s an awesome opportunity for us to think about what it means to be human. My hope is that generative AI can be a kind of ontological wrecking ball, that it can shake things up in a very interesting way.

Reben: I know from history that it’s pretty hard to predict the future of technology. So trying to predict the negative — what might not happen — with this new technology is also close to impossible. If you look back at what we thought we would have now, at the predictions that were made, it’s quite different from what we actually have. I don’t think that anyone today can say for certain what AI won’t be able to do one day. Just like we can’t say what science will be able to do, or humans. The best we can do, for now, is attempt to drive these technologies towards the future in a way that will be beneficial.
