
Free From Limitations: The Validation of Machine Hallucinations at MoMA

Christian Burke · Towards Data Science

Since 1929, the Museum of Modern Art (MoMA) in New York City has served as an art lover’s mecca. It’s a lighthouse that shines a light on avant-garde paintings and sculptures, and since the definition of “modern art” is continually in flux, its collections are, too. Now, this distinguished institution is validating digital art.

As the Lead Data Scientist for Refik Anadol Studio (RAS), working in collaboration with Refik Anadol, I’m thrilled to see our work, “Unsupervised,” accepted into MoMA.

At RAS, we bring data aesthetics to the greater public, showing that the potential of AI extends beyond text generation. We live to see the human impact of our art: how it affects people of all ages and backgrounds on an emotional level. It’s a shared human experience, and a highly accessible one.

AI-generated art is, of course, not without controversy. One of the most widespread misconceptions is that digital art in general, and AI-generated art in particular, is not legitimate artwork. Yet even AI-generated art isn’t entirely created by machines. It requires a human touch. As the visionary behind “Unsupervised,” Anadol creates art from raw data. This is new in digital art. Artists who came before him used data to follow a template, producing a facsimile of something that had already been created. Refik’s work is something entirely different.

At RAS, I head a team of seven data scientists. My days are filled with supervising, reviewing, and writing code, along with connecting directly with clients and project planning. It might not seem too artistic, but to date, I’ve collected more than three billion images to use as fuel in the AI-generated art fire. Given that my days are filled with the small details of coding and datasets, taking a step back to look at the entirety of what RAS has created is a breathtaking experience.

Let me walk you through what it’s like to experience “Unsupervised.” Picture this: you’ve walked into the lobby of MoMA. It will initially seem as if you’re walking into any other art museum. But if you take a look around, you’re suddenly struck by the sight of a gigantic screen (24’ by 24’) surrounded by people sitting and standing, all gazing at the exhibit.

The exhibit itself constantly moves. It is continually shifting, displaying mesmerizing colors and shapes. What you see depends on which chapter of the exhibit you stumble upon when you enter MoMA, as well as on real-time audio, motion-tracking, and weather data from the lobby.

“Unsupervised” seeks to answer the question, “If a machine were to experience MoMA’s collection for itself, what would it dream or hallucinate about?” By combining data from all of MoMA’s collections and extrapolating from it to form these machine dreams, “Unsupervised” takes viewers through the history of art itself and projects a spotlight onto the potential future of art.

Art sometimes strives to speak to broader societal issues. If you’re looking for one general takeaway from “Unsupervised,” it’s that the exhibit marks a turning point in the legitimization of AI-generated digital art. MoMA is to the art world what nuclear fusion is to physicists: a sort of Holy Grail. The fact that MoMA chose to display this exploration of how computers process data, how they “think,” create, and hallucinate, serves as validation for Anadol and other digital artists.

But not everyone who visits “Unsupervised” is necessarily thinking about machines and their dreams.
When you walk into the lobby of MoMA, you’ll see the diverse spectrum of humanity, from little children running around to older people and visitors from all walks of life, enjoying this intense communal experience. It’s as exciting for me to watch people watching the exhibit as it is to look at “Unsupervised” itself. I’ve seen people cry. I’ve seen expressions of joy and love. I’m no artist myself, but I believe it has healing qualities. I also believe there is art in anything people do, anywhere, if they pay close enough attention to doing it well. There can even be art in writing code.

Human artists need technical skills to produce art. They need to understand things like tonal value rendition, perspective, symmetry, and even human anatomy. “Unsupervised” takes the technical aspects of art one giant leap forward by creating a partnership between humans and AI.

RAS created “Unsupervised” with data from more than 180,000 works of art at MoMA. Works by Warhol, Picasso, and Boccioni, and even images of Pac-Man, were all fed into the software. We then created various AI models and tested them extensively. After choosing the best one, we trained it to create not just a synthesis of all the artwork fed into it, but something different.

“Unsupervised” isn’t just the sum of its parts; it’s something entirely new. Everything the exhibit creates is original, thanks to our artistic processing.

The partnership between humans and machines required innovations in both hardware and software. Our team faced a number of challenges in creating the neural network and in enabling the exhibit to continually morph its images in real time, responding to unique environmental factors.

One of the challenges was resolution. If you were to type a prompt into Stable Diffusion, you’d typically get an image of 512 by 512 pixels out of the box (see the short example below). The AI foundation we used, Nvidia’s StyleGAN, usually serves up a resolution of 1024 by 1024. The resolution of “Unsupervised” is 3840 by 3960, which may be the highest resolution for a neural network that synthesizes images. When you walk into MoMA’s lobby and see “Unsupervised,” you’ll understand why high resolution was important. It brings the art to life, making it seem almost like a living entity that could jump off the screen.

The real-time aspect was another significant challenge to overcome. “Unsupervised” produces its machine hallucinations and dreams with a liquid fluidity. These machine hallucinations are born from synthesizing more than 180,000 pieces of art, and they take real-time factors into account.

A building not far from MoMA has a weather station that collects weather-related data. We feed that data into “Unsupervised,” meaning that whether it’s cloudy, sunny, rainy, or foggy at any given time, the machine incorporates the ambiance of the world outside into its indoor display.

The exhibit also incorporates real-time data from the viewers themselves. A camera in the ceiling of the lobby feeds data into the machine about the number of visitors and their motions. The machine then considers that data as it displays its artistic dreams (a simplified sketch of this kind of conditioning follows below).

There’s an age-old question: does life imitate art more than art imitates life? For “Unsupervised,” the answer is clearly both. Even as viewers of the exhibit are emotionally moved by the display, they themselves influence how “Unsupervised” appears. Similarly, the partnership between AI and humans is a two-way street.
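As a concrete illustration of the 512-by-512 default mentioned above, here is a minimal example using the publicly available Stable Diffusion v1.5 checkpoint via Hugging Face’s diffusers library. It is unrelated to the “Unsupervised” pipeline; it only shows the typical out-of-the-box resolution of a text-to-image diffusion model, and it assumes a CUDA-capable GPU.

```python
# Minimal Stable Diffusion example (assumes the diffusers library and a GPU).
# The prompt and filename are illustrative; this is not the studio's pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("an abstract swirl of color in the style of a data painting").images[0]
print(image.size)   # (512, 512) -- the model's default output resolution
image.save("sample.png")
```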
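To make the real-time conditioning idea concrete, here is a simplified, hypothetical sketch of how live signals such as weather readings and visitor motion could nudge the latent walk that drives a GAN-style generator. The function names, signal ranges, and weights are all illustrative assumptions; this is not the method used in “Unsupervised.”

```python
import numpy as np

# Hypothetical sketch only: one generic way to let live signals (weather,
# visitor motion) nudge the latent walk driving a GAN-style image generator.
LATENT_DIM = 512                                   # typical StyleGAN latent size
_rng = np.random.default_rng(42)

# Fixed random directions, so each signal always pushes the walk along the
# same axis in latent space.
SIGNAL_DIRECTIONS = _rng.standard_normal((4, LATENT_DIM))
SIGNAL_DIRECTIONS /= np.linalg.norm(SIGNAL_DIRECTIONS, axis=1, keepdims=True)


def environment_to_offset(cloud_cover, rain, visitor_count, motion_energy):
    """Blend normalized live readings into a small latent-space offset."""
    signals = np.clip([cloud_cover, rain, visitor_count / 200.0, motion_energy], 0.0, 1.0)
    return 0.3 * signals @ SIGNAL_DIRECTIONS       # small, bounded nudge


def latent_walk(steps, segment=120):
    """Yield a smooth sequence of latent vectors, one per rendered frame."""
    z, target = _rng.standard_normal(LATENT_DIM), _rng.standard_normal(LATENT_DIM)
    for t in range(steps):
        if t and t % segment == 0:                 # pick a new destination keyframe
            z, target = target, _rng.standard_normal(LATENT_DIM)
        alpha = (t % segment) / segment            # interpolate between keyframes
        # Placeholder readings; a live installation would pull these from a
        # weather feed and a ceiling camera instead of hard-coded values.
        offset = environment_to_offset(cloud_cover=0.6, rain=0.0,
                                       visitor_count=80, motion_energy=0.4)
        yield (1 - alpha) * z + alpha * target + offset   # input to a generator


for latent in latent_walk(steps=3):
    print(latent.shape)                            # (512,) per frame
```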
An argument could be made that digital art simply adds a few extra technical skills to the traditional artistic process. However, I like to think of it as give-and-take. Digital art does indeed add technical tools to artistic processes, such as diffusion models and prompt engineering. On the other hand, the AI itself eliminates some of the barriers to entry into the artistic world. Let’s say that I like to draw, but I’m terrible at drawing people. AI allows me to bridge the gap by addressing my technical limitations.

“Unsupervised” has extended its stay at MoMA multiple times due to popular demand, and the machine hallucinations could quite conceivably go on indefinitely. Looking forward, I’d love to see even greater legitimization of AI-generated digital art. The models will continue to improve, and hopefully, the technology will become more accessible for everyone to use.

AI could be a means of democratizing the art world by enhancing accessibility, but right now, there’s still a technical barrier. I’d like to see AI tools offered through simpler, more intuitive interfaces, which could lower that barrier. One of the new projects we’re working on at RAS is a set of web-integrated tools that would allow people to use and interact with AI more easily. That is our primary goal at RAS: to create the means for greater interaction with AI.

Since “Unsupervised” required a significant human touch to create, I’m sometimes asked whether AI will always require that human touch. At least for the time being, the answer is definitely yes. AI is great at many things, like synthesizing, but it lacks competency in large-scale engineering and innovation.

AI-generated art may look creative, but AI itself is not creative. It is, in fact, the opposite of creative. If we want to keep moving forward and making progress in AI and tech in general, we’ll need to rely on ourselves, not machines.

Writer’s note: MoMA gave Refik Anadol Studio (RAS) permission to use its data for training.

Christian Burke heads up the data science teams at Refik Anadol Studio, which include AI, Machine Learning, Web, and Web3 development. You can follow Christian on Twitter and LinkedIn.


