[Image: Screenshot of Tilt West’s virtual roundtable on Art & AI]

AI for Good: Why Artists Are Key to Improving Machine Learning Technologies


--

By Sofian Audry, in response to Tilt West’s roundtable on Art & AI

A few years ago, one of my collaborators and I were approached by an AI incubator interested in working with artists. As artists already using machine learning, we thought it might be quite thrilling to work with experts in the field in a business context. So, on a sunny spring afternoon, we biked from the Plateau Mont-Royal to the company’s head office in downtown Montréal. After we signed the customary NDAs, the employee who welcomed us into the pristine lobby explained earnestly that they were in charge of the “AI for Good” branch of the company. (It turned out they were the only employee in that branch.) Throughout our meeting, I couldn’t help thinking about the employee’s statement. Two questions emerged in my mind: (1) if this company has a branch called “AI for Good,” then what does the rest of the company do? And (2) why is art associated with the idea of the “good”?

While these questions first came to me tongue in cheek, they kept returning to my mind like a haunting, running gag. Over time, I came to realize their seriousness, for I believe they both capture the paradoxical nature of current-day AI development and point to the subversive potential of art for the future of AI.

The first question reveals the absurdity of the AI tech business world. If tech companies are trying, as they claim, to make the world a better place, then why do they need a special, underfunded branch to do “good”? These companies, of course, generate private goods for their stakeholders. The elephant in the room is that “AI for good” actually promotes the cultural acceptance of AI; that is, it deftly conflates cultural acceptability with the idea of the common good. To me, it feels like a “whitewashing” strategy. Yet I recently learned that there is a movement within scientific AI circles for “AI for social good,” led by researchers quite aware of the power dynamics at play who are nevertheless endeavoring to push for nobler applications of AI.

The second question raises a philanthropic point. Presumably, artists have historically contributed to the common good in ways that, in many countries, justify public support through government funding. Yet I would argue that there is a deeper and much richer sense in which we can think about the contribution of artists to “AI for good.” The dominant, techno-positive discourse around AI and machine learning is teleological in nature, targeting efficiency and precision through optimization processes. The idea of the common good, by contrast, is almost by definition fuzzy and thus difficult to optimize, and it lies closer in spirit to how machine learning artists work with AI technologies: they engage these technologies not so much for their precision as for their potential to reveal new forms of understanding.

Long before the large-scale industrialization of machine learning, artists were using these technologies to explore new forms of art, music, and literature. A case in point is the work of Nicolas Baginsky, who in the early 1990s used unsupervised neural networks for the real-time control of robotic instruments. Baginsky’s robotic jazz improv band, The Three Sirens, which was developed over several decades and played all over Europe, embodies this spirit. In designing these robots, Baginsky took a drastically different approach from that of the computer scientists who were training neural nets on musical scores in order to generate new scores of the same genre, an approach he rejected because it could only create more of the same. Instead, he was interested in his robots’ potential for understanding what music was, and in what artificially generated music, created live by out-of-control systems, could teach him about music.

More recently, artist Stephanie Dinkins has been driven by an interest in how human communities transfer knowledge through traditions of storytelling. Her installation Not the Only One (2018) consists of a seashell-shaped sculpture featuring the figures of three of her family members. The piece invites the audience to pose questions, to which it responds in the form of generative storytelling. The responses are not predefined. Rather, they are generated live by a deep learning system pretrained on hours of interviews the artist conducted with her grandmother, aunt, and niece, spanning three generations of her family. The system was also trained on texts from other sources, such as material connected to places where her family members had stayed, books they had read, and writings dealing with blackness and black thought.

Like Baginsky, Dinkins did not have a specific goal in mind when she started her project, at least not one that could be directly optimized. Rather, she was driven by an interest in developing a community-based AI entity built by people of color. She initially thought that the piece would constitute an interactive archive of her family history, but instead found herself engaged in an experimental process with a piece that responded in uncanny, enigmatic, often humorous ways. She found this creation, open as it was to interpretation, more interesting than a straightforward archive would have been.

These artists propose a remedy to the problematic definition of AI systems as mere tools existing outside of natural and social systems. They embrace both the out-of-control qualities of machine learning and the human-machine relationships those qualities allow to emerge. This “Humility over Control” approach falls directly in line with Joichi Ito’s manifesto against reductionism within AI, in which Ito argues that the future of AI lies not so much in its ability to solve problems or optimize stuff, but rather in “developing a sensibility appropriate to the environment and the time” through engaged forms of participatory design.

The resolute engagement of machine learning artists with algorithmic/material processes is where ethics meets aesthetics, and it is what makes artistic practice so critically important for the development of AI technologies. As AI technologies increasingly pervade human cultures, art offers a unique and essential platform from which to fight the optimization of human behavior for private profit. By nimbly subverting industrial hegemony in this way, artists reveal both the inadequacies of AI and its untapped potential, imagining, through the materiality of AI itself, what “AI for good” could be.

Sofian Audry is an artist, scholar, Professor of Interactive Media in the School of Media at the University of Quebec in Montreal (UQAM), and Co-Director of the Hexagram Network for Research-Creation in Art, Culture and Technology. Their work explores the behavior of hybrid agents at the frontier of art, artificial intelligence, and artificial life through artworks and writings. Audry’s book Art in the Age of Machine Learning (MIT Press, 2021) examines machine learning art and its practice in art and music.

--


TILT WEST

Tilt West is a nonprofit organization based in Denver. Our mission is to promote critical discourse focused on arts and culture for our region and beyond.