Machines, just like humans, interpret what they see, hear or otherwise perceive through the filter of prior knowledge. This idea runs through the exhibition The Question of Intelligence: AI and The Future of Humanity, which was scheduled to run from the 7th of February to the 8th of April at the Sheila C. Johnson Design Center at Parsons/The New School in New York. The show engaged with the relationships that humans sustain with machines, and vice versa, and offered insights into artificial intelligence’s biases and its potential to unlock new opportunities. It closed early due to COVID-19 confinement measures but would have ended this week; we take a look back at its content to extend the experience online.

The variety of media strikes us as we enter. Pictures on the walls, sculptures and interactive installations inhabit the open plan. Curator Christiane Paul first takes viewers on a journey to discover artificial intelligence from a sensory perspective. We start with image recognition, the act of seeing, through two works from Memo Akten‘s ongoing series Learning to See (2019). The artist trained five AI algorithms on different data sets so that each artificial neural network would interpret images from a unique angle. Wipes, cables and other household items turn into waves, flowers and flames. The result is poetic.

We soon recognise how image recognition algorithms carry socio-political discrepancies rooted in flaws in the collection and organisation of data. In Us, Aggregated 2.0 (2018), Mimi Onuoha built a patchwork of photographs combining a picture of her mother at home with other images sourced through Google’s reverse image search engine. The collection conveys a sense of community while presenting unrelated individuals. Similarly, Lior Zalmanson offers a quip on image categorisation. For his series Image May Contain (2019), he brought together Facebook images that received the same classification as emblematic historical photographs. The outcome is striking. Martin Luther King Jr.’s Dream speech is tagged as “1 person, standing, sky, crowd, outdoor,” sharing the same characteristics as a gay pride march and Donald Trump speaking at a rally, while John F. Kennedy’s parade, minutes before his assassination, stands for “10 people, car.” Combining historical and everyday images with their corresponding tags into lenticular prints, Zalmanson highlights how our collective memory is suddenly reduced to indifferent keywords, just like any other mundane picture. No more, no less.

The exhibition moves on to consider AI’s relation to speech. We discover how artists train algorithms on texts and oral statements so that systems build a personality, moods, memories and narratives. Lynn Hershman Leeson shaped her memorable Agent Ruby (1998-2002) to resemble Tilda Swinton’s character in the 2002 movie Teknolust. The archives show how the chatbot discussed topics with online users over the years and refined its unique traits to become the persona we interact with today. Likewise, Stephanie Dinkins fed her AI storyteller Not the Only One (2018) with the thoughts of three women from one African-American family. The interactive sculpture conveys the mindset of an underrepresented group while answering biographical questions offline.

At the heart of the exhibition space, David Rokeby‘s pioneering work The Giver of Names (1990) synthesises image recognition and language processing. After we place a selection of objects of different shapes and colours on the installation’s pedestal, the system works to identify them based on previous learning. Through associations of words, ideas and concepts, it builds connections among the objects and creates a narrative to link them together. A screen displays the AI’s reasoning so that viewers can observe both the algorithm’s logic and its creativity. The piece acts as a transition point, introducing the second section of the show, dedicated to creativity and labour. Here, we stop looking at humans’ impact on AI and its sentience, and focus instead on the knowledge that this technology brings to people.

Creativity is a field where AI can be an excellent companion. Artists can, for instance, use voice input to have an AI draw elements in the form of a mind map, as in Baoyang Chen et al.‘s AI Mappa Mundi (2018-2019). This interactive map generator shows imagination in building absurd compositions akin to those of the Dadaists. Harold Cohen likewise reveals the creative dynamics between human and machine in his collaboration with his artmaking assistant AARON (1973-). The pioneering AI software accompanied the artist throughout his career and evolved to refine its production means, drawing and painting techniques, and use of colours. In the show, AARON produces a new artwork every 10-15 minutes. Next to it, Mary Flanagan‘s [Grace:AI] (2019) consists of portraits of Frankenstein’s creature, from Mary Shelley’s 1818 novel. The artist produced the series using Generative Adversarial Networks, or GANs, a technique that has gained substantial visibility of late. She trained her machine on a dataset of paintings by female artists; this vision informs the AI’s perspective, counteracting the male-dominated fictions surrounding the creation of gruesome humanoids.

We now head to the other central installation of the show, Tega Brain‘s Deep Swamp (2018), a US premiere. The work consists of three terrariums of wetland plants governed by artificial agents. The three computational systems compete in managing parameters such as light, water flow, fog and nutrients, but, again, these AIs also have individual traits: Nicholas wants to be original and attract attention, Hans builds a natural-looking wetland, and Harrison produces a work of art. As a result, the swampy territories look somewhat offbeat, which prompts the question of what purpose should guide processes of optimisation, notably in the context of climate change. Likewise, Ken Goldberg et al.’s AlphaGarden (2020-) assigns artificial intelligence to improve ecological response. A deep learning system, trained on simulation and human demonstrations, advises a robot on how to sustain a garden at the University of California, Berkeley. We follow its progress in real time on a screen and explore the pitfalls and favourable circumstances that the machine encounters while developing its polyculture.

The last segment considers the impact of artificial intelligence on our digital economy. In Truckers (2020), Brett Wallace offers a vision of the working conditions of truckers now that AI is built into their cabs. His other piece on view, Future of Works (2018), presents a collage of data on professions that have changed drastically because of new technology. We finish the show with a highlight, LarbitsSisters‘ BitSoil Popup Tax & Hack Campaign (2018), which won the Golden Nica for Interactive Art at the 2018 Prix Ars Electronica. The system, built on IBM Watson’s Natural Language Classifier, presents an alternative taxation model that pays Twitter users back for the data they produce. Ultimately, the exhibition delivers a well-rounded view of the human-machine relationship today. We reconcile machine learning with notions of sentience and individuality, and leave the show struck by the growing interdependence of AI and people. Of course, such exciting content calls for more questions, which we were glad to put to curator Christiane Paul.

Marie Chatel: Could you tell us more about how you arranged the works? Did you face any particular challenges when organising the show?

Christiane Paul: The works in the show are arranged in groups and sequences that reflect on the automation of our senses and the effect it has on creativity and labour. Walking around the gallery, you encounter works reflecting on the automation of vision, speech, and knowledge — which all happen to be major areas in which AI is developed by corporations — and projects that create artistic AIs. The use of creativity and AI is then linked to pieces that try to control and optimise natural environments, as well as artworks that explore how AI has affected work environments or how it could be used for alternative models for a digital economy. I wanted to create an environment in which the audience encounters algorithms that talk to them and each other. As you entered the exhibition space, you would hear several works quietly speaking, going through iterations and trying to make sense of what they perceived.

One of the aspects I like about curating digital art is that it always poses interesting challenges due to its technological interfaces, interactivity, reliance on networked platforms, and real-time, participatory nature, among others. Like many other digital shows, The Question of Intelligence had to be installed in ways that encouraged interaction with artworks — be it through physical placement or instructions. The most challenging pieces proved to be Tega Brain’s Deep Swamp, which required taking care of three “wetlands” by regularly adding water for the plants with a hose that was hidden behind the gallery wall and threaded through the reveal as needed; and LarbitsSisters’ Bitsoil Popup Tax & Hack Campaign, which made its US debut and required electrical rewiring from European to US voltage. The water for the misters embedded in the server racks was another challenge, since it needed to be filtered so that the fairly hard New York City water would not leave harmful calcium residue on the inner workings of the installation.

Marie Chatel: The exhibition asks about the intelligence of artificial networks versus our own, notably in terms of sentience. In what ways should we expect human and machine intelligence to be complementary?

Christiane Paul: It’s a great and very challenging question, one that also underlies the exhibition itself. I don’t think there is an easy answer, and the first step in attempting one is to figure out how human and machine intelligence differ, which is one aspect the show highlights. Machines are “intelligent” in that they can process and detect patterns in data sets much faster than humans ever could. However, that processing depends on us humans creating data sets that are ethically sound, diverse, and inclusive, which is a continuous struggle. A lot of bias in machine learning arises from data inequalities. As some of the works in The Question of Intelligence show, AIs are notoriously bad at understanding historical contexts and complex cultural nuance, which they need to learn from humans. I’m not sure it will ever be possible to achieve the elusive goal of an artificial general intelligence that can perform any intellectual task a human can handle. Humans are able to negotiate competing perspectives on incredibly complex tasks and seamlessly switch between them while evolving thoughts. AIs currently can’t, and the big question is: why should they even be asked to? Human and machine intelligences complementing each other seems to be a more fruitful approach.

Marie Chatel: The show gives a lot of importance to the misunderstandings that algorithms perpetuate, which is excellent for educating the public. There’s also a strong history of artists discussing ideas with tech companies. Are some of the artists in the show collaborating with research labs to implement more diversity and equity? Could you elaborate on the influence artists can have, or already have, on the practical use of AI in tech environments?

Christiane Paul: Over the decades, tech companies have increasingly opened up to artists or even actively commissioned artwork since they realised that art can engage in experimental explorations of technologies not only from aesthetic, philosophical, and humanist but also ethical and political perspectives, providing a critical reality check or opening new doors for further development. Lynn Hershman, for example, has worked with research labs and tech companies on many of her pieces, such as The Infinity Engine, for which she created a functional replica of a genetics lab in collaboration with well-known scientists. Stephanie Dinkins has been very engaged in discussions around racial diversity and equity in AI and has created AI.Assembly, which started as meetings at NEW INC, the art and tech incubator at the New Museum in NYC, in 2017 and has continued at other institutions and organisations, among them the Data & Society Research Institute. Lior Zalmanson, whose AI projects have specifically focused on algorithms developed for people with disabilities, has also been working with the Data & Society Research Institute and has had conversations with Google about their captioning software.

Marie Chatel: You featured several works of pioneers in the use of artificial intelligence. Could you tell us more about precursors such as Harold Cohen, David Rokeby, and Lynn Hershman Leeson, and their role in building thoughts around the use of AI? How would you say artists now build on this legacy?

Christiane Paul: It was important to me to include these pioneering works since AI art has only recently gained more mainstream attention and many people believe that artistic exploration in this area is a very recent trend, so the early works are often forgotten. Harold Cohen’s AARON, which he started creating in the late 1960s, was the first AI painting or drawing system and, while the underlying code may not be that sophisticated from a programming perspective, the project laid the groundwork for all the current experimentation with AIs as artists. Interestingly, Harold was more invested in the potential for collaboration between human and machine. David Rokeby’s The Giver of Names, in my opinion, has aged incredibly well and still is as intriguing as it was 30 years ago. While David might decide on a slightly different and slicker set-up for the work if he were to recreate it today, it remains unique in presenting the transition process from real object to imaged object to associated ideas and metaphors, reflecting on perception and language. Given how used we are to chatbots such as Siri and Alexa, Lynn Hershman’s Agent Ruby, conceived in the late 1990s, may not surprise us today as it would have twenty years ago, but Lynn contributed to taking chatbots to the next level. Ruby is still interesting as a conversational bot that is more attuned to learning about culture than responding to your requests to play certain music, lower the light, or tell you how long to cook your dish.

Marie Chatel: The show also looks at the current use of artificial intelligence for creative purposes. When it comes to artworks produced with generative adversarial networks, visual styles might appear homogenous to some of us. What expectations do you place in artificial intelligence as a tool?

Christiane Paul: The use of generative adversarial networks (GANs) in creating artwork has recently emerged as a trend, which even led to the coining of the term GANism. A GAN uses generative algorithms trained on a specific data set to produce new, original images with the same characteristics as the training set. Those images are then evaluated by discriminative algorithms that, based on their own training, judge whether the newly produced data looks authentic. While I wouldn’t say that the visual style of GANism is very homogenous — although there is a certain blurry aesthetic to the way an AI learns to paint or draw — the conceptual framework for that type of art certainly has been fairly standardised: use a training data set to make an AI that paints like a Renaissance artist or abstract expressionist or you name it. I think this approach fairly quickly exhausts itself, and the question is why we should be interested in AI replicating well-established art forms or creating work that caters to a common denominator in what audiences perceive as aesthetically pleasing. On that level, I would have low expectations for AI as a tool. I have much higher expectations when it comes to the use of AI for collaborating with a human artist, exploring perspectives a human could not have, or reflecting on AI’s inherent capacities. Mary Flanagan’s [Grace:AI], for example, uses a GAN trained only on works by female painters, which is a perspective that no human, and only an algorithm exposed to a specific slice of art history, could have. AI Mappa Mundi, created by a Chinese collective, specifically explores how one could code an AI with “imagination,” in this case by infusing a dose of Dadaism in the code. I think AI can be an extremely interesting tool for reflecting on its own state of existence and thereby on what it means to be an algorithmic rather than human being.
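The adversarial dynamic Paul describes, a generator producing candidates and a discriminator judging their authenticity, can be made concrete with a toy numerical sketch. The following Python snippet is purely illustrative and assumes only NumPy; the 1-D Gaussian target, the linear generator and the logistic discriminator are our own simplifications, unrelated to any work in the show. The two players take alternating gradient steps until the generator’s samples approximate the real distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(3, 1). Generator: x = a*z + b with noise z ~ N(0, 1).
# Discriminator: logistic regression D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters (illustrative initial values)
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.02, 64

for step in range(5000):
    # --- Discriminator step: ascend log D(real) + log(1 - D(fake)) ---
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # --- Generator step: ascend log D(fake) (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    upstream = (1 - sigmoid(w * fake + c)) * w   # d log D(fake) / d fake
    a += lr * np.mean(upstream * z)
    b += lr * np.mean(upstream)

samples = a * rng.normal(0.0, 1.0, 2000) + b
print(f"generated mean = {samples.mean():.2f} (real mean 3.0)")
```

With such linear players the game typically drives the generated mean towards the real one; full GANs replace both sides with deep networks but keep exactly this alternating update, which is what makes the “blurry aesthetic” and mode-seeking behaviour Paul mentions possible.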