Now more than ever, human beings live in symbiosis with technologies of their own creation. Today, as in the past, technologies are accurate mirrors of their societies: of human desires, fears and needs.
We spoke with Clément Lambelet, whose work addresses the relationship between society and technology, ideologies of control and Artificial Intelligence.
Clément Lambelet (Geneva, 1991) is a Swiss artist; he holds a Bachelor of Arts in Photography from ECAL, where he currently works as an assistant. He was selected for Foam Talent 2017 with Collateral Visions, an investigation into machine vision and how it shapes our perception of the world. His first book, “Two donkeys in a war zone”, was nominated for the Author Book Award at the Rencontres d’Arles 2018, and his work has been exhibited in major institutions such as C/O Berlin, the Frankfurter Kunstverein and the Musée de l’Elysée in Lausanne.
Bianca Cavuti: Let’s start with your latest work, Collateral Visions. Can you tell us a bit about it? Where did this interest come from?
Clément Lambelet: I started Collateral Visions in early 2016, after creating two projects around images of military drones: Find Fix Finish and Two donkeys in a war zone. The research I carried out for these series led me to realize that a new paradigm is emerging in the way we observe humanity. This new vision is first of all linked to the proliferation of artificial vision technologies that have developed over the past thirty years. But above all, a social and political change has taken place, with the rise of a need for population control. At the same time, we can see that the vast majority of people accept increasingly invasive surveillance systems in their daily lives. Even the vocabulary has changed: in France, “video surveillance” has become “video protection”.
This semiotic shift reveals the attempt to make these systems widely accepted. I needed to open up this approach to a vision of humankind, to explore these new territories produced by control tools, those we find in our daily lives. How do these machines perceive us? What do they reveal about our singularity? What does this say about the society that created them?
After opening this path, I realized that automated vision now takes so many different forms that it would be difficult, if not impossible, for me to create a single monolithic project. Collateral Visions is therefore a constellation of different pieces and series. From facial recognition, with portraits made with the Eigenface algorithm, we move on to technological faith with the diptych Adam and Eve. From the smiles and cries of Happiness is the only true emotion, we reach the witnesses of drone strikes in the series Two donkeys... I really like this principle because it allows me to add nuance as well as complexity to the subject.
Bianca Cavuti: I was impressed by one part of Collateral Visions, the video A Distant Encounter, which explores the value and ambiguity of the gaze in the context of new observation technologies, military and otherwise. The farmer’s final look is very powerful, and raises questions about the logistics of perception and the hierarchy of vision. What reflections does this work propose?
Clément Lambelet: The video A Distant Encounter is a more emotional response to machine vision and control ideologies. During the research that led to Collateral Visions, I felt anxiety, and a certain fear, about the deployment of these tools, which I believe has two sources: first, the fact that technologies once reserved for conflict are increasingly being used against civilians. We are not yet in a situation where a combat drone will strike civilians in Europe, but the potential is there.
The second source of anxiety is the short time it took for these control technologies to become widespread. I discovered with surprise, when I made the series of Eigenface portraits, that we can trace the invention of contemporary mass surveillance back to this algorithm. It was invented in 1991, only 27 years ago! For me, this directly raises a dual question: first of all, the issue of hindsight. Have we taken enough time to understand the impact of these tools? But above all, what does the future hold for us, given everything that has been developed so quickly?
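For readers curious about what the Eigenface method actually computes, here is a minimal sketch (not Lambelet’s actual pipeline): each face image is treated as a vector of pixels, and principal component analysis extracts a small set of basis images, the “eigenfaces”, against which any new face can be compared. The array shapes and the toy random data below are illustrative assumptions.

```python
import numpy as np

def eigenfaces(faces, k):
    """Compute the top-k eigenfaces of a stack of flattened face images.

    faces: (n_images, n_pixels) array, one flattened grayscale face per row.
    Returns (mean_face, components) where components has shape (k, n_pixels).
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the mean-centered data: the rows of vt are the principal axes,
    # i.e. the "eigenfaces" of the collection.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

# Toy demo: ten random 8x8 "faces" stand in for a real face database.
rng = np.random.default_rng(0)
faces = rng.random((10, 64))
mean, comps = eigenfaces(faces, k=4)

# A face is then represented by its projection coefficients onto the
# eigenfaces; recognition compares these small coefficient vectors.
coeffs = (faces[0] - mean) @ comps.T
```

The point relevant to the interview is that the face is reduced to a handful of numbers (here, four coefficients), which is precisely what makes large-scale automated matching cheap.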
This anxiety is very clearly expressed in the expectation, the suspense in a sense, that A Distant Encounter contains. Will the sniper shoot his target? But the fundamental question I am trying to raise with A Distant Encounter is the asymmetry of power that control systems generate. As Grégoire Chamayou explains in “Drone Theory”, a new model of conflict has emerged over time. In the context of the military drone, there is a total imbalance between the combatants. The drone pilot no longer exposes his body to combat, although he is directly involved in it. It is no longer a “classic” war, but a “state of violence” whose legality can be questioned.
On the other side, the alleged terrorist has only his body to offer. We are facing a total antagonism between the drone strike and the suicide attack. But above all, it is an unshared conception of death. As Chamayou says, “each one is both the antithesis and the nightmare of the other”. There are the surveilled and the overseers. A Distant Encounter places us in the position of the one who has the power to observe but is unable to act, a mere witness of this encounter. And this uncomfortable position, like the farmer’s astonishing look at the end of the video, reminds us of the need to rebalance contemporary vision tools.
Bianca Cavuti: Today, the visibility produced by the proliferation of new and sophisticated technologies is a fact, and it risks becoming a one-way trap. What does your work say about the society of control and about surveillance?
Clément Lambelet: One of the central points is the principle of dehumanization that all these technologies generate. We find it very clearly in Happiness is the only true emotion. For this project, I used a database of portraits of different actors performing the so-called basic emotions: anger, disgust, fear, happiness, surprise, sadness. These portraits, initially created for psychological research, later found a new use in the creation of emotion-recognition algorithms. I proceeded in two phases: first, by applying a subjective treatment to the very materiality of the images in order to make this surface more fluid, shifting and expressive. The emotion is probably there, but where exactly is it in the image?
In the second phase, I submitted the 184 portraits I had made to an emotion-recognition algorithm created by Microsoft, simply to see how precisely it would work. It only recognized happiness with certainty. For the other emotions, the result is either a clear error or a mixture of contradictory emotions. We could focus on the mistakes produced by the algorithm, but the essential mistake is to think that we can reduce human emotion to a series of absolute numbers. Many people have smiled during difficult times in their lives; what would an algorithm understand about this human complexity? This ideological stance, which affirms that humanity can be reduced to mathematical abstractions, seems to me one of the major threats posed by current technologies. Especially since this ideology, embodied in algorithms and artificial intelligence, is presented as an objective and neutral truth, although it is always oriented and guided.
Bianca Cavuti: You call yourself a visual artist and an algorithm hijacker. Can you tell us about your artistic practice and your poetic strategies?
Clément Lambelet: I still have some difficulty defining my practice precisely, probably because it is far from fixed within a rigid framework. I have a background as a photographer, but I no longer limit my projects to photography alone. Similarly, I don’t necessarily need to create the pictures myself. If I can find a visual source that fits my purpose and is stronger than one I could make, I would rather appropriate it than try to produce an image that would surely be weaker.
Defining myself as a visual artist doesn’t say much about my practice. That’s why I added “algorithm hijacker”. This is quite vague, but it allows me to give a contour to my work, to associate it with algorithms and therefore with technology. The latest technological inventions, the new uses of AI, do not fascinate me in themselves, but for what they reveal about humanity: its fears, its desires, its doubts… I think that the center of my practice is above all the human being, the society he develops, his relationship to others and to the world. The use and diversion of technologies is only a way for me to make visible and question ideas that are ultimately often much older than the technologies that support them.
I have developed different strategies to realize my projects. The first was born from the “invisibility” of current control tools. Of course, we can see some of them in our daily lives, but the power of a technology is rarely found in the tool itself. To paraphrase Brecht, who has been a great inspiration for my projects: a photograph of Google’s or Facebook’s data centers tells us almost nothing about these institutions. My strategy has therefore always been to show the results of machine vision rather than the machine itself. For example, for the diptych Adam and Eve – made with a millimeter-wave body scanner used in airports – I had no interest in showing the machine directly.
This work carries a second important process: diverting the usual purpose of a technology to serve my own use, my own vision. In this case, I wanted to highlight the quasi-religious relationship most of us have with control tools. We fully accept the ritual of being electronically stripped by a machine, as evidence of our purity, with the same faith as a religious devotee. To make this connection visible, I staged myself in this scanner with a friend, taking up the classic pose of Adam and Eve engraved by Dürer in the 16th century. These echoes of history make it possible to invoke, in a subtler way, an idea of long-term time, to inscribe current ideologies in the same vein as more ancient ones.
Finally, one last approach covers the whole of Collateral Visions: the principle of networking the pieces of the project, of their constellation. I have always been fascinated by atlases, including Gerhard Richter’s, and the potential for association they offer among all the elements that compose them. It seemed important to me to adopt this method in Collateral Visions. I also included documents, data and press articles. Gathered on a table, they form a kind of caption for the images, a map for navigating the project that offers other angles of understanding.
Bianca Cavuti: A controversial application of AI research is the attempt to develop algorithms able to classify and recognize specific patterns in subjective fields, such as emotions or creativity. What do you believe are the threats and the promises (if any) of this kind of approach?
Clément Lambelet: In my opinion, subjective fields such as emotions and creativity, but also the risk of recidivism, the risk of committing a crime, insurance premiums, etc., are the most dangerous areas of application for artificial intelligence. AI is presented to us as the miracle solution to all of humanity’s problems, but it must be remembered that “AI” is a term whose meaning has been greatly exaggerated. This “intelligence” is limited to finding patterns in a large amount of data, and this raises the problem of bias, the examples of which are legion: discrimination against women, black people, the poor… These systems replicate and amplify current inequalities. They are a mirror of our society.
But as Julia Powles pertinently argues, researchers themselves have a problem with the way they hope to solve these algorithmic flaws. Bias will not be reduced through a better data filter or a larger, more personalized collection. “Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate”. It exists before data collection. Unfortunately, it remains rooted in our society, and this approach takes us away from the real question: “The endgame is always to ‘fix’ A.I. systems, never to use a different system or any system at all”. Wanting more technology to solve the problems of technology: that is how absurd it is! I hope that one day we will be able to really ask ourselves what role we want to give these systems.
Bianca Cavuti: In some of your artworks, for example Two Donkeys in a war zone, you seem to seek remnants of humanity inside increasingly dehumanizing technologies. Where did the work come from, and what are the possible strategies of visual resistance to this situation?
Clément Lambelet: I started this series while working on Find Fix Finish, which, like Two Donkeys…, is a piece about drone strikes. That series, published as an artist’s book, focuses on confidential PowerPoints from the US military that The Intercept made available in October 2015. I started from these documents, which I cropped to extract the visual elements that make up the iconography used by the army. It is a very colourful aesthetic, in which the reality of the conflict, the drone strikes, is depicted through graphics, statistics and inoffensive explosions. The series shows, in this conception of the “art of war”, the need for efficiency, the economic logic that the army deploys.
While making Find Fix Finish, I realized that these documents offered only a partial view of the complex reality of combat UAVs. In order to understand more fully what a strike is, I simply searched YouTube for “drone strike”; dozens of videos appeared. One of the first I saw shows a US army attack on an ISIS camp. The drone’s infrared camera films the scene and, between two explosions, briefly frames two donkeys.
This rather unexpected irruption led me to conceive of these videos differently. They show much more than just the strike. We also find in the frame animals, passers-by, witnesses who exist in the recorded image and therefore suffer the systemic violence of the drone. I methodically searched most of the videos I could find on YouTube or LiveLeak for all these traces of life. And while photographing my screen, I focused on those “details” at the edge of the frame that the drone pilot doesn’t consider. I tried to show life before death, life beside death. This way of diverting and reframing the camera’s process of vision allows me to question, through the image, the underlying principles of these strikes. It is a visual resistance, a way of offering a new space, of highlighting a humanity subjected to a dehumanizing external technology.
Another point I wanted to raise is the double limit of vision: that of the system and that of the spectator. Artificial vision is often described as much more precise and vast than human vision, and drone strikes are filmed with thermal cameras that make it possible to see the heat a body gives off. And even if the majority of strike videos are of very low quality, because they are deliberately degraded before being put online, numerous edifying testimonies from pilots tell in detail of the many defects of this system. To make this limit of perception visible, I chose to also show “poor” images, without quality, strongly pixelated, but which always contain a trace of life. Here we come up against another limit: we have to accept that an image does not have to be “beautiful” to be strong and essential. Its essential poverty is, as Hito Steyerl claims in her “In Defense of the Poor Image”, a fundamental testimony to its condition, origin and dispersion.
Bianca Cavuti: What do you think are the risks of delegating the human gaze to seeing machines? Do you believe that this substitution of looking also encompasses beneficial potential?
Clément Lambelet: Artificial vision has advantages, of course. What NASA or ESA does to further our understanding of the world is significant. Closer to home, the drone, in its use by the general public or by artists, has made possible the emergence of a new way of seeing the world. A drone is an interesting tool as long as it remains benign, harmless. From the moment one equips it with a Hellfire missile and the ability to track its target for hours, it becomes the agent of a remote murder system. In my opinion, we cannot criticize a technology for what it is; criticism must always be linked to a specific application. I think, moreover, that one of the biggest problems is not so much artificial vision itself, but rather its application on a very large scale. Thinkers of the image sensed a profound change with the advent of the digital image, and focused mainly on its social use.
But as Trevor Paglen powerfully states in his essay “Invisible Images (Your Pictures Are Looking at You)”, these researchers missed a central point about the digital image: it is essentially and above all readable by a machine, long before it can be seen by a human being. This fundamental change explains why we can nowadays create artificial intelligences that recognize objects and faces: only because these images, the billions posted daily on Instagram, Facebook and Snapchat, can be identified by a computer. And this reading, this networking within databases, is not innocuous. Today, it can have an impact on our lives, on the price of our insurance premiums, on our ability to fly… All this under the guise of good intentions: “Do the right thing” (Google), “Bring the world closer together” (Facebook). But as Paglen points out, these tools are now “immensely powerful levers of social regulation that serve specific race and class interests while presenting themselves as objective”.
Bianca Cavuti: It is Trevor Paglen himself who, speaking about Artificial Intelligence and seeing machines in a 2017 article, says that artists must play a central part in this debate, because better than anyone they can understand and decipher images (and, he continues, so as not to leave these matters to “guys in Silicon Valley”). What do you think the role of art in this discussion must be? Do you believe that artistic production and social activism can coexist? Do you believe there is something like art activism, defined by Boris Groys as “the ability of art to function as an arena and medium for political protest and social activism”?
Clément Lambelet: I don’t believe that my creations are a medium for political protest; at least, that’s not what I’m after in the first place. While I understand and sometimes feel the need for stronger activism, I also fear that art could become an ideological tool similar to those I am trying to criticize. However, social and political issues remain at the heart of my projects. My artistic practice is political in the original sense of the term – interested in the life of the city, the organization of power – because my main concern is the human being, the society in which he lives, the emotions that this society generates in me. In my opinion, it is a question of holding a mirror up to the unreadable and obscure areas of our society, making them visible without imposing an obvious interpretation. Moreover, the ideologies I criticize are rarely the product of a single political direction. It is much more important, in my opinion, to criticize widespread discourses and practices, such as control, than to focus on a specific political event and give its interpretation.
This allows a certain distance that seems fundamental to me in order to give autonomy to the works I create. But this autonomy does not mean there is no meaning or stake. Through these creations, I hope to make people more attentive and critical of the new control systems. In the end, I think that art must keep a distance that allows its autonomy, but artists must participate in the important discussions of their time and open invisible doors that are not yet considered. This is what I try to do during talks, also by proposing concrete solutions to the public, such as those on privacytools.io.
Returning to Paglen’s statement, he raises the issue of the artist’s place in his epoch. It is not so much our skills that make us capable of criticizing current issues; it is our position, our necessity. I don’t work for a multinational or a government, but this position is not one of withdrawal either. From this position aside, I can raise criticism and question the devices and uses that contemporary image-machines generate. It is a position, a privileged one, that compels me to act artistically. Silicon Valley engineers, under the guise of good intentions, do not see the monsters they are creating, or the pursuit of profit makes them blind. On the other hand, the public is only rarely aware of the use of digital images, which no longer merely represent life but have a direct impact on it. At this critical time in history, when power is strengthening its authority through the use of images, we can no longer accept that the blind lead the blind.