Tagging is not only something we do to promote our content or participate in a public debate: it is baked into the very logic of machine learning. When we tag we train algorithms; the stereotypes we form feed into artificial intelligence. From Google’s controversial mislabelling of black people as gorillas to Facebook’s automatic image-tagging algorithm, the visual and the cultural are clumsily converging in ways that increasingly demand a renegotiation of the relationship between humans and machines. In his fascinating book What Algorithms Want, Ed Finn writes about culture machines – assemblages of abstractions, processes and people – and highlights the importance of an experimental humanities approach to the critical analysis of algorithms. Max Dovey has been doing just that: working with stereotypes, tags, and algorithms in live settings, the UK-based artist has tried to define the “hipster” stereotype algorithmically, find out what makes a person male or female in a computer’s eyes, and highlight the human labour necessary for machine learning. I spoke to him about a few of his artworks.
Nicola Bozzi: What is your main focus as an artist?
Max Dovey: I try to raise awareness about the cracks of concern where technological acceleration outpaces our capacity to evaluate a system socially or politically – the gap between the anticipation of a technology and its eventual arrival and disruption. For example, trying to see how technology actually feels: what is the bodily experience of an algorithmic policy, or of a crypto-mining rig? I work with physical performances and live scenarios, getting people together off screen to encounter algorithmic or non-physical agents, in order to examine them and reflect on them critically.
Nicola Bozzi: In your works How to Be More or Less Human and A Hipster Bar you worked with stereotypes and how algorithms interpret them. It made me think a lot about culture machines and algorithmic imagination, concepts developed by Ed Finn in What Algorithms Want. Would you say your work deals with that?
Max Dovey: I relate to this. In How to Be More or Less Human the audience are present and see me perform with image-recognition software that is tagging me, while I try to conform to the stereotypes the software has been trained to recognise as either male or female. They watch a human performer struggling to fit within the confines of an image-recognition tag. The piece shows how the tag and the categorisation itself are predetermined, but also how they can be tricked and fooled by doing simple things like removing a tie or carrying a briefcase – simply presenting these decoy objects to the webcam so the data set is fooled into saying something that is not true, in order to highlight software bias. I feel the audience experiences the wider social critique when I finally remove my clothes and assume a very vulnerable position, the desperate position of a performer who wants to be recognised as 100% man. Perhaps there is always that urge to be easily classified, easily searchable, Googlable, page-ranked, so we can be tracked and traced and sorted – and remembered, essentially. As for Ed Finn’s book, I have yet to read it, but I’m not sure about the word “imagination” in the context of algorithms. There is a lot of spectacular hype around these machine dreams, with people commenting that this is the visual aesthetic of machines dreaming, but when people say “machinic imagination” I immediately think about the trial-and-error training used in machine learning (specifically neural networks). I wouldn’t categorise the processes implied in that as “imagination”, but rather as brute-force determination.
Nicola Bozzi: What is interesting, and relevant to your work, is the role of humans in this algorithm-training process. When I tried the Hipster Bar installation I was an easy match, but there was a woman with short hair – which is quite a hipster haircut for a woman – and she got 3%. So I was wondering what data set you used, how you trained the algorithm, whether you factored gender in…
Max Dovey: This is a good question. I’ve been in a bit of a dilemma, because originally the premise was to choose a cultural stereotype that I thought would be impossible to train an algorithm to recognise. I thought, slightly naively, that my own definition of “hipster” – a term used to describe anyone you didn’t really like – was universal, because people would often use it in a negative way. I hadn’t considered the visual characteristics, or that there was a visual cultural stereotype at all. Then I got to scraping Instagram and accumulated a data set by downloading all the images that contained the hashtag “#hipster”. I used the hashtag as a search term to navigate an enormous amount of hipster imagery on social media, but unfortunately there were a lot of pictures of dogs, avocados, and coffee cups. I did not want to bring my subjective interpretation into accumulating the training data for the hipster-recognition algorithm; however, whenever I showed other people the training data for their thoughts and feedback, they would have very differing views on who or what counted or could be classified as ‘hipster’. Everyone had their own subjective bias, so in the end I went back to letting the data decide, without filtering or moderating it too much: as long as the image had a face and was posted with the #hipster tag, it entered the initial training sample. I wanted it to be like that, to show how tagging can congregate these stereotypes, but as the project has gone on I have had to intervene and curate the data set slightly – mainly to keep the data based on the original Instagram images, because every time I install the piece somewhere I submit those images back into the data libraries. I occasionally go through and make sure they are still full of human faces, but I don’t spend too much time debating who is or isn’t a hipster, as the algorithm has already decided and individual opinion is far too subjective.
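The sampling rule Dovey describes – an image enters the training set only if it was posted with the #hipster tag and contains a face – can be sketched roughly as follows. All names here are hypothetical, and the face check is a stub standing in for a real detector (such as OpenCV’s Haar cascades):

```python
# Hypothetical sketch of the sampling rule described above:
# an image enters the training set only if it was posted with
# the #hipster tag AND contains a detectable face.

def has_face(image):
    # Stub: a real implementation would run a face detector
    # (e.g. OpenCV's Haar cascades) on the image pixels.
    return image.get("face_detected", False)

def build_training_sample(scraped_images):
    """Keep only images tagged #hipster that contain a face."""
    return [
        img for img in scraped_images
        if "#hipster" in img.get("tags", []) and has_face(img)
    ]

scraped = [
    {"id": 1, "tags": ["#hipster"], "face_detected": True},   # kept
    {"id": 2, "tags": ["#hipster"], "face_detected": False},  # a latte photo
    {"id": 3, "tags": ["#coffee"], "face_detected": True},    # wrong tag
]
sample = build_training_sample(scraped)  # only image 1 survives
```

The point of the sketch is that the rule is entirely mechanical: no human judgement about hipster-ness enters the loop, only the tag and the face check.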
Nicola Bozzi: So what kind of sample did you end up with?
Max Dovey: I would say approximately 80–85% of the sample came from Instagram through automatic web-scraping scripts, but then I had to manually add some advertising material, like clothing brands and fashion items. The main problem was that I also needed a visual database of non-hipsters to compare every photo against, in order for the two-class (binary) image classifier to work effectively. To work out what something is, you have to know what it’s not: to recognise a hipster you need a sense of what a ‘non-hipster’ looks like. I had to have a folder of non-hipsters, and that was the really difficult part. Stuff tagged “#nonhipster” on Instagram was in fact, in my opinion, much more hipster than the stuff under “#hipster”. The former felt a lot more relevant, because people in the West ironically subvert the term by tagging content as #nonhipster, which in a strange way yields much better results. #Hipster has a more global reach, with more Chinese, Asian and Eastern interpretations of the stereotype that were new to me. Again, that goes back to my naivety, not knowing what the visual characteristics of the term were. Since 2016 I have updated the software three times, with new images each time. Now it’s a kind of supervised learning: I save all the metadata from everybody that comes to the bar, and the images are added to one of the two folders according to the choice the algorithm has made. Occasionally I do some moderation, going through the data to make sure they’re still in order. But then I feel uncomfortable, because I become the judge, which was not the initial intention of the work – the more the algorithm remains a reinforcement of online categorisation, the better. Otherwise you might as well have a human bouncer deciding who is a hipster and who is admitted into the bar.
Nicola Bozzi: In How to Be More or Less Human you submit your own body to algorithmic judgement and, quite literally, tagging. Was that process more straightforward, or did you also have to tweak it like the Hipster Bar algorithm?
Max Dovey: This process is an unintentional side effect of performing with machine-learning algorithms. While highlighting a critical concern about algorithmic bias and performing this concern over and over again, you essentially contribute to developing and accidentally training AI algorithms. I was originally doing that with a commercial image-tagging software service, and was fascinated by how the software (in)correctly identified gender. After repeatedly doing the performance, in which my gender is interpreted by image-recognition software until it mistakes my naked body for a woman’s, my body became ingrained in the database. I had not considered that, since the images I was taking were being sent to the software company via their web API, copies of those images would become training data for other image-recognition applications. So now the performance does not exist anymore: the software has upgraded to version 2, and it now recognises my body as male. That is also why I moved on to different topics. In the performance it got harder and harder to show the software’s mistakes, to the point where I wound up contorting my body into strange positions in a way that wasn’t interesting or insightful for an audience. With version 1 I was exposing the software, but as the show went on the algorithm had a lot more control, it was getting more familiar with my body, so the punchline of the piece – its incorrectness – was getting lost. I ended up just looking really, really silly, and it’s good to have a whole audience there to highlight that. So the work helped improve the algorithm to recognise nude male bodies; it played a role in defining, introducing, developing and enforcing those tags – which is also interesting in terms of labour.
Nicola Bozzi: This reminds me of another recurring theme in your practice, which is work – you have the music band The Precariats, the Work, bitch piece… this theme of working through the machine. As an artist who is precarious by definition and also talks about precarity in his work, how do you feel the figure of the artist conflates with that of the precarious worker? And is being more artistic perhaps a way for the precarious worker to escape, or to cope?
Max Dovey: I don’t really know how to answer this question, it’s a big issue. I also don’t know how categorisation falls into the precariousness of labour, but I have a project that perhaps highlights it: the game show HITs (Human Intelligence Tasks). In that show the audience competes in two teams, trying to make an image-recognition algorithm in one hour. We go through the human labour involved in accumulating the data, categorising the data, and then essentially building an image-recognition application. We started off with a narrative from the Flickr community, from when the platform introduced auto-tagging; it’s in the game show, we present it at the beginning. Basically, Flickr got sold to Yahoo!, and the only way Yahoo! saw to monetise the platform was to start using those images as training data for image-recognition algorithms to sell later on. There was a huge debate in the community about the feature, with users really lashing out at the mods because they hated auto-tagging so much and it was making all these mistakes. But Flickr continued to use auto-tagging because they needed the software to improve, so the users could only correct it. Basically they turned their users into cognitive labourers in exchange for a free platform, and some people even made that argument in the comments, saying things like: “We get the service for free, so what’s the problem if we are just making a few corrections?” It’s a very interesting swindle, how the web 2.0 companies of the last 10 years have had to manoeuvre into something more profitable: while ads were being centralised by Google, competing platforms had to go to AI. So we wanted to perform this labour in a show, with people doing it in a fun way, but also highlighting something that happens and is not particularly picked up on.
There are a few rounds: in the first, the audience downloads images from Flickr; in the second they write a sentence to describe each one, typing very quickly against the clock; and in the third the developers look at the sentences and work from there. It’s a bit like Exquisite Corpse, the Surrealist game.
Nicola Bozzi: Why do you focus on the performative act and physical experience? Does experiencing technology together make it different?
Max Dovey: There is a very personal reason why I always use performance and focus on experience design: my preferred mode of art practice is live, experiential art. Using it to explore critical concerns in technology and culture can sometimes be a bit of a cheap trick, if you’re just highlighting the physicality – taking the fish out of water by printing out the data sets to point out the difference between the material and the immaterial. We try to avoid that. The main thing for me is the interpersonal relations, which become quite absurd. If you were denied access to a website for any reason, you would simply move on, and that automated rejection wouldn’t have been witnessed by anybody. But when you create a space where these decisions can be experienced with a collective audience – again, the Hipster Bar, with other people already in it – it suddenly makes the algorithmic decision-making of non-human agents more public, and these decisions are involved in other areas of life as well. I think it’s interesting to bring an audience together around those interactions, to study them and see that it’s all a bit silly. Even the practice of live coding, for example, or people going to an algorave – they have different motivations, but they are experiencing the process of working with machinic culture in a live setting. It’s a lot more productive and enigmatic than just making more online content. I’m still a firm believer in the power and potential of meatspace relationships, and I think most people would agree. That’s where I try to situate the art.
Nicola Bozzi: I guess that also has implications in terms of categories, you instantly have a recognition of community – people who have also been rejected, people ridiculing the very distinction, maybe…
Max Dovey: There is something really interesting about the way communities are formed around the tagging and categorisation of cultures and taxonomies. It splits the crowd and you end up with these different pockets. I really like what Wendy Chun writes about homophily, people wanting to group and affiliate with people who are similar to them. There is a shared interest in categorisation and tagging between the online platforms that want to smooth the flow of information and our own human nature.
Nicola Bozzi: Do you think technology can help us connect with each other politically, even in a social space like your performances? Is there a political edge to this type of homophily? There has been growing skepticism and criticality about technology…
Max Dovey: I think what we’ve seen in the last few years in Europe and the West is very obvious. A funny point about my work is that I think there is no more work to be made about surveillance – nothing to say that hasn’t already been said. As an artist you sometimes say you want to raise awareness of certain issues, but this message has been received loud and clear; it’s common knowledge. Facebook was on trial last year, the secrets are out. Instead of working critically with technology I am focusing more time on communities around technology and providing infrastructure to sustain alternatives.