Imagine a public library. A very ordinary one, with shelves and shelves full of books grouped by genre and ordered alphabetically. The way this space is organised is being challenged by new forms of information processing. In recent years, libraries have been especially interested in applying computational methods to sorting, organising and managing their inventory. But the arrival of such optimised resources may have other implications.

The tedious task of sorting through large collections of books can be delegated to weeding software that logs book circulation and chooses which books the library should get rid of. Books that have not been checked out for a certain amount of time are discarded. If, for example, a copy of Victor Hugo’s The Hunchback of Notre-Dame were not borrowed by anyone for two years, it would be thrown away. By relying so strongly on circulation, the system bypasses the judgment of librarians, who have a more profound understanding of the value of particular books.

In a small public library in Sorrento, Florida, two employees decided to confront the system their library was using. They created a fictional patron called Chuck Finley and used this account to check out over 2,500 books in nine months, inflating circulation figures in the system. In this way, they were able to rescue books they considered valuable from automated purges of low-popularity titles. The problematic relationship between an algorithm and human librarians was thus resolved by human cunning. When the scheme was discovered, officials at the East Lake County branch took measures against the employees, because their imaginary reader’s appetite had altered the library’s circulation rate by 3.9 percent, which could have fraudulently increased the branch’s funding.

As society embraces computational tools as media of expression and decision-making, the way we navigate the internet is increasingly processed and shaped by algorithms. They identify and anticipate what we need to know in our information ecology. They filter and manage the use of communication channels much like unpleasant design[1] does in the urban realm. But what exactly can be considered unpleasant in the operation of an algorithm? Or rather, are there design patterns in the digital realm that could be compared to unpleasant design in public space? In the rest of this text, we will look into the design of the algorithms we interact with on a day-to-day basis, from online purchases to social networks and surveillance systems. These systems appear to be part of the public sphere, where we can freely exchange opinions, but the way they operate is closer to that of theme parks and shopping malls.

Unpleasant Design – Objects in Public Space

Every design process, from both a technical and an aesthetic perspective, is ideological. Hence, the features, structures and modes of operation through which a user interacts with a designed object can be controlled. In the book Unpleasant Design, we were able to identify and document a certain language of unpleasantness. This language comes out of a design approach that imposes a particularly narrow use of the object. It is a set of discouragements and prohibitions, maintained and promoted by silent agents.

Unpleasant design implementations are mostly architectural interventions in public space: bumpy surfaces that prevent rough sleeping, garbage bins whose inclined tops promote city cleanliness, or lights that make drug use in public toilets nearly impossible, and highly dangerous, by rendering veins hardly visible. The most common example is certainly urban furniture designed to discourage sleeping while still providing some form of seating. The most extreme, on the other hand, is perhaps the one that connects the digital and the physical in an uncanny way: the anti-suicide nets used at Foxconn facilities, where a large portion of our consumer hardware (iPhones, Kindles, Nintendos, Xboxes, etc.) is produced. The nets are installed above ground level to catch people who jump and make their attempt futile.

Unpleasant Design is not about the failure to make beautiful products, nor is it about merely uncomfortable or unusable objects. The cold mechanism behind these silent agents – be they nets or benches – may well remove a problem from a particular place, but it will always fail to address its root cause. Yet removing rough sleepers from residential areas or loitering teenagers from shopping malls does make these spaces easier to manage and more expensive to rent. Unpleasant design thus successfully excludes certain social groups and restricts certain uses of objects, while raising the value of a product or its surroundings.

Unpleasant design operates in public space, and by being placed there it creates social friction. Placing an obstacle against skaters in one’s private garden is different from doing so in a public park. In public space, there is an expectation of accessibility – that access is granted to everyone, or that it can at least be negotiated. With the unpleasant design approach, by contrast, firm, immutable objects stand in for a policy or a rule and ensure it is applied without debate.

Most of us would probably not advocate making public space comfortable for sleeping, so as to encourage homeless people to spend the night (or day) on benches in the parks and squares we frequent. We could probably agree that we don’t wish for anyone to be homeless in the first place. The question, however, is what can be done about the homelessness that is already there. Should design prevent it, should it accommodate it, or should it simply ignore it?

Accessible Space for Public Life

The struggle over accessibility and the right to public space can be traced back to the origin of cities. In ancient Greece, access to the public sphere required being a free citizen, and thus owning property. In medieval Europe, the market and the church were the centres of public life. The sociologist and philosopher of the public sphere Jürgen Habermas defined public space as the space where public life takes place[2]. He observed that with the rise of bourgeois society in the Renaissance came the expectation of accessible public space, where individuals would freely exchange opinions and knowledge.

Conversely, a new kind of space has emerged in recent years, one the architect Miodrag Mitrašinović theorized in his book Total Landscape as PROPASt[3]. Publicly accessible, Privately Owned Spaces such as theme parks and shopping malls constitute a strange mix of accessibility and publicness. Social interactions here are the same as in public space – people meet and exchange opinions. However, these spaces are maintained, managed and controlled by private entities. This kind of arrangement is increasingly infiltrating urban areas as well, in the form of Business Improvement Districts, or BIDs. Essentially open-air shopping malls, BIDs put the traditional shopping street under the management of private entities. They are notorious for transferring authority from politically accountable public officials to unaccountable private actors – for putting public space in private hands. Unpleasant design is often used to maintain the etiquette prescribed by the companies: cold, uncomfortable seating, strong lights, video surveillance with facial recognition, and so on.

Digital Unpleasant Design

Rules and regulations in shopping malls are comparable to terms and policies of numerous online services that are offered for free. Rules here are often more explicit – and the documents describing them much more complex – but are similarly restrictive. Everything is allowed, as long as it brings some profit to the company and does not interfere with the company’s control over data and algorithms. Thus, this important channel for a publicly accessible exchange of information and opinions – just as public space was for Habermas – is entirely in the hands of private tech companies. Can there be such a thing as digital unpleasant design?

Uncomfortable Websites and Biased Algorithms

A pattern similar to the unpleasant design language can be observed in the persuasive techniques many websites use today to influence visitors’ choices or attract visibility. At the same time, decisions on all kinds of levels are increasingly made by algorithms, which carry latent biases against certain social groups, lifestyles, nationalities, etc. – comparable to the way unpleasant design is used to exclude certain groups and restrict uses of space.

Airline companies often take advantage of the ticket purchase process to offer additional services: car hire and hotel bookings, travel insurance and upgrade offers. The customer often faces suggestions that twist the meaning of their actions – phrased, say, as declining to save money when in fact they simply don’t want to pay a membership fee. Unsubscribing from a newsletter is always easy to reverse. Rather than simply letting you check in, the website strongly suggests upgrading to Business class. Rather than simply selling the item you were looking for, a website suggests adding a “free” item, for which it charges shipping costs. These techniques in web design are sometimes called dark patterns – human-centred design driven by marketing. They are essentially a collection of experiences whose overall pattern is to mislead and misguide people into purchasing or otherwise agreeing to something that was beyond their intentions[4].

There is, however, no publicness in online shops, airline ticket purchases and newsletter subscriptions. Those are all outlets of commercial services which do not, by themselves, constitute “a right” on the internet. Connecting with people, however, is slightly different. When social networks such as Facebook promote their free service as a way to get in touch and to facilitate the exchange of memories and opinions, they take on the appearance of something belonging to the public sphere. This, however, is far from reality, as these platforms are owned by private entities. They are built on proprietary code and served through infrastructures that are entirely privately managed. Significantly, the use of these services is regulated by long and complicated Terms of Service agreements. To use a free service, then, one is required to agree to a long list of statements, which most of us never even read.

Staying with the insistence on publicness as a criterion for identifying digital unpleasant design, there is one instance where three spheres – websites, infrastructure and public space – meet. Publicly accessible WiFi is often available in public space. Similarly to the ancient Greek model, access to this augmented agora is granted to users with a password. The majority of open wireless networks still found in public space today are provided by businesses such as restaurants, banks and shops. All these networks require some kind of authentication and acceptance of Terms of Service agreements. Surfing the internet is free, in exchange for some personal information (e.g. a phone number) or simply an act of agreement. If we examine the rules set up here by network providers, we find various measures that can be taken to restrict unwanted behaviour and traffic use, such as reducing bandwidth or blocking certain users. Access to free WiFi is thus explicitly regulated and managed by the private entities providing it, similar to what we have seen happen in BIDs or theme parks[5]. The problem with limiting access to a wireless network is very similar to limiting the use of a public bench: if the network propagates through public space and is open to the public, then this proprietary design of rules comes very close to our definition of unpleasant design – successfully excluding certain behaviours and restricting certain uses, while raising the value of, in this case, a service.

Next to free WiFi, we often encounter video surveillance in public space. The cameras, stand-alone or connected in closed circuits, cast their view over streets, corners, and public as well as private spaces. This view can be monitored live, from the darkness of CCTV control rooms. In traditional surveillance rooms, people sit in front of a number of screens, watching for anything unusual and reacting to alerts about suspicious activity. With advances in computer vision, this task is increasingly transferred to algorithms. Silent agents are thus programmed into software that raises an alert at any suspicious behaviour. Where a person wearing a black hoodie might go unnoticed by human observers, or at least not alarm them, well-trained software will not miss a single one and will alert on every appearance. Using facial recognition, CCTV systems can identify human faces in the image. Police systems use facial recognition to compare, in real time, images of faces on the street against “hot lists” of people suspected of gang activity or subject to an open arrest warrant[6]. One big problem with computer-vision-based surveillance is its undeclared bias. The public has no way of knowing what is programmed into the software, and it is quite hard to check. Yet there is ample evidence of bias in algorithms, in what they recognise as human faces and in how those faces are matched against other faces. Racial bias was, in fact, already present in analogue photographic techniques: colour film was apparently calibrated for white skin and gave notably poor results when pictures of non-white people were developed, most obviously in “mixed” scenes.

Facial recognition software is trained with machine learning on sets of images – for example, images labelled “this is a face” and “this is not a face”. If the training set is not diverse enough, anything that deviates too much from the established norm will be harder to detect. This can be due to a person’s facial features, or simply the colour of their skin. There are many examples of camera software failing to recognise black people’s faces[7]. Another typical case is blink detection in digital cameras, which mistakes Asian eyes for blinking. Joy Buolamwini, a researcher at the MIT Media Lab, has noted how quickly this bias travels – as quickly as it takes to download a generic facial recognition library (such as OpenCV) from the internet. The bias engrained in this code propagates to every application that uses it. Training sets, too, are often part of the problem: they tend to be simply large collections of whatever was available rather than an objective representation of diversity – they rarely reflect populations in fair ratios.
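To make the mechanism concrete, here is a minimal sketch, in Python, of the typical way a generic, pre-trained detector is dropped into an application. The input image name is hypothetical, and the Haar-cascade model shipped with OpenCV stands in for any downloaded detector; whatever skew exists in that model’s training data travels with it, unexamined, into every deployment.

```python
import cv2  # OpenCV: a widely used, generic computer vision library

# Load a pre-trained face detector bundled with OpenCV. The person
# deploying it never sees the training images; any bias in that set
# is inherited silently.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Hypothetical input frame, e.g. a still from a CCTV feed.
image = cv2.imread("street_scene.jpg")
if image is None:
    raise FileNotFoundError("street_scene.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detection thresholds were tuned by the library's authors, not by the
# operator; faces far from the training norm are simply returned less often.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")
```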

Manifestations of algorithmic bias are abundant, and they do not concern computer vision alone. Algorithms also inform decisions about whether or not someone will get a loan or go to prison, or whether parents can take care of a child[8]. But instead of extending the discussion in that direction, we should come back to the core question about unpleasant design: whether or not it is possible in the digital realm.

Digital Citizenship and The Impossibility of Unpleasantness

Design theorists and philosophers such as Herbert Simon, Vilém Flusser and Tony Fry recognised that we cannot understand ourselves without understanding the things we create. The artefacts and algorithms we design thus offer themselves up for an interpretation of society. Identifying dark patterns in web design and algorithmic biases in facial recognition is an attempt to meaningfully delineate these areas of interest. We have seen a few places where algorithms converge with public space or the public sphere: communication, as in social networks; surveillance, as in facial recognition; infrastructure, as in public WiFi. One important difference between physical and digital designs of intentional unpleasantness is that anti-homeless studs, inclined garbage bin tops and similar agents operate on a sensory level in public space, whereas in the digital sphere their counterparts work on a cognitive level, while circulating in a realm that is at once virtual and privately owned. To conclude, here are a few remarks on the reasons we should not consider these to be part of the unpleasant design phenomenon.

Among the numerous examples of persuasive and ugly websites, biased algorithms and inaccessible infrastructures, only some can be considered potentially unpleasant. But what, if anything, is public in the digital sphere? Digital interactions take place in the exclusively private space of network infrastructures. They rely on communication services such as Twitter, Facebook, Google’s Gmail or Microsoft’s Skype, which are offered by commercial companies free of charge but operate in a proprietary algorithmic space. Yet these technical stacks of privately owned infrastructure layers increasingly become our primary outlets of expression. Conversations are mediated by communication channels that encode and transmit messages relying on code and protocols. Some of those conversations are entirely private, while others, like political campaigns and the coordination of protests, address the public sphere. Can it be that the notion of the public sphere is changing with these new communication technologies?

The continuous and careful redesign of Twitter’s and Facebook’s feed algorithms confirms the importance given to the attention of their users. In 2016, Twitter introduced a new algorithm that orders the feed by relevance rather than chronology. Facebook’s news feed algorithm is likewise tuned to our attention, which the company claims to satisfy with relevant content, relativizing in this way important issues such as fake news or offensive content – deemed more relevant to some people than to others. We have to challenge this kind of filtering when it comes to political outcomes or to the way algorithms are introduced into human knowledge. Still, the debate around these issues usually ends at recognising Facebook’s responsibility for the content served through its platform, precisely because it is a private “space” – and because this space is governed by algorithms that are more than rational: they are super-rational, always accelerating the measurement and optimisation of circulating content, and thus hard to challenge as decision-makers.
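The shift from chronology to relevance can be illustrated with a minimal, hedged sketch – not the platforms’ actual code. The Post fields and the predicted_engagement score are hypothetical stand-ins for whatever an attention-optimising model would output.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    predicted_engagement: float  # hypothetical output of an attention model

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The "old" ordering: newest first, with no judgement about relevance.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def relevance_feed(posts: list[Post]) -> list[Post]:
    # The "new" ordering: whatever the model predicts will hold attention
    # rises to the top, regardless of when it was posted.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```

Seen this way, the editorial decision is reduced to the choice of a sort key – one the user never sees and cannot change.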

Before we can start talking about the expectation of accessibility or fairness on these platforms, we need to think about articulating a lawful framework for digital citizenship. This is not a task of simply “calling” something by that name, of changing a few words in the way we talk about rights on the internet or in public space. It is about becoming more familiar with these structures and inventing ways to address, as a society, phenomena that are purely digital, situated in communication infrastructures.


[1]     Unpleasant here refers to a specific phenomenon which the author calls “unpleasant design”, and on which the author co-edited a book of the same title: Gordan Savičić and Selena Savić, eds., Unpleasant Design (Belgrade, 2013).

[2]     Jürgen Habermas, The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society (Cambridge, Mass.: MIT Press, 1989).

[3]     To read more on his take on the urban processes of privatisation of public resources, see Miodrag Mitrašinović, Total Landscape, Theme Parks, Public Space (Aldershot; Burlington, VT: Ashgate, 2006).

[4]     For a collection of these practices, see https://darkpatterns.org/

[5]     There are ways to circumvent some of these restrictions. To combat the time limitations imposed by certain free network providers – 30 minutes to an hour at many airports and transportation hubs – the artist Kyle McDonald offers a simple piece of Python code that changes the MAC address of a networked device in an automated manner (also known as MAC spoofing): https://medium.freecodecamp.org/free-wifi-on-public-networks-daf716cebc80
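A minimal sketch of the same idea (not McDonald’s script; the interface name and the Linux net-tools ifconfig invocation are assumptions, and macOS or iproute2-based systems would need a different command) could look like this:

```python
import random
import subprocess

def random_mac() -> str:
    # Locally administered, unicast address: set bit 1 and clear bit 0
    # of the first octet so the address cannot collide with vendor MACs.
    first = (random.randint(0, 255) & 0b11111100) | 0b00000010
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{octet:02x}" for octet in [first] + rest)

def spoof(interface: str = "eth0") -> str:
    # Bring the interface down, assign the new address, bring it back up.
    # (net-tools syntax; macOS would use `ifconfig en0 ether <mac>`.)
    mac = random_mac()
    subprocess.run(["sudo", "ifconfig", interface, "down"], check=True)
    subprocess.run(["sudo", "ifconfig", interface, "hw", "ether", mac], check=True)
    subprocess.run(["sudo", "ifconfig", interface, "up"], check=True)
    return mac

if __name__ == "__main__":
    print("New MAC address:", spoof())
```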

[6]     Facial recognition software can also be tricked – see, for example, the work CV Dazzle by the artist Adam Harvey.

[7]     For an overview, see for example this Guardian article on facial recognition and racial bias: https://www.theguardian.com/technology/2016/apr/08/facial-recognition-technology-racial-bias-police

[8]     For a good overview of algorithmic bias, see Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, first edition (New York: Crown, 2016).