Mike Tyka is an artist and researcher with a PhD in Biophysics. After his scientific studies, in 2009 he became involved in the project Groovik’s Cube, a functional, multi-player Rubik’s cube installed in Reno, Seattle and New York. Since then his artistic work has focused both on traditional sculpture and modern technology, such as 3D printing and artificial neural networks.

His sculptures of protein molecules use cast glass and bronze and are based on the exact molecular coordinates of each respective biomolecule. They explore the hidden beauty of these amazing nanomachines and have been shown around the world from Seattle to Japan. Tyka also works with artificial neural networks as an artistic medium and tool. In 2015 he created some of the first large-scale artworks using Iterative DeepDream and co-founded the Artists and Machine Intelligence program at Google.

In 2016 he was an invited speaker at the UC Berkeley Center for New Media, the Google Cultural Institute Summit in Paris, the Alt-AI Conference in New York, Digi.Logue in Istanbul, Google SPAN Tokyo, the Magenta Conference and the Research at Google conference. His latest generative series Portraits of Imaginary People has been shown at ARS Electronica in Linz, Christie’s in New York and at the New Museum in Karuizawa (Japan).

His kinetic, AI-driven sculpture Us and Them was featured at the 2018 Mediacity Biennale at the Seoul Museum of Art and is currently on show at the Mori Art Museum in Tokyo as part of the exhibition Future and the Arts: AI, Robotics, Cities, Life – How Humanity Will Live Tomorrow.

Teresa Ruffino: During your academic studies, was there a moment when you realized the potential of biochemistry, biotechnology and biophysics to be exploited in a creative way? Or was this something that you developed later on in your career?

Mike Tyka: I definitely first started appreciating and thinking about the inherent beauty of biological molecules when I started working with them scientifically, but the idea to actually create physical art out of them did not come until later, when (somewhat by accident) I became involved in an art installation project (Groovik’s Cube) and discovered an interest in making physical art.
Since I was already steeped in the visuals of biomolecules, I guess that’s what first sparked my inspiration, and so for the first few years of my art career I created sculptures of such molecules in various media, mostly bronze and glass. I found it interesting to think about the fact that the choice of representation is truly arbitrary – these molecules are smaller than the wavelength of visible light, and thus the question “But what do they *really* look like?” isn’t meaningful.
Every depiction is simply a representation of a particular aspect or property of our choice. In this way my copper spiral pieces are depictions of the winding of the protein chain and its fold, while the glass ones are representations of the volume the molecule takes up.

Teresa Ruffino: Is there a difference in your mindset, or in the mental process leading to the creation of an artwork when you are working on one of your Molecular Sculptures or works like Harvesting the Sap or Dopamine compared to the works that involve neural network technologies and algorithms? Do you have different questions and issues in mind, or do you feel that these different suggestions work together?

Mike Tyka: Yes, in my own work I find the process quite different, though I don’t think this has to be true for everyone. When working on physical sculpture I generally have a relatively complete picture in my mind of what the final object should look like. The process is then figuring out how I can make it, what techniques I need to learn to do it etc.
When working on neural or generative work I rarely start with a final picture in my mind. Instead it is much more an exploration of a technology to see how it will behave, how it can be steered or guided. Eventually some idea crystallizes and I pause to create a piece of work. Then I continue tinkering until I find something else, interesting or surprising or pleasing.

Teresa Ruffino: You contributed to the breakthrough research into the way algorithms “see” when classifying images, in Google’s well-known DeepDream project. What were the challenges, in your opinion, in helping with the correct interpretation of this new kind of image, so rich in signs and so easily misunderstood by the general public?

Mike Tyka: I would say that in practice DeepDream by itself is not typically used to interrogate the function of neural networks. However, the ideas and techniques related to it have been developed further by Alex Mordvintsev and others in ways that allow them to interrogate what single neurons in a trained neural network respond to. That said, any visualization that tries to interpret what particular neurons or sets of neurons respond to in a neural network is inherently limited.

That is because the neurons in any particular neural network work together in complex ways which are not possible to fully disentangle, as they really operate together on a high-dimensional space in which the signals are embedded. In many ways the challenge of understanding the innards of an artificial neural network and explaining its decisions or classifications remains unsolved and is an area of active research.

Teresa Ruffino: Can you talk about the process of creating the series AI: DeepDream, featuring these images as artworks?

Mike Tyka: When I first encountered the DeepDream method, I immediately started exploring its potential to create interesting imagery. Originally the method was simply applied to a photo, essentially overlaying a layer of computational pareidolia. The algorithm would sort of enhance what it already saw in the image, but the underlying image was still the scaffold on which everything else rested.
I was interested in creating something that came entirely from the neural network itself, without the need for a starting photo. I found what worked really well was to simply keep applying the algorithm while also zooming in at each step. Doing this repeatedly creates a kind of fractal, self-similar image of arbitrary resolution (the more you zoom in, the more detail is hallucinated by the algorithm).

It yields quite psychedelic imagery that is curiously compelling to look at. By changing which neural net layers or which neurons are allowed to be amplified, the outcomes can be steered in different directions. Of course, all the patterns generated stem ultimately from what the network was trained on. Those explorations led to the series AI: DeepDream.
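The zoom-and-amplify loop Tyka describes can be sketched in a few lines. This is a minimal illustration only: in the real method, `amplify` is gradient ascent on the activations of a chosen layer or neuron of a trained network, whereas here a simple contrast boost stands in for it so the sketch stays dependency-free. All function names are illustrative, not from Tyka's actual code.

```python
import numpy as np

def amplify(img):
    # Stand-in for the DeepDream step: the real method runs gradient
    # ascent on a trained network's layer activations; here we just
    # boost contrast so the loop structure is runnable on its own.
    return np.clip(img + 0.1 * (img - img.mean()), 0.0, 1.0)

def zoom(img, factor=1.05):
    # Crop the centre and resize back to the original resolution
    # (nearest-neighbour, to avoid an image-library dependency).
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    ys = (np.arange(h) * ch / h).astype(int)
    xs = (np.arange(w) * cw / w).astype(int)
    return crop[np.ix_(ys, xs)]

def iterative_deepdream(img, steps=10):
    # Each iteration zooms in slightly, then re-amplifies what the
    # "network" sees, so detail is hallucinated at every scale.
    for _ in range(steps):
        img = amplify(zoom(img))
    return img

frame = np.random.rand(128, 128, 3)   # start from noise, not a photo
result = iterative_deepdream(frame)
```

Because the output of each step becomes the input of the next, the image stays at a fixed resolution while the content recedes endlessly inward, which is what produces the fractal, self-similar quality he mentions.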

Teresa Ruffino: Us and Them is an installation that features different media, currently on display as part of the exhibition Future and the Arts at the Mori Art Museum in Tokyo. This work reflects on the impact of bots and AI in the communication of ideas on social media, particularly focusing on the political implications of the machine-generated thoughts based on those expressed, in this case, on Twitter during the 2016 elections in the United States.

Us and Them is capable of recreating in real life something that the user faces every day online: the difficulty of ignoring all the possible influences that these targeted messages have, and of distinguishing what is real from what is not in politics and information. Can you explain the idea behind combining Portraits of Imaginary People with the neural-net-generated texts? Is there something that you noticed in the reaction of the viewers towards the installation that was interesting for the meaning of the work?

Mike Tyka: Yes, Us and Them grew out of Portraits of Imaginary People, and you have summarized the idea behind it exactly. I think it grew simply out of me thinking about the meaning of the term “imaginary people” against the backdrop of this very technology, and what it would likely be used for once mature. When I made Portraits in 2017 it was still quite difficult to create high-res images that looked completely realistic, and that uncanniness is easy to see in the finished pieces.
I actually really enjoyed that aesthetic and was not trying to make it photorealistic, but I watched the technology continue to improve to the point of near photorealism within just 6 months of me first showing my work. That made me ponder the implications of photorealistic GANs and other generative algorithms which could fool people, especially with what had happened during the 2016 election in the US.

The ability to reach billions of people anywhere mixed with neural nets to create believable content in an automated fashion was quite concerning to me and thus inspired a deeper investigation. Those thoughts and investigations became Us and Them.

Even during the generation of the text content (the tweets) themselves, I watched tremendous progress in the technology over the lifetime of this project. In the first version (installed in Seoul in 2018) I used a technology called LSTM to generate the tweets. The results were somewhat realistic but, in many ways, more of a surrealistic remix of the input training data.

By the following year (for the installation in Tokyo) I decided to upgrade the algorithm to the more recent neural net architecture called “Transformer”, applied to the same training data and using the same hardware. Now the generated tweets read as really quite realistic, and many of them I could not distinguish from real ones if asked to. That is an incredible, perhaps unprecedented, pace of progress, and our regulations, laws, culture and customs have a hard time adapting to the technological possibilities fast enough.

Teresa Ruffino: In collaboration with Refik Anadol, media artist and lecturer, during his period as a resident artist at Google’s Artists and Machine Intelligence Program, you imagined a new kind of environment where the archive is perceived in a completely original way. Can you talk about the goals you had when creating Archive Dreaming?

Mike Tyka: Archive Dreaming was one of the very first art projects anywhere to use GANs. At the time the maximum achievable resolution was still pretty low. Refik had access to this very large dataset of images and documents from the SALT institute in Turkey and wanted to do something about dreaming. We didn’t have a clear idea of what we wanted to do, but I experimented with simply training a custom GAN on the archive of images.

I then tried generating images using the trained network and exploring its so-called latent space. We really liked the aesthetic of the images we saw. They very much had a feeling of fleeting, incomplete memories passing in a dream. Since the generated images are not exact memories of the training data but rather a sort of general impression, we also felt that it represented an imaginary alternative history where the precise artifacts of time were different, but similar in overall form and feel.
In that way the final projection room features both the actual data, organized as a giant 3D map, as well as hallucinated versions which represent alternative histories.
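Exploring a GAN's latent space typically means walking between random latent vectors and rendering an image at each point along the path. A minimal sketch of such a walk is below; it uses spherical interpolation (slerp), a common choice for latent walks because it keeps intermediate points on the shell where a Gaussian prior concentrates its mass. The generator itself is omitted, and nothing here is taken from the actual Archive Dreaming code.

```python
import numpy as np

def slerp(z0, z1, t):
    # Spherical interpolation between two latent vectors: follows the
    # arc between them rather than the straight chord, so intermediate
    # samples keep a norm typical of the Gaussian prior.
    u0 = z0 / np.linalg.norm(z0)
    u1 = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0
            + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_start = rng.standard_normal(512)   # 512-dim latent, a typical size
z_end = rng.standard_normal(512)

# Eight evenly spaced points along the arc from z_start to z_end.
path = [slerp(z_start, z_end, t) for t in np.linspace(0.0, 1.0, 8)]
# Each vector in `path` would then be fed to the trained generator to
# render one frame of the dream-like, drifting image sequence.
```

Because the generator is a continuous function of its latent input, nearby points along such a path produce smoothly morphing images, which is what gives latent-space walks their fleeting, memory-like quality.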

EONS from Mike Tyka on Vimeo.

Teresa Ruffino: Given your experience as someone who works with algorithms from both a research and an artistic point of view, what do you think is the next step or challenge for this practice, and is it something that you are personally going to explore?

Mike Tyka: Yes, I definitely believe that neural net-based generative algorithms as artistic tools are still in their infancy and have yet to really reach their full potential as a means to express artistic thoughts and ideas. I look forward to seeing more people use this medium and explore it and I think we’ll see some exciting and unusual results from that work.

Most of what I’ve seen so far is quite abstract work; what I have seen a lot less of is work that tells a more specific story. Recently I’ve been most interested in pushing that particular boundary, and some of my experiments in this direction have become a short video piece called EONS, which uses neural net techniques but is really about our relationship with nature and the planet over large timescales.

I used a pre-trained network called BigGAN for this, but I manipulated it in certain ways to create a quite particular set of moving pictures that tell a specific story. This is very much ongoing work and I hope to release a few more experiments of that kind this year.