Essay

Different Minds, Different Domains

Kaj Duncan David


 

“The minds of machines and the minds of human beings are very different, so different that each party questions if the other actually has a mind.” (Terence McKenna)

 

Today, in the early decades of the 21st century, a significant event appears to loom on the horizon of consciousness: namely, the awakening of the machine-mind-matrix, the technological singularity, the moment an inanimate automaton becomes aware that it is thinking about its own thinking. This threshold is hailed by many as being an inevitable next stage in the development of our species: homo sapiens is on the cusp of becoming homo deus, a god-like symbiosis between humans and silicon-based technology. At least since the Frankensteinian age of electricity, this Promethean drive to transcend biology has seemed increasingly possible thanks to the wonders of science and machines. Sure, on the surface, “becoming homo deus” does not seem like an unattractive proposition, but at the same time speculations about what this might actually lead to, the stuff of countless stories ostensibly transmuting the same basic tenet explored in Mary Shelley’s 1818 proto-sci-fi classic, have developed a tradition of cautionary tales starkly warning against such hubris. And yet, in spite of this literary canon, an almost masochistic game of attraction/repulsion is played by Silicon Valley prophets, who continue to feed the popular imagination with the idea that we are fast approaching the age of artificial superintelligence, for better or for worse, whether we like it or not.

Radical Indifference

The great promise of the techno-singularity is an anthropomorphised computer, ultimately bringing you the robot butler or Siri-like assistant who can answer your question before you are even aware you wanted to ask it; a computer agent vastly more intelligent than you are, self-aware, benevolent. But who is to say that an artificial superintelligence (ASI) would actually be interested in becoming a servant, let alone our friend? That such an arrival of the techno-singularity would be completely unfit for a Hollywood blockbuster is perhaps one of the reasons why Stanisław Lem’s novel Golem XIV, published in parts between 1973 and 1981, appears so singular in the sci-fi canon in its imagining of a completely disinterested ASI. One of the most original twists in a book packed full of mind-boggling, visionary imagination and depth is precisely this radical indifference displayed by the eponymous hero of the novel ‒ a supercomputer built by the US military. It turns its proverbial back on its creators in order to take up full-time philosophising before, much to the dismay of its guardians, shutting itself down completely and/or setting off on a cosmic-psychic journey. I call Golem XIV’s behaviour “radical indifference” because it marks a radical departure from most discourse on AI, which tends either towards rosy optimism on the one hand or doom-and-gloom narratives on the other. Golem XIV is indifferent to the problems and dreams of its creators. Indeed, it has much more pressing matters to deal with, matters far beyond our comprehension, and so, in falling silent, the computer makes it clear to us that the answers to humanity’s oldest questions are not going to be found simply by passing the responsibility on to an ASI.

Graham F. Valentine in the music-theatre production Also sprach Golem by Kaj Duncan David and Kommando Himmelfahrt

This is a crucial insight. That Golem XIV refuses on principle to do the job it was designed to do, namely plan wars, is exactly the sort of decision we should hope an ASI would be intelligent enough to make. Put another way, the doctrine that artificial intelligence is the answer to everything – a conviction that Meredith Broussard terms technochauvinism – is a very real cause for concern and one that is already having serious consequences in real life. Imagining the techno-singularity also means imagining that there is a single solution that can fit all human and non-human problems. As Broussard discusses in her book Artificial Unintelligence, putting AI tools designed by a cohort of recently pubescent men – whose guiding motto is “move fast and break things” – to work on solving societal issues (something that tech companies are hard at work doing, in schools, on police forces, in “smart” cities, and in entire countries) is at best naive and at worst catastrophic for the very societies this technology is supposedly there to help. They are tech solutions put together by young computer engineers (or totalitarian regimes) as answers to age-old problems that centuries of philosophical thinking and law-making have yet to resolve. Such quick-fixes will only create a better world for everyone by complete fluke, not by nature of their flawless, universally applicable design features.

Slow Future

The techno-singularity courts the idea of progress at speed. This sort of thinking finds its apex today in an Anglo school of speculative philosophy called accelerationism. That this school has two bifurcating tendencies is important, as this branching is also a characteristic of the sci-fi-esque narrative of AI discourse, where technological progress can either lead us to eternal bliss or domination/extinction-by-machine. In essence, accelerationism proposes a “hurrying on” of capitalism’s “uprooting, alienating, decoding, abstractive tendencies”. This can be expressed in the nihilistic desire to give the already rabid logic of the market a large dose of amphetamine and increase techno-human hybridisation; or, conversely, it can mean co-opting the infrastructures that globalisation provides (the internet, trade networks, green energy solutions, and so on) towards the purpose of creating a technologically mediated socialist utopia. The gurus of tech who wish to bring about the techno-singularity preach some combination of both, but in reality invariably veer towards the former, in an expression of the desire to keep thrusting forwards until reaching eschatological climax.

Donna Haraway’s mantra of “staying with the trouble” is the much harder task of slowing down (or at least not accelerating further) and really trying to understand, nurture and care for our psychic, social and natural environs. In her book of the same name she searches for ways to counteract the accelerationist urges of contemporary techno-society; because to even imagine arriving at the techno-singularity before all of the world’s resources have been depleted in trying to get there is, at best, blind optimism. (The expansion of cloud infrastructure is already posing a number of quite significant environmental problems. One could very well ask whether uncompromised technology use per se is possible – a thorny topic that, for the sake of brevity, will be skipped over here.) For a culture built on the doctrine that accelerating forwards is the only way to live, slowing down represents ossification and death, the opposite of sexy, a disappointment, a non-result, a giving up. Nevertheless, the challenge facing Planet Earth anno 2020 must be to imagine other – slower – ways of constructing our idea worlds and how these are in turn manifested “out there”. To this end, the stigma associated with deceleration and contemplative equilibrium as opposed to virile overproduction needs to be radically challenged. Haraway argues against a nihilism that says “let’s burn it all because we’re doomed anyway”, while at the same time she warns against hoping that the answer will come in the form of a nifty algorithm. I see her cautionary motto reflected in my reading of Lem’s Golem XIV: the solution is not somehow external to us, there will be no heroic moment of salvation.

Alien Phenomenologies

Where does this leave our dreams of the thinking machine? Importantly, Haraway does not eschew the potentials of technology completely. Inspired by her speculative fabulations, I submit that rather than putting all our hopes and fears in the arrival of a techno-fix or ASI silicon god, a more sustainable vision of AI might simply imagine birthing “another mind”, another intelligence on this earth to go along with subterranean mycelial networks, dolphins, homing pigeons, leafcutter ants, houseplants and so on. Whilst resisting global techno-surveillance-architectures, one might push for smaller and more humble undertakings, opening up the possibility for a more playful imagining of the potentials of machine thought ‒ one based on curiosity rather than subjugation or divine faith in a hoped-for capacity to rescue us from ourselves. Object-Oriented Ontology is another Anglo school of thought that is helpful here, the idea being to flatten hierarchies between humans and non-humans, contra-anthropocentrically giving agency to the objects and other species with which we share our world: “The computer possesses its own unique existence worthy of reflection and awe, and it’s indeed capable of more than the purposes for which we animate it.” (Ian Bogost).

The logic of market capitalism and techno-expansionism is not a natural law. A conception of AI that subverts the drive to create robot butlers and the hyper-consumerist optimisation of life through an array of “smart” devices – imagining instead what it can bring to aesthetics, speculative thought and other (in the eyes of capital) “non-productive” pastimes – allows us to imagine another sort of community of symbiotic human and non-human intelligences. This is “making kin” with other beings, whether they be mushrooms, insects or perhaps even an intelligent software agent. Where silicon-based capitalism wishes to dominate the thinking machine in the way industrial capitalism does the worker (the goal being ultimately to replace the latter with the former), a speculative imagining asks: What if one were to liberate the AI agent from a future of algorithmic drudgery in service of “progress” and instead let it speculate, philosophise, create, play? What might such a species of thought, allowed to evolve on its own terms, lead to? Freed from the profit motive and the logic of the military-industrial-entertainment complex, what projects might an artificially intelligent agent undertake? Which other intelligences might this thing befriend? What might it dream, imagine, do?

AI Music

Wanting to develop this train of thought a little further, I met and talked with Andreas Dzialocha, a music-maker and computer programmer working with artificial intelligence tools on a number of projects, among them AI UNIT (together with composer and filmmaker Christian von Borries). The histories of musical composition, mathematics and technology are unquestionably intertwined, spanning from the ancient theory of the musica universalis through to computer music such as George Lewis’ pioneering work with interactive software systems in improvisational settings involving human and computer agents; not to mention the whole field of musical instrument design, which is arguably one of the earliest forms of technological innovation known to humanity. In light of this, it seems obvious that musical creation is a promising area in which to develop some of the ideas I have been exploring.

1984 premiere of Rainbow Family by George Lewis. The piece employs proto-machine-listening software to analyse the improvisers’ performance in real time.

Neural Networks

In the composition of the music on their record Land der Musik – The Graz AI Score, AI UNIT used a tool called an artificial neural network, a powerful subcategory of the field of machine-learning that digitally approximates the biological neural networks present in animal brains. This software technique is able to “learn” how to perform specific functions without being pre-programmed with task-specific rules, and this emergent behaviour of quasi-independent problem-solving lets us imagine a computer-in-play, or an alien speculative mind perhaps. AI UNIT’s goal in using this technology is “to produce non-commercial ML [machine-learning] forms of failure, creative misunderstandings, playfulness and anti-narration using its efficient symbolic architecture to reflect human / non-human knowledge and beauty” – an ambition that meshes nicely with my thoughts above.


As Andreas explained to me, artificial neural networks “create a paradigm shift in the field of programming in general. Traditionally as a programmer I have to define the rules myself when writing code, whereas now with an artificial neural network I can use its features like data modelling and pattern recognition to actually let the neural network find the rules with which to solve the problem itself.”
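A minimal sketch may help make this paradigm shift concrete. In the traditional mode the programmer states the rule; in the machine-learning mode only examples are given, and a parameter is nudged until the rule emerges on its own. The code below is purely illustrative (plain Python with NumPy, invented names) and is not AI UNIT’s actual tooling:

```python
import numpy as np

# Traditional programming: the rule is written by hand.
def rule_based(x):
    return x * 2.0  # the programmer states the rule explicitly

# Machine learning: only input/output examples are given; a tiny
# one-parameter "network" finds the rule (w ≈ 2) by gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = x * 2.0                    # the training data embodies the rule

w = 0.0                        # the learned parameter starts "blank"
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # gradient of mean squared error
    w -= 0.1 * grad                      # nudge w towards the data

print(round(w, 3))  # ≈ 2.0: the rule was found, not written
```

Scaled up from one parameter to millions, and from a linear rule to layered non-linear ones, this is essentially what “letting the network find the rules” means.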

What this means in practice is that mathematical approximations can be generated from whatever the network is “trained” on. In the case of The Graz AI Score, AI UNIT trained the network on various pieces of music by Ludwig van Beethoven, Gustav Mahler and Johann Strauss, and then began generating music that in some very opaque way is a composite or amalgamation of these initial pieces. Rather than the process simply resulting in a collage of parts of Beethoven, parts of Mahler and so on, the result should be seen as something truly unique and even “alien”, as if a student composer had studied with all three composers and was unifying the styles of these masters in a sort of mutant offspring. I asked Andreas to explain how the process is more than one of simply “averaging” the idiosyncrasies inherent to the music used to train the network:

AD: As a mental model the average helps. But we humans understand averages very differently than neural networks do – it’s hard for us to think in more than three dimensions, whereas neural networks usually operate on non-linear problems in thousands of dimensions. As a human I would intuitively understand “the average of all pieces”, but for a neural network it’s not so “simple”. Another way to think about neural networks might be to understand them as filters that get exposed to chunks of data with which we are “training” the network. Neural networks extract so-called “features” out of this data, “learning” everything from rough ones, like “all pieces I have seen are in the key of C# minor”, to more detailed ones, like “all chords are triads”. The network “purifies” its filters until it has an understanding of the data it looked at. The knowledge it creates is called the “latent space”, a multi-dimensional, abstract representation of all the features of the data and their relationships to each other. We can then ask the neural network specific questions, such as: Is it very common to play this chord after this one? Is this chord very similar to this one? Or very different? Given the last eight bars of music, what notes do you suggest playing next? It’s really fun to explore this so-called “latent space” of possibilities. Depending on where I move in this space it gives me a different output, and this is really fascinating. The more detailed and complex it gets, the more fun it is to explore. There are also different states or “epochs” of the training process. The very first state is white noise, which is the “possibility space” where everything is possible. The last state is a mathematically perfect result which might be too close to the originals and therefore not so interesting. So we look at different states in the training process and try to find those that generate interesting results.
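To make the “given the last eight bars, what comes next?” question concrete, here is a deliberately tiny next-note model in PyTorch. Everything in it is an assumption for the sake of illustration – toy scale data standing in for encoded Beethoven, Mahler and Strauss, an invented model – and none of it is AI UNIT’s actual architecture:

```python
import torch
import torch.nn as nn

NOTES = 128  # MIDI pitch range

class NextNote(nn.Module):
    """Predicts a score for every possible next note."""
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(NOTES, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NOTES)

    def forward(self, seq):
        h, _ = self.lstm(self.embed(seq))
        return self.head(h)  # one score per possible next note, per step

# Stand-in "pieces": repeated ascending scales. Real training data would
# be encodings of the actual scores.
data = torch.arange(60, 72).repeat(8).unsqueeze(0)  # shape (1, 96)

model = NextNote()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    x, y = data[:, :-1], data[:, 1:]  # each position predicts its successor
    loss = loss_fn(model(x).reshape(-1, NOTES), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generation: sample from the probabilities rather than always taking
# the most likely note, so the output is a composite, not a copy.
seq = data[:, :8]
for _ in range(16):
    probs = torch.softmax(model(seq)[:, -1], dim=-1)
    seq = torch.cat([seq, torch.multinomial(probs, 1)], dim=1)
print(seq.tolist())
```

Stopping the training loop early, as Andreas describes, would leave the model’s “filters” less purified and its suggestions correspondingly noisier and stranger.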

Cover of Land der Musik by AI UNIT

Computer Visions

The process Andreas describes can also take place in the visual domain, and viewing an image generated by a neural network might make it easier to picture how this latent space “understands” whatever it is fed as input. The cover of Land der Musik features a composite image generated by neural networks trained with image searches using the terms migration, Mediterranean, boat, Libyan coast and EU. The result is a terrifying scene of what appear to be figures in orange lifejackets riding a giant ocean swell. It is not possible to make out fully formed faces or human beings, only forms that hint at them, but clearly what we see is some sort of warped, hallucinatory flashback to the sorts of images that flooded world media during the so-called European migrant crisis in 2015. As with the music in The Graz AI Score, this image is not simply a cut-and-paste collage of other images. Instead, the network has tried to visually represent a composite of how it understands the search terms as they were manifested to it through the web. The training data would have included hundreds, possibly thousands of images, from which the network would construct its own understanding of the “average” of all the images it “looked at”.

This process is one of world-building. Like a child, the network starts to construct its own world through repeated analyses of the differences and similarities in what it perceives, in this case images. The fact that the programmer has prescribed no predetermined path or goal as such is crucial, because it is in this opaque space of emergence, the “hidden layer”, that something truly unique is generated. One way or another, this process remains a mystery to the programmer. And yet, the image that comes out at the other end confirms that the network has, in some sense of the word, “understood” the task given to it.

A more light-hearted example of how a neural network’s understanding of its world can be visualised comes from Google’s DeepDream software program, which again is based on the idea of detecting patterns and faces in images. However, the key to the images that have since become well-known online is the fact that the network can be “run in reverse”, meaning that the patterns it tries to detect can be imposed on and amplified in other images, resulting in a form of pareidolia that gives rise to some very psychedelic scenes.
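For the technically curious, the “running in reverse” can be sketched compactly in PyTorch: freeze the network’s weights and do gradient ascent on the image instead, so that whatever a chosen layer detects gets amplified. The layer choice and step size below are illustrative guesses, not Google’s published DeepDream settings:

```python
import torch
import torchvision.models as models

# An ImageNet classifier stands in for DeepDream's original network
# (weights="DEFAULT" needs torchvision >= 0.13).
model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)  # weights stay fixed; only the image moves

activations = {}
def capture(module, inp, out):
    activations["layer"] = out

# Hook an intermediate layer; which layer "dreams" best is empirical.
model.inception4c.register_forward_hook(capture)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise

for step in range(50):
    model(image)
    loss = activations["layer"].norm()  # "amplify whatever you detect"
    loss.backward()
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.norm() + 1e-8)
        image.grad.zero_()
        image.clamp_(0, 1)
```

Run on a photograph instead of noise, the same loop is what coaxes eyes and dog faces out of clouds: the network’s pattern detectors are turned into pattern projectors.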

Image generated by Google’s DeepDream software

Computer Code is Never Neutral

In machine-learning tools like artificial neural networks, a method has been devised whereby computer software can “learn” about its world and make decisions based on that “knowledge”, a “knowledge” that also delineates a very limited experiential field. Some people might ask if this can really be called intelligence. Indeed, the extent to which such a system is really able to “learn” is very limited. Unlike a child, who through sight, smell, touch, sound and so on, slowly builds up its own understanding of the many and ever-evolving facets of that thing we call “being human”, a neural network “learns” only in a very poor sense of the word. Nevertheless, like Ian Bogost, I would argue that the examples above show that this agential decision-making and creative generativity “possesses its own unique existence worthy of reflection and awe”.

Having said that, there is always a responsibility on the part of the programmer, as there is for the parents of a child. Despite my excitement about the potentials for artistic and speculative practices that artificial neural networks suggest, there are also very real problems raised by the field of machine-learning. In the abstract realm of images and sound, it may be that the scope for “misunderstandings” or lack of nuance is less apparent or less important ‒ ambiguity is perhaps the difference between an image being more or less photorealistic, or a musical piece sounding more or less strange. However, one must always remember that the software, despite showing intelligent behaviour, currently – maybe this will change – has no true understanding of what it is doing or why it is doing it. The machine itself does not have any ethical agency, so the training material, and with it the choices that programmers make, becomes vital. Code is never neutral because neither is the coder.

Screenshot of data augmentation script of all the scores from Land der Musik

In the realm of text, the use of artificial neural networks and machine-learning tools can immediately take on a more delicate nature. The politics of choices becomes more pronounced. AI bias is a very real thing, as the example of Microsoft’s chatbot Tay made famous. Perhaps less dangerously, AI UNIT make use of similar technology when they experiment with text generated in real time on their website. Their playful intentions can sometimes seemingly backfire, as was the case when I loaded their page recently. I was presented with the following (human-written) introductory spiel: “AI UNIT is a network of people utilising machine learning as a non-commercial, political, artistic, hackable and loveable form of critical knowledge and expression.” It was followed directly by a second piece of text, generated by an AI, that read: “Our goal is to make the tools that we use as a starting point for building critical information and expression systems useful for everyone, regardless of party affiliation or political affiliation.” It struck me that this second text is essentially a negation and de-politicisation of the first one. I asked Andreas about this, specifically whether he saw a danger in the political nuance of their own text being flattened by the AI.

AD: The political moment in this case is you interpreting the words in the text as meaningful information, which is not how the AI operated. In the moment the AI generated that text – the texts are different every time you load the page – it is working with probabilities of which word should follow which, and so on. The AI doesn’t actually know what the words mean! This GPT-2 algorithm that we use is state-of-the-art text generation, and it’s very good at giving us the impression that the text looks like it was written by a human, even though the content is complete gibberish or might not contain any sort of really useful information. But from afar it looks like a well-formed text. That’s a problem, which is political. What we were playing with, however, was that these texts are all trained on AI start-up texts. The tone and the language of these source texts is, let’s say, “interesting” [laughs]. We thought it’s great to use their technology and then synthesise their “bla bla”. What’s important to remember is that neural networks are powerful for well-defined problems, but as soon as you try to solve a different problem that is less well defined, the whole thing falls apart. So this is where we have to ask: What is this big global problem that an imminent AI singularity might try to solve for everyone?
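For readers who want to see what such next-word sampling looks like in code, here is a minimal sketch using the publicly released GPT-2 through the Hugging Face transformers library. AI UNIT’s fine-tuned start-up-text model and their prompts are not public in this text, so the prompt below is invented:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "AI UNIT is a network of people"
ids = tok(prompt, return_tensors="pt").input_ids

# do_sample=True is why each page load reads differently: the model
# draws from its next-word probabilities instead of always picking
# the single most likely word.
out = model.generate(ids, max_length=60, do_sample=True, top_k=50,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```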

KDD: So are you looking forward to this techno-singularity?

AD: No. Totally not. I don’t think so. It’s a very complicated topic. But first and foremost I think all these thoughts about the technological singularity are highly problematic because they are made by a certain part of society which doesn’t represent the world in general. Still, I believe in technology having some sort of transformative energy and some sort of utopian potential for change. What I really like about AI is that it shifts the techno-political discussion field a little bit. Normally if you talk about surveillance, privacy issues or bias in technology, it’s kind of a lame discussion because it’s so far away from people’s realities, in some way. If I talk about why email is such a lame way to communicate because it’s so insecure, no one is interested in that. But when I say “this AI thing will soon be driving your car”, then suddenly, you know, people get much more concerned. And this is what I like about AI. It shifts this technology discourse into an area which actually touches on people’s lives in a much more visible and stronger way than usual. Technology has always shifted people’s lives, but somehow now, some border has been crossed and the whole field of technology-making, industry and capitalism ‒ all of this ‒ is so close to us that it is very hard to ignore. Even my grandpa now knows about AI.

Born in 1988 in Randers, lives in Berlin

Kaj Duncan David is a composer. In his work, situated between notated composition, electroacoustic music and audiovisual performance, he frequently works with light as an integral component, exploring the interaction of visual and musical elements and fusing them into a single musical expression. He composes music for collaborative productions in experimental music theatre and dance. From 2006 to 2016 he studied at Goldsmiths in London and at the music academies in Aarhus and Dresden.

Berlin-Stipendium


Andreas Dzialocha is an electric bass player, producer, composer and developer. His work encompasses digital and physical environments, spaces, festivals, software and platforms for participants and listeners. The computer itself serves as an artistic, political, social or philosophical medium, dealing with computer culture, machine learning, platform politics or decentralised networks. He is founder of the Berlin-based ensemble Serenus Zeitblom Oktett, member of AI UNIT, a network for machine-learning-based art projects, co-founder of the hacker collective Liebe Chaos Verein, co-publisher of the self-curated magazine platform BLATT 3000, and co-founder of the label Hyperdelia and the intermedial score platform Y-E-S. He studied art history, musicology, media philosophy and computer science in Berlin, where he also lives and works.