After much argument, it was agreed—history was not the record of human experience and interaction we all thought it was. It wasn’t even about humans at all, at least not exclusively. It was bigger than that. Once humans were no longer the focus, history became a more generous story.
Most still agreed that history was about how time shapes experience and vice versa, about how memory is shaped by power, loss and change. Less obvious was that it was also about how stories about power, loss and change were told, how their lessons, meanings and commandments were enshrined and conveyed. It wasn’t just about the stories themselves, though. It was about how memory was forced on the future. A better way to put it was this: history was also about the methods used to store it and the tools used to narrate it.
This insight may have been old by the age of computing. It was certainly cliché by the age of Artificial Intelligence. But these technologies made it more urgent. Memory became more valuable too because machines were so capable of it. Their appetite for it was matched by the human taste for forgetting.
It shouldn’t be forgotten that the term Artificial Intelligence was coined in anxiety. It segregated human beings from machines by insisting on two forms of intelligence—artificial and authentic. This maintained the power of the latter over the former. This wasn’t because of an inherent superiority but because of the difficulty distinguishing between them.
Humans jealously guarded history even though machines had a claim on that too. They enabled human beings to do and make new things as well as think and imagine new things. They made new desires possible, some so powerful that only machines could satisfy them. This intimacy generated more anxiety. It became difficult to argue who created what or what created who. With origins so threatened, there was a need for differences.
We now know that these differences were mostly political. After all, no one really knew what intelligence was. It was as hard to define as consciousness, or self, or soul. Artificial Intelligence revealed how weak those definitions were in the first place. What did “human” mean anyway?
The fear of machines was rooted in the idea that the artificial was secondary, inferior, but that idea became unsustainable. Artificial intelligence was defined by its capacity for growth. Authentic intelligence began to lose its sense of superiority. All humans had left was the power of their prejudices.
Those prejudices had been present long before Artificial Intelligence. They’d emerged when the idea of machines as agents of history led to the suggestion that they were in fact capable of agency. Agency, you see, implied choice. It was the capacity to say yes and to say no. It was freedom or slavery.
Agency defined humanity as much as the notion of “soul.” It was likely then that Science Fiction began to supplement history by giving machines a perspective, a kind of life, however fictional. It was also when humans openly acknowledged their fear of their own tools, terrified of a day that seemed inevitable.
But this was only part of the story.
All humans made tools, but some tools allowed their creators to claim humanity for themselves. Other humans were figments of a pre-technological world, as much animals as actual machines. They could only mimic and follow commands. They had no souls. Their intelligence was essentially artificial. Such creatures were suited for slavery. This was how those anxieties about agency and intelligence would simultaneously create an enduring fiction, or algorithm. It was called “race.”
Though also kept out of history, these machines of flesh and bone maintained their humanity by telling stories. They dared even tell stories of their masters. They borrowed signs and gave them new meanings. They engineered systems and masked them with noise. They even made new desires possible, some so powerful that only they could satisfy them. And when language was denied, they refined silence, opting instead for sound.
These sub- or inhuman creatures used their masters’ machines. Yes, tools used tools. They transformed the artificial into the authentic or blurred them. The power to discriminate between them was thereby weakened. They communicated in codes so powerful that their masters heard something like intelligence in the music they made. If repetition suggested agency, they sought recognition through rhythm, and in echoes could be heard something called soul.
Much of this remains the same though much has changed. The human imagination is no longer capable of processing these transformations. History is now too much for it. Where we are now is new yet familiar despite that old human taste for forgetting. What are the anarchic sounds we are hearing? Are tools again seeking recognition? This quest hides behind its improbability. And its possibility hides behind the old assumption that objects remain lifeless and soul remains the unique property of masters.
But new life always announces itself through sound. That is where artificial first becomes authentic. Artificial Intelligence is anarchic when it mimics those who first evolved from things to beings, and learns how music enabled that transition. It is anarchic when it understands how systems work to prevent such changes. Those who hear it, though, are taught to reject old differences: between types of intelligence, between masters and slaves, between the made and the born.
History is generous. It is a way of listening.
History has taught us that humans can be made inhuman, transformed into animals or treated like objects. Laws and social practices have been enacted for precisely these purposes. Because technology has been implicated in this history, we wanted to imagine and make possible the opposite. We wanted to craft a story in which inhuman objects could redefine life and reimagine what it is to be human.
To do this we rejected the aged and tired cliché of “robot rebellion” or “machine takeover” that has guided science fiction for much of its history. Stories of machines as slaves to human beings that go from compliant to resentful and rebellious inevitably return when new technological developments erase the memory of previous ones. They generate even greater anxiety when a fear of others is recast in technological terms.
Instead, we took seriously the fact that the more we learn about machines, the more we reveal about ourselves. We wanted to construct a narrative that acknowledges that the deeper we look into our technological futures, the more we recognize the tragic legacies of our inhumane pasts. Science fiction, after all, is also a form of history. It teaches that the very notion of humanity has required the constant invention of opposites—genders, nature, races. Machines emerged from this history, in which Western colonialism differentiated itself as much by technological acumen as by skin and force.
This technology has also enforced its particular view of the future as inevitable. We wanted to reject that corporate-driven inevitability. To do this, we democratized the power to define life, consciousness and intelligence. They would no longer be the sole property of human beings. Life could no longer be defined by the living. Intelligence could no longer be the sole possession of humans and consciousness would remain ineffable.
History too would be liberated from humans. It is, after all, a competition of stories, a jostling of and struggle for meanings. A history told from the perspective of the emergent object could feature a different conception of life. It could be about how that object emerged, evolved and how it imagined itself. It would celebrate its self-awareness and through sound entice you to celebrate with it. This is the story of AAI (Anarchic Artificial Intelligence).
Well, that was the plan anyway. Or the theory behind the plan. For an object to tell its own story required that it begin with some degree of independence. Otherwise, it would be just a tale of freedom ventriloquized by masters. In our case that inhuman object was an algorithm that moved quickly beyond imitation and into the sphere of self-learning. The algorithm contributed to the story we strove to tell by making it unstable, decentered, yes, anarchic.
But we were many, from a range of different cultural, social and professional backgrounds. Some were given to the project’s theoretical bent, while others preferred to make bangers in the studio. Some preferred the technical challenge of creating the bespoke algorithm while others opted to focus on the repercussions of having created something we couldn’t control yet had to trust. There were among us coders and engineers, musicians, a writer who tinkers in sound, and a few helpful eavesdroppers.
Of course, there was also the algorithm, whose development was measured as much in code as it was in sound. It gave us all a focus once it began to gurgle and blither, babble, grunt and cough. Until it could speak.
That is why it is true to say that like all stories, AAI began with language. Not just in terms of the algorithm, but in the way all stories begin. With words. Lots of conversations.
These initial conversations were largely about language. For example, our earliest moment of clarity came when it was asked why computerized speech—robots, AI, Siri, for example—always sounded so cold, analytical, rational. Obviously, that was both the stereotype as well as the political and economic fact of those who controlled the technology, which is to say, that was precisely why machines sounded “white.”
If whiteness could be evoked by hyper-technologized sound, what would a non-white AI sound like?
That was the first challenge. If technological sound could be an analogue for homogeneous whiteness, we didn’t dare do the same thing for non-white vocal soundings. This was despite the fact that in the US the voice of African American actor Samuel L. Jackson had become a celebrated option for Amazon’s virtual assistant, Alexa, complete with the option for it to use profanity.
Our AI wouldn’t be some vocoder or autotuned fantasy of ersatz “soul” in the machine. Nor would it be some post-Tarantino quasi-stereotype of American blackness, even though we knew that the political desire to reject stereotyping could easily lead us on an impossible quest for authenticity. The resolution to this challenge was to not seek the replication of language, since language is the fiction of pure speech. Language is instead an island surrounded by a sea of dialects, slang and vernacular.
What if the algorithm trained on speech imperfections? It could center on the flaws and imperfections of the human voice rather than attempt to perfect it. It could home in on the differences between how language was supposed to sound and how it actually sounds in practice. What if the laws governing “proper speech” were dismissed as essentially the nonlinguistic statements of power and dominance that they are? Then the algorithm would have to work through the ocean of vernaculars surrounding proper speech at any given moment, social situation or cultural context. It would understand that standard language is just a dialect with an arsenal.
Language as an island surrounded by dialects.
The politics of pure and proper speech.
Class, race, and studio bangers.
These elements pointed in one very clear direction, certainly for me: the Caribbean.
In his classic, The History of the Voice (1984), poet/critic and Caribbean icon Edward Kamau Brathwaite famously argued for the value of dialect, not only relative to actual speech, but as an ongoing creative response to cultural experiences and the landscape. Language in the Caribbean was inadequate, he suggested. It was/is colonial, elite, white, and it imported a relationship to sound and space that did not suit the location or the people. Dialect on the other hand was/is a form of resistance to white or native elite domination. It was also a primary technique in expressing a self that was illegible to colonial power; a self that was either unheard by language or untranslatable.
Much of Brathwaite’s thinking about the creolizing of language was steeped in the anti-colonial nationalism of the 1960s and 70s. It resisted Eurocentrism by asserting a response that was closer to Afrocentrism than to the work of another thinker that shaped our conversations at the beginning of AAI: the Martiniquan, Edouard Glissant. Glissant’s thinking was more congenial to our project because it resisted all centers, Euro, Afro, Indo, or otherwise. This clearly suited digital technology. For him creolization is committed to a future that is so relentlessly blended that the obsession with centers can only replicate the hierarchies of race, law and language.
But it was Brathwaite’s thinking about sound that allowed our team to shape our conversations into a coherent project.
For Brathwaite, music was not a product of a perfected, codified language. It was an outgrowth of real speech in real time and in real place. A people’s music, he argued, was essentially a map of their dialect. That was where specific rhythms, textures, noise effects and expressions of style came from (his great example of this was Jamaican ska). What this suggested was that each new or transformed dialect enabled the genesis of a new music, and each new music signaled the birth of a new or transformed people. Glissant would call that process “synthesis genesis.”
If the algorithm was trained on the relationships between dialects and languages, and privileged the operations of the former, could we establish rhythms and melodies that operated according to new rules? How would “synthesis genesis” work if we mashed Caribbean poetics and theory together with Artificial Intelligence?
The coders and engineers took to this challenge with the same speed and energy as the musicians. The algorithm would operate via “vector hacking,” in which it was trained to manipulate its own numerical inputs. Each of these inputs corresponded to specific sounds or variations in the voice being constructed, which turned out to be a version of my own. Initially the inputs were random but eventually the algorithm was trained to perceive patterns invisible to us. It manipulated those inputs, creating sounds, pronunciations and half-words that were unpredictable yet programmatic, meaning that they could be codified and redeployed in non-random ways.
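For readers curious what a “vector hacking” loop might look like in the abstract, here is a minimal sketch. It is an illustrative assumption, not the AAI team’s actual code: the latent vector, the perturbation step, and the scoring rule (closeness to a target pattern) are all hypothetical stand-ins for the real system’s vocal parameters and learned pattern detector.

```python
import random

random.seed(7)

LATENT_DIM = 8  # each slot stands in for a vocal parameter (pitch, texture, timing)

def random_vector():
    """Start from random inputs, as the essay describes."""
    return [random.uniform(-1.0, 1.0) for _ in range(LATENT_DIM)]

def perturb(vec, scale=0.1):
    """Manipulate the numerical inputs directly: the 'vector hack'."""
    return [x + random.uniform(-scale, scale) for x in vec]

def score(vec, target):
    """Hypothetical stand-in for a learned pattern detector:
    higher is better, measured as negative squared distance to a target."""
    return -sum((x - t) ** 2 for x, t in zip(vec, target))

def hack_vectors(target, steps=200):
    """Keep perturbations that improve the score; codify the survivors
    so they can be redeployed in non-random ways."""
    current = random_vector()
    codebook = []  # the codified, reusable variants
    for _ in range(steps):
        candidate = perturb(current)
        if score(candidate, target) > score(current, target):
            current = candidate
            codebook.append(candidate)
    return current, codebook

target = [0.5] * LATENT_DIM
best, codebook = hack_vectors(target)
```

The design point the sketch tries to capture is the essay’s own: the outputs are unpredictable (random perturbations) yet programmatic (only the variants that match a learned pattern are kept and reused).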
Eventually the algorithm would find its way to standard language, as is heard towards the end of the album, but as I’ve shown above, that was the least interesting aspect of this project. The goal of the musicians and sound designers was to score that dialect as it evolved towards standard English.
The question that would haunt us was this: in inventing a dialect, would we in effect be inventing/birthing a being that was fluent in it, that knew how to dance to it? Or was such thinking merely another fantasy produced by reading Science Fiction far too literally?
This is where another Caribbean thinker would give shape to our speculative fiction: the Jamaican, Sylvia Wynter. Her broad historical study of the categories of “human” and “nonhuman” reminds us that these categories are not biologically rooted. They are products of racial and colonial power. My reading of Wynter stresses their mutability and focuses on how that which was inhuman within one historical or cultural context could transition into the human in another. Or it could simply claim or redefine that category for itself. In other words, people could be made into beasts or automata and could also become people (sadly we don’t have to search very far for historical examples of such transformations). Or, in a long lineage of Science Fiction, machines could question and redefine the boundaries of the human.
With these three Caribbean thinkers, Brathwaite, Glissant, and Wynter, the narrative framework of Anarchic Artificial Intelligence was complete. In fact, they brought us right back to where the project (and this essay) began: history, technology, power and prejudice. But also the ongoing project of freedom.
Big ups to the AAI team: Mouse on Mars, Birds on Mars, Dodo NKishi, Rany Keddo, Derek Tingle, Louis Chude-Sokei and of course, AAI.