
I Am An Altar

I Am an Altar is an installation and workshop project developed as part of SPACE studios’ Future Human residency programme.

The installation suggests that Amazon's Alexa voice assistant could speak in tongues. The work draws on two sound sources: a neural network trained to reproduce the sound of humans speaking in tongues, and audio produced during a workshop in which members of the public used a neural network to create imitation versions of their own voices. The work is triggered when the system is asked a question: the three Alexa units respond with a combination of these sounds. The resulting babble evokes the multiplicity of voices in glossolalia and the gaps in comprehension between humans and AI.

AI is often depicted as a human-style intelligence - a chess player or robotic helper. This work focuses instead on its radical otherness. It does this by drawing on what happens when humans encounter another form of radical otherness - the divine. Whether human, machine or Alexa is god or supplicant is left ambiguous.

Workshop

The residency also included a participatory workshop in which participants used AI to train an imitation version of their own voice. These cloned voices were then combined with short-form creative writing exercises to make each participant's computer-voice speak.

The voice offers a tool to think through wider issues about our emerging relationships with AIs. Speaking to a smartphone or home assistant has become a common way of interacting with AI software, and holding a conversation with a synthesised voice is a familiar trope. At the same time, in humans, the voice is deeply tied to a physical body, with different mouth, throat and diaphragm shapes producing different pitch and harmonics. When the two intersect and we allow an AI to borrow our voice, we open a window onto AI's embodiment, or lack of it. We also begin to think about the hybridity enacted by deep learning systems trained on enormous amounts of our personal data.

The workshop was a practical exploration of the uncanny possibilities of using voices synthesised from our own. The technique points to the possibility of 'deepfake' voices. It also ties into the history of ventriloquism, which is familiar as a form of entertainment but has its roots in possession and prophecy. Ultimately, by putting words in the mouth of our cloned AI voice, we asked what happens when we untether our voices from our bodies and lend them to another system.

Sections of participants’ voice recordings were used in the final installation.

Documentary

To record the thought process behind the work, I also made a short audio documentary. Listen below.

A short documentary about the Future Human residency at SPACE, London. How can we approach Artificial Intelligence using the voice as a symbol for identity and discourse? What kinds of voices can and could AIs have?
