
“What it means to be human is perhaps not to remain human at all”. Interview with Stelarc

Stelarc is one of the most singular artists of our time. His work focuses on the field of bioart and performance, exploring the capabilities of the human body in relation to technology. The artist has used his own body as a means of research, subjecting it to extreme situations of sensory deprivation and multiplication, or to electronic insertions into the skin. In this new interview, we talk to him about bioart, technology and his long career.

Briefly, can you tell us what bioart is and how it is related to your work and career?

I understood bioart initially as using living cells, instead of inert material, in creating an artwork. It meant artists working in labs, following certain protocols, combining aesthetic and conceptual concerns with scientific methodology. Bioart has since diversified in definition and approach. No longer in white coats, artists have moved out of labs into the natural environment, incorporating such living systems as fungi and slime moulds and using natural incubation rather than lab machines.

My initial interest in bioart came about when I grew rat muscle cells, myoblasts, during a residency at Carnegie Mellon University in Pittsburgh in 1996. At that time, though, I had no artistic interest in exhibiting a petri dish on a plinth. But it did generate the idea of a soft prosthesis, and an extra ear seemed challenging but plausible. This was first 3D modelled as a third ear on the side of my head. It then took ten years to find three plastic surgeons, and to get funding, to surgically construct an Ear On Arm, with tissue ingrowth and vascularisation resulting in the ear construct becoming an integrated and living part of my body. Previously, in collaboration with SymbioticA, the ¼ Scale Ear, 2003, was realised. My ear was scanned and scaled down, a biodegradable scaffold was 3D printed from that data and seeded with human cells. The outcome was a lump of living tissue in the shape of a small ear. With the Partial Head project, 2006, my face was scanned, as was a hominid skull. A digital transplant was done, resulting in a post-hominid but pre-human composite. The 3D printed scaffold was seeded with living cells in an attempt to grow a layer of living skin. It became contaminated within days in its custom incubator and had to be fixed in formaldehyde for the remainder of the exhibition at the Heide Museum of Modern Art, Melbourne.

In the 90s, you were one of the first to propose that the human body would merge with our everyday technology. It seems that two decades later, time has proved you right: we all permanently carry several computers with us (from smartphones to smartwatches), and just yesterday Apple presented yet another device, an augmented reality headset, which blurs the boundaries between the information network and our senses. With the distance of time, how similar is the world today to what you imagined in your projects of 20 years ago? In what ways have you been surprised by how events have unfolded, and in what ways have you been disappointed?

Well, the mentality was not that of someone speculating on an imagined future. Rather, it was one of using state-of-the-art techniques and technologies to explore and interrogate what a body is and how a body performs in the increasingly complex terrain it now inhabits. And oh, much earlier than the 90s. In the late 1960s, I created helmets that split your binocular vision and an immersive, rotating sensory compartment that you inserted your upper body into. Between 1973 and 1975 I made three films of the inside of my body using endoscopy, and in 1976 I began my Third Hand project, completed in 1980. The Third Hand was sophisticated enough at that time to get invitations from the Jet Propulsion Lab in Pasadena and the Johnson Space Centre in Houston to demonstrate the EMG-actuated hand to the Extra-vehicular Activity Group. In the early 1990s I was performing with industrial robots and also performed with my virtual body. From 1995, Fractal Flesh, Ping Body and Parasite were online interactive performances where people in other places could access and remotely actuate the body, the body becoming a crude barometer of internet activity and performing involuntarily to the promptings of search engine images and sounds.

What has happened since is that we have transitioned from wired to wireless connectivity. From mega machines to micro- and nano-scaled sensors and robots. From industrial robots to humanoid robots. Yes, wearable heads-up displays continue to be rather cumbersome and clunky, but research into embedding micro-LEDs in contact lenses promises to display information superimposed on your field of vision without the social problems of wearable goggles or glasses. What is promising with recent AR and XR headsets is the more seductive and intuitive control with gestures and eye motions, increasingly making seamless the operation between actual and virtual modes of operation.

Do you think that the world of design and technology can redesign the relationship with our body and with our senses?

Oh, that is already happening. In both an existential and ontological sense. The body has become a contemporary chimera of meat, metal and code, performing beyond the boundaries of its skin and beyond the local space it inhabits. Our machines amplify our bodily power and propel the body at great speeds. Our sensors and instruments extend and enhance the body’s sensory perception. This generates unexpected paradigms of the world we inhabit. We now navigate from deep time to physical nanoscales to virtual non-places. The body now inhabits abstract realms of the highly hypothetical and of streaming subjectivity. But the digital is not merely the realm of the virtual. The dead, the brain-dead, the yet to be born, the cryogenically preserved, the prosthetically augmented, synthetic life and artificial life all now share a material and proximal existence. What it means to be human is perhaps not to remain human at all.

One development I like to cite is the twin-turbine heart. In 2011, it was implanted into the chest of a terminally ill patient. He lived long enough to fully test this smaller, more robust artificial heart. Interestingly, it circulates the blood continuously, without pulsing. So, in the near future you might rest your head on your loved one’s chest. They are warm to the touch, they are sighing, they are speaking, they are certainly alive – but they have no heartbeat. In 2017 Tina Gibson gave birth to Emma Wren from a 25-year-old frozen embryo. Her mother was 25 years old, the same age as the embryo. It will be possible to keep 50-year-old frozen embryos viable. You might be born on the death bed of your sibling. Or you may be born generations after the death of your biological parents. What this means is that we are disconnecting our reproductive processes from our individual life-spans. And if we can engineer an artificial womb and bring to term a healthy child, then life would not begin with a biological birth. And if we can replace malfunctioning parts of the body with stem-cell-grown, 3D printed or artificial parts, then we need not die a biological death. So how do we define a human existence that neither begins with birth nor ends in death? In fact, we will no longer die biological deaths; we will die when our life-support systems are switched off. Having said all this, we need to understand that these ideas are both contingent and contestable. A future is not of certainties but of probabilities. Not what will be but what might be. The future is not a future if it is not of the unexpected.

How do you think the current explosion of thought around artificial intelligence changes our relationship with the body, and with our senses?

At a time when the individual body can be existentially threatened by fatal infection from biological viruses, what becomes apparent is that the human species is confronted by the more pervasive and invasive ontological risk of infection from its digital entities and viral algorithms. Yes, AI-powered systems will facilitate and speed up our operations and interactions. The problems will begin to occur when smart systems become smarter than the humans who initiated them. AI-amplified prosthetic attachments and implants will enable more intimate and intuitive interfaces, but will also result in a reliance that might become problematic. Without sounding dystopian, the implications of AI will go beyond its relationship to the human body. Rather than being an aid to the human, AI will increasingly allow machine systems to become intelligent and autonomous. As machines become increasingly networked, they will increasingly be able to communicate and collaborate. What will be interesting will not be the development of Artificial General Intelligence but rather the moment when an Artificial Intelligence evolves into an Alien Intelligence. In fact, the first signs of an alien life form might well come from this planet.

What is the main idea that you are interested in exploring now through your work?

Well, there has not been a linear progression from the physical, to the robotic, to the virtual. My projects and performances oscillate between these different modes of operation. And in fact, we are all now expected to perform in Mixed Realities. To seamlessly slide between actual and virtual systems. Between offline and online operation. Movatar, 2000, is an inverse motion capture system. Instead of a body animating a computer entity, it allows an avatar to perform in the real world by possessing part of a physical body. The motion exoskeleton interface of the upper torso has only 3 degrees of freedom for each arm, but this allows 64 possible combinations. The body is split not left from right, but torso from legs. Whilst the arms are actuated by the avatar's code, the legs are free to move, turn and press floor sensors that in turn can modulate the avatar's behaviour. Changes in rhythm, mutation and complexity of posture strings, for example, can be effected. The motion prosthesis can be seen as the muscles of the avatar in the real world. This is an actual-virtual interface where a dialogue is generated between a biological body and a data construct. The body as both a possessed and performing body.

In the Re-Wired / Re-Mixed performance, 2017, for five days, six hours every day continuously, I could only see with the eyes of someone in London and could only hear with the ears of someone in New York, whilst anyone, anywhere, at any time could access my right arm via the exoskeleton and remotely animate its motions at the Perth Institute of Contemporary Art. A distributing of agency and a sharing of senses. Effectively, the body was in three places at once – two virtually, in London and New York, and one physically, in Perth. Originally, I wanted to stream using a 360 camera, but this wasn’t possible at the time. And it would be excellent to realise this project not merely streaming from cameras and microphones but wiring directly into a remote person’s vision and hearing. And of course, it was not necessary to use an exoskeleton to actuate my arm. That might be done via muscle stimulation from the promptings of someone’s brain. It would be interesting to subjectively experience a remotely connected eye, a remotely connected ear and a brain remotely connected to an arm and a body elsewhere…
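To make the posture arithmetic above concrete: with 3 degrees of freedom per arm, each treated as a binary state, one arm has 2³ = 8 possible postures, and the two arms together give 8 × 8 = 64 combinations. The following minimal Python sketch simply enumerates them; the flexed/extended encoding and all names are illustrative assumptions, not details of the actual Movatar software.

from itertools import product

DOF_PER_ARM = 3  # 3 degrees of freedom per arm, each treated as a binary state

# Each arm posture is a tuple of 3 joint states (0 = extended, 1 = flexed).
arm_postures = list(product((0, 1), repeat=DOF_PER_ARM))  # 2**3 = 8 postures per arm
both_arms = list(product(arm_postures, repeat=2))         # 8 * 8 = 64 combinations

print(len(arm_postures))  # 8
print(len(both_arms))     # 64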

Barcelona is positioning itself as a center of digital culture. After your visit to the city to participate in Digital Impact, do you have any future projects in the city?

The NewArt Foundation is funding a project, to be completed in 2024, for a series of performances in Europe. It will be an amplified, omnidirectional and mobile exoskeleton with a robotic arm that I can perform with. But it will also be a standalone interactive installation where people in other places will be able to choreograph the robot’s movements and the robot arm’s functions via a graphical interface. Choreographing these movements and functions will also compose the sounds of the robot.
