‘Blade Runner’ and ‘A Scanner Darkly’ reconstructed with an autoencoder

“I’ve seen things you people wouldn’t believe,” said the Nexus-6 replicant Roy Batty at the end of the film Blade Runner.

Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears ... in ... rain.

It’s a great speech, one largely rewritten by Rutger Hauer himself, which suggests this bad boy android or replicant has experienced a state of consciousness beyond its intended programming.

While we can imagine what Batty’s memories look like, we can never see or experience them as the replicant or android saw them. Which is kinda damned obvious—but raises a fascinating question: Would an android, a robot, a machine see things as we see them?

It is now believed that humans use up to 50% of their brain to process vision, which gives you an idea of the sheer complexity involved in even attempting to create a machine that could successfully read or visualize its environment. Do machines see? What do they see? How can they construct images from the input they receive?

The human eye can recognize handwritten numbers or words without difficulty. We process information unconsciously. We are damned clever. Our brain is a mega-supercomputer—one that scientists still do not fully understand.

Now imagine trying to create a machine that can do what the human brain does in literally the blink of an eye. Our sight can read emotion. It can intuit meaning. It can scan and understand and know whether something it takes in is dangerous or funny. We can look at a cartoon and know it is funny. Machines can’t do that. Yet.

A neural network is a computer system modeled on the human brain and nervous system. One type of neural network is an autoencoder.

Autoencoders are “simple learning circuits which aim to transform inputs into outputs with the least possible amount of distortion.”
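In code, the idea really is that simple. Here is a minimal sketch in PyTorch (the layer sizes are illustrative, not taken from any project mentioned here): the encoder squeezes each input down to a small code, the decoder tries to rebuild the input from that code, and training minimizes the distortion between the two.

```python
import torch
import torch.nn as nn

# A minimal autoencoder: squeeze the input through a narrow bottleneck,
# then try to rebuild the original with as little distortion as possible.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, bottleneck_dim),   # the compressed code
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),                     # pixel values back in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
batch = torch.rand(16, 784)                         # 16 flattened 28x28 images
loss = nn.functional.mse_loss(model(batch), batch)  # reconstruction error
```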

Here’s a robotic arm using deep spatial autoencoders to “visualize” a simple function.
 
[Video: robotic arm demonstration]
 
Terence Broad is an artist and research student in the Computing Department at Goldsmiths, University of London. Over the past year, Broad has been working on a project reconstructing films with artificial neural networks: training them to reconstruct individual frames from films, and then getting them to reconstruct every frame in a given film and resequencing the results.
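As a rough sketch, that frame-by-frame pipeline might look something like the following, using OpenCV for the video handling. The trained model, its reconstruct method, and the file names are all assumptions for illustration, not Broad’s actual code.

```python
import cv2
import numpy as np

class IdentityModel:
    """Stand-in for a trained autoencoder; swap in a real model here."""
    def reconstruct(self, frame):
        return frame

model = IdentityModel()  # assumption: a trained model with this interface

# Decode the film one frame at a time, run each frame through the
# model, and resequence the reconstructions into a new video.
reader = cv2.VideoCapture("blade_runner.mp4")      # assumed input file
fps = reader.get(cv2.CAP_PROP_FPS)
writer = None

while True:
    ok, frame = reader.read()
    if not ok:                                     # end of the film
        break
    small = cv2.resize(frame, (256, 144))          # shrink for the network
    output = model.reconstruct(small)
    if writer is None:                             # open output on first frame
        h, w = output.shape[:2]
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        writer = cv2.VideoWriter("reconstruction.mp4", fourcc, fps, (w, h))
    writer.write(output.astype(np.uint8))

reader.release()
if writer is not None:
    writer.release()
```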

The type of neural network used is an autoencoder. An autoencoder is a type of neural net with a very small bottleneck: it encodes a data sample into a much smaller representation (in this case a 200-digit number), then reconstructs the data sample to the best of its ability. The reconstructions are in no way perfect, but the project was more of a creative exploration of both the capacity and limitations of this approach.
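To make that “very small bottleneck” concrete, here is a sketch of a convolutional autoencoder that squeezes a 96x96 RGB frame down to a 200-dimensional code and back, reading the “200-digit number” above as a 200-dimensional latent vector. Broad’s actual model was more sophisticated (his write-up describes a variational autoencoder trained with a learned similarity metric); this plain deterministic version only illustrates the bottleneck idea.

```python
import torch
import torch.nn as nn

# Sketch only: every frame is forced through a 200-number bottleneck,
# so the decoder must repaint the frame from a drastic summary of it.
class FrameAutoencoder(nn.Module):
    def __init__(self, latent_dim=200):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 96 -> 48
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 48 -> 24
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 24 -> 12
            nn.Flatten(),
            nn.Linear(128 * 12 * 12, latent_dim),   # the tiny bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 12 * 12), nn.ReLU(),
            nn.Unflatten(1, (128, 12, 12)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame):
        return self.decoder(self.encoder(frame))

frames = torch.rand(8, 3, 96, 96)           # a batch of 8 RGB frames
codes = FrameAutoencoder().encoder(frames)
print(codes.shape)                          # torch.Size([8, 200])
```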

The resultant frames are strange watercolor-like images that are recognizable, especially when placed side by side with the original source material. That the models can reproduce such fast-flickering information at all is, well, damned impressive.

Among the films Broad has used are two Philip K. Dick adaptations, Blade Runner and A Scanner Darkly, which is apt considering Dick’s interest in androids and the question he kept asking: “What is reality?”

This was also one of Broad’s artistic reasons for making an autoencoded reconstruction of the film Blade Runner:

Ridley Scott’s Blade Runner (1982) is the film adaptation of the classic science fiction novel Do Androids Dream of Electric Sheep? by Philip K. Dick (1968). In the film Rick Deckard (Harrison Ford) is a bounty-hunter who makes a living hunting down and killing replicants — androids that are so well engineered that they are physically indistinguishable from human beings. Deckard has to issue Voight-Kampff tests in order to distinguish androids from humans, asking increasingly difficult moral questions and inspecting the subject’s pupils, with the intention of eliciting an empathic response in humans, but not androids.

I won’t go into all of the philosophical issues explored in Blade Runner...but what I will say is this: while advances in deep learning systems are coming about through their becoming increasingly embodied within their environments, a virtual system that perceives images but is not embodied within the environment that the images represent is — at least allegorically — a model that shares a lot with the characteristics of Cartesian dualism, where mind and body are separated.

An artificial neural network, however, is a relatively simple mathematical model (in comparison to the brain), and anthropomorphising these systems too readily can be problematic. Despite this, the rapid advances in deep learning mean that how models are structured within their environments, and how that relates to theories of mind, must be considered for their technical, philosophical and artistic consequences.

One of the overarching themes of the story is that the task of determining what is and isn’t human is becoming increasingly difficult amid ever-increasing technological development.

You can read all of the technical background and artistic motivation behind Broad’s project here.
 
[Video: side-by-side comparison of Blade Runner and its autoencoded reconstruction]

The reconstructed film is surprisingly coherent. It is by no means a perfect reconstruction, but considering that this is a model that is only designed to model a distribution of images of the same type of thing taken from the same perspective, it does a good job given how varied all of the different frames are.

Broad then tried reconstructing A Scanner Darkly:

After reconstructing Blade Runner, I wanted to see how the model would perform being trained on a different film. I chose the 2006 film A Scanner Darkly — another adaptation of a Philip K. Dick novel — as it is stylistically very different from Blade Runner. Interestingly, A Scanner Darkly was animated using the interpolated rotoscoping method, meaning it was filmed on camera, and then every frame was hand-traced by an animator.

The model does a reasonably good job of capturing the style of the film (though not nearly as well as the recent style transfer for videos), but struggles to an even greater degree in reconstructing faces. This is probably because of the high-contrast outlines and complexities of the facial features, as well as the exaggerated and unnatural frame-to-frame variation in shading.

 
[Video: autoencoded reconstruction of A Scanner Darkly]
 
In conclusion, Broad writes:

In all honesty I was astonished at how well the model performed as soon as I started training it on Blade Runner. The reconstruction of Blade Runner is better than I ever could have imagined, and I am endlessly fascinated by the reconstructions these models make of other films. I will certainly be doing more experiments training these models on more films in future to see what they produce. In further work I would like to adapt the training procedure to take into account the consecutive order of frames, primarily so the network can better differentiate between long sequences of similar frames.
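That last idea is concrete enough to sketch. One hedged guess at what a sequence-aware objective could look like, purely an illustration rather than anything Broad describes: add a penalty that keeps the latent codes of neighbouring frames close, so that long runs of similar frames map to smoothly varying codes instead of collapsing together.

```python
import torch
import torch.nn.functional as F

# Illustrative only: reconstruction loss plus a smoothness term on the
# latent codes of consecutive frames. `model` is assumed to expose
# encoder/decoder attributes like the sketches above.
def sequence_aware_loss(model, frames, weight=0.1):
    """frames: tensor of shape (time, channels, height, width), in order."""
    codes = model.encoder(frames)                 # one latent code per frame
    recon = model.decoder(codes)
    recon_loss = F.mse_loss(recon, frames)
    smoothness = (codes[1:] - codes[:-1]).pow(2).mean()  # neighbours stay close
    return recon_loss + weight * smoothness

# e.g. with the FrameAutoencoder sketch above:
# loss = sequence_aware_loss(FrameAutoencoder(), torch.rand(8, 3, 96, 96))
```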

Via Medium

Posted by Paul Gallagher | 05.26.2016 | 10:49 am