‘Blade Runner’ and ‘A Scanner Darkly’ reconstructed with an autoencoder

“I’ve seen things you people wouldn’t believe,” said the Nexus-6 replicant Roy Batty at the end of the film Blade Runner.

Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears ... in ... rain.

It’s a great speech—one largely rewritten by Rutger Hauer himself—which suggests this bad-boy replicant has experienced a state of consciousness beyond its intended programming.

While we can imagine what Batty’s memories look like, we can never see or experience them as the replicant or android saw them. Which is kinda damned obvious—but raises a fascinating question: Would an android, a robot, a machine see things as we see them?

It is now believed that humans use up to 50% of their brain to process vision—which gives you an idea of the sheer complexity involved in even attempting to create a machine that could successfully read or visualize its environment. Do machines see? What do they see? How can they construct images from the input they receive?

The human eye can recognize handwritten numbers or words without difficulty. We process information unconsciously. We are damned clever. Our brain is a mega-supercomputer—one that scientists still do not fully understand.

Now imagine trying to create a machine that can do what the human brain does in literally the blink of an eye. Our sight can read emotion. It can intuit meaning. It can scan and understand and know whether what it takes in is dangerous or funny. We can look at a cartoon and know it is funny. Machines can’t do that. Yet.

A neural network is a computer system modeled on the human brain and nervous system. One type of neural network is an autoencoder.

Autoencoders are “simple learning circuits which aim to transform inputs into outputs with the least possible amount of distortion.”
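To make that idea concrete, here is a minimal sketch of an autoencoder in PyTorch. It is purely illustrative: the layer sizes and names are assumptions, not anything from the research quoted above. The network is simply trained to reproduce its own input, so the training loss directly measures distortion between input and output.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: squeeze the input down to a small code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: rebuild the input from that code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                     # dummy batch of flattened images
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), x)  # "least possible distortion"
loss.backward()
opt.step()
```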

Here’s a robotic arm using deep spatial autoencoders to “visualize” a simple function.
 

 
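The “deep spatial autoencoder” in the clip above comes out of robotics research, and its defining trick is a spatial soft-argmax: each convolutional feature map is collapsed into the expected (x, y) position of its strongest activation, giving the robot a compact set of feature points to work from. Here’s a hedged sketch of just that layer—an illustration of the technique, not the code behind the video:

```python
import torch
import torch.nn.functional as F

def spatial_soft_argmax(feature_maps):
    """Collapse (batch, channels, H, W) feature maps into (batch, channels, 2)
    expected (x, y) coordinates in [-1, 1]."""
    b, c, h, w = feature_maps.shape
    # Softmax over all spatial positions, separately for each channel.
    attn = F.softmax(feature_maps.view(b, c, h * w), dim=-1).view(b, c, h, w)
    ys = torch.linspace(-1.0, 1.0, h).view(1, 1, h, 1)
    xs = torch.linspace(-1.0, 1.0, w).view(1, 1, 1, w)
    x = (attn * xs).sum(dim=(2, 3))  # expected x position per channel
    y = (attn * ys).sum(dim=(2, 3))  # expected y position per channel
    return torch.stack([x, y], dim=-1)
```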
Terence Broad is an artist and research student in the Computing Department at Goldsmiths, University of London. Over the past year, Broad has been working on a project reconstructing films with artificial neural networks: training them to reconstruct individual frames from films, then getting them to reconstruct every frame in a given film and resequencing the results.

The type of neural network used is an autoencoder. An autoencoder is a type of neural net with a very small bottleneck: it encodes a data sample into a much smaller representation (in this case, a 200-digit number), then reconstructs the data sample to the best of its ability. The reconstructions are in no way perfect, but the project was more of a creative exploration of both the capacity and limitations of this approach.
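To picture how that works on film, here is a sketch of a convolutional autoencoder with a 200-dimensional bottleneck, run over a batch of frames in order. To be clear, this is not Broad’s actual model or code—the frame size and layer sizes are assumptions—it only illustrates the encode, reconstruct, and resequence idea described above.

```python
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, latent_dim=200):
        super().__init__()
        # Encoder: 3x64x64 frame -> 200-dimensional code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        # Decoder: 200-dimensional code -> reconstructed 3x64x64 frame.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Resequencing: push every frame through the trained model in its
# original order, so the reconstructions line back up into a film.
model = FrameAutoencoder()
frames = torch.rand(16, 3, 64, 64)   # stand-in for a batch of film frames
with torch.no_grad():
    reconstructed = model(frames)
```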

The resulting frames are strange, watercolor-like images that are most identifiable when placed side by side with the original source material. That the networks can reproduce such fast-flickering imagery at all is, well, damned impressive.

Among the films Broad has used are two Philip K. Dick adaptations, Blade Runner and A Scanner Darkly, which is apt considering Dick’s interest in androids and the question “What is reality?”
 
Much more after the jump…
 

Posted by Paul Gallagher | 05.26.2016 | 10:49 am