Fly Printer - Extended: an artwork with fruit flies, artificial intelligence and humans
posted by Laura Beloff on 19 April 2016

The Fly Printer - Extended version was created within the Hybrid Matters project and is also the final development of the Fly Printer - Danish Crown. This ongoing project reflects my interest in the merger of biology and technology.

Here is the story of how it all started with the Fly Printer:

The Fly Printer has been an ongoing project since 2014. The concept and the first version of the work were created in collaboration between Laura Beloff and Maria Antonia González Valerio in the summer of 2014, during a residency at Cultivamos Cultura organized by Marta de Menezes. The first exhibitable version was titled The Fly Printer; Prototype No. 3. The piece included fruit flies in a spherical habitat, food for them mixed with printer inks in cyan, magenta, yellow and black, and of course a book where they could create their images.

At the time we wrote (excerpt): "The standardization of images and of the machines that relate to the production of images implies an alteration of the gaze. The eyes observe what the machine generates as reality, as icon. The homogeneous methods of the visualization apparatus represent a flattening of the options for producing images. How many egalitarian apparatuses do we have to reproduce the world by way of similar appearances?
Uniform machines and the production of image-types teach the eye to adjust itself to what fits inside a frame. The epoch of ubiquitous screens is also one of reduced possibilities of perception: one sees the same in the same.
But what is the outcome of a machine or an artifact, like the Fly Printer, that is dislocated, that produces images that have no meaning, no instrumentality, that depict nothing in the world? These images, if they can be called that, are phantoms, simulacra (Deleuze) fabricated beyond human manipulation and intention.
In the Fly Printer a standard biological model organism is used to replace a standard part of our common printer technology. The work points to a divide between the engineered and the organic and shows the human aspiration for control of information and of biological species. Frustratingly, the work does not allow control over the flies and the printing surface; the flies decide whether it is suitable to print on the paper or on the glass sphere. In other words, the results from this printer are uncontrollable; the resulting prints are random traces of biological processes."


During 2014-15 Beloff continued to work on the Fly Printer, running numerous tests on different forms, foods and conditions for the flies. The concept was kept the same, but a new form was developed that specifically addressed Danish culture through the use of Danish-design coffeepot parts (originally a Santos coffeepot, later Bodum), which were assembled together to form the habitat for the flies. The habitat began to resemble the emblematic Danish crown, which gave the new version its name: Fly Printer - Danish Crown. This printer version produces different kinds of prints, as the four colors are separated into their own compartments; nevertheless, after some time the dots appearing under each spherical compartment broaden their color scheme from mono- to multi-color as the flies fluently migrate from one compartment to another.


Gradually an extended idea began emerging that addresses the perception and interpretation of images, specifically human versus machine perception. This version, Fly Printer - Extended, includes the concept of the earlier versions, but now in combination with intelligent technology that interprets the biological traces produced by non-human organisms. This technologically extended version was finalized together with Malena Klaus, who developed the software set-up with a neuronal-network learning system, supported by advice from Sebastian Risi and Joel Lehman.


Fly Printer - Extended 2016 (the parts in italic are written by Malena Klaus)

The Fly Printer – Extended investigates my increasing interest in the merger of a human, a non-human, and an artificial entity.

The work is developed based on the previous versions of the Fly Printer. It contains biological organisms (fruit flies) that are treated in the work as image-producing technology, but additionally it includes an intelligent system with a camera and a convolutional neuronal network (CNN) for image recognition and interpretation. In other words, this artwork creates an autonomous system that firstly produces images; secondly, a machine vision system observes these images, which are patterns produced by the flies; and thirdly, artificial intelligence (the CNN) interprets these images to itself and to a human observer.

[Image: Fly Printer, testing the camera and papers]

The work was triggered by the fact that our societal infrastructures use increasing amounts of automated and autonomous systems that rely on artificial intelligence, e.g. intelligent image-recognition systems. However, these systems, even if one can claim that they are capable of learning, are still limited. One of the challenges has been the over-interpretation of images by the artificial system, which highlights differences in how neuronal networks and humans recognize and project meaning onto images and objects. This means that even when an image has become unrecognizable to a human, an intelligent neuronal-network system may still believe with over 99% certainty that it is an object familiar to human perception (Nguyen, Yosinski & Clune 2015). This artwork plays ironically on this problem: the system aims at interpreting, for human viewers, images that are produced by a fruit fly community and which are in the first place unrecognizable to human perception beyond abstraction.

In the work the interpretation of the images produced by the fruit fly community is a continuous process: as the image develops with more dots appearing on the paper, the neuronal network refreshes its interpretation every 10 minutes. The camera takes images that are fed, after a few image-processing steps, into a neuronal network for image recognition. This generates classifiers of what the network believes it sees in the image. These classifiers are then visualized and published on Twitter together with the initial image.
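The capture, preprocess, classify and publish cycle described above can be sketched as follows. This is a minimal illustrative sketch, not the installation's actual software: all function names and the placeholder implementations are hypothetical, and the example labels are taken from the exhibition description later in this text.

```python
REFRESH_SECONDS = 600  # the network refreshes its interpretation every 10 minutes

def capture_image():
    """Placeholder for grabbing a camera frame of the fly-printed paper."""
    return "raw-frame"

def preprocess(frame):
    """Placeholder for the few image-processing steps before recognition."""
    return frame

def classify(image):
    """Placeholder for the CNN: returns (label, confidence) classifiers."""
    return [("umbrella", 0.14), ("radio telescope", 0.12), ("pirate", 0.11)]

def publish(image, classifiers):
    """Placeholder for posting the image and its labels to Twitter."""
    print(image, classifiers)

def run_once():
    """One pass of the cycle: capture, preprocess, classify, publish."""
    image = preprocess(capture_image())
    classifiers = classify(image)
    publish(image, classifiers)
    return classifiers

# In the installation this cycle would repeat indefinitely, e.g.:
#   while True:
#       run_once()
#       time.sleep(REFRESH_SECONDS)
```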

Humans are amazing pattern-matching machines; we are trained to quickly recognize and classify all sorts of patterns. When we see a lion we know immediately that it is one, and furthermore we will look for other clues that tell us whether we are in danger or not. Computers, in contrast, are quite limited: they only do what software tells them to do, nothing more. For example, a human-like classification of images seemed almost impossible until the development of sophisticated neuronal networks. The basic idea of neuronal networks is the imitation of the structure of the human brain. To achieve this, we use artificial neurons. One type of artificial neuron is called a perceptron (Rosenblatt 1958). A perceptron can have multiple inputs and produces a single binary output. The links between the inputs and the perceptron are weighted, and the perceptron has a threshold, which determines whether the output is 0 or 1. With a perceptron we can tackle linear problems; for more complex problems we can use multiple perceptrons.
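A perceptron as described above fits in a few lines. The sketch below (illustrative names, not the project's code) computes the weighted sum of the inputs and compares it against the threshold to produce the binary output, solving a linear problem such as logical AND:

```python
def perceptron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the inputs exceeds the threshold, else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Logical AND is linear: only when both inputs are active does the
# weighted sum (1.0 + 1.0 = 2.0) clear the threshold of 1.5.
and_weights = [1.0, 1.0]
and_threshold = 1.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], and_weights, and_threshold))
```

Non-linear problems such as XOR cannot be solved by a single perceptron, which is why more complex tasks require multiple perceptrons arranged in layers.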

Multilayer perceptrons, in combination with a sigmoid function, can then be trained with training data using the back-propagation algorithm. For this Fly Printer project we used a variation of neuronal networks called convolutional neuronal networks (CNN). These are designed to take advantage of the two-dimensional structure of an input image by connecting a multilayer network with subsampling and pooling. The idea behind this is that certain patterns in images are not bound to their position, and patterns, once learned, can be reused for other parts of the image. The CNN therefore consists of convolutional and subsampling layers followed by fully connected layers. The input for a CNN is an image of size width × height × channels. The convolutional layer then filters the image, creating a locally connected structure which is convolved with the image to create feature maps. These maps are pooled over p × p contiguous regions, where p ranges between 2 and 5; the smaller the image, the smaller p. The following densely connected layers are identical to the layers in a normal multilayer neuronal network.
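The two CNN building blocks described above, convolving a filter over an image to produce a feature map and then pooling over p × p regions, can be sketched in plain Python (illustrative helper names; a real CNN would use learned kernels and a deep-learning library):

```python
def convolve(image, kernel):
    """Slide a small kernel over a 2-D image (valid positions only).
    The same weights are reused at every position, which is why a
    pattern, once learned, applies anywhere in the image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

def max_pool(feature_map, p=2):
    """Reduce each p x p region of the feature map to its maximum value."""
    return [[max(feature_map[i + a][j + b]
                 for a in range(p) for b in range(p))
             for j in range(0, len(feature_map[0]) - p + 1, p)]
            for i in range(0, len(feature_map) - p + 1, p)]

# A 5x5 image with a vertical edge, and a kernel that responds where
# pixel values change from left to right:
image = [[0, 0, 1, 1, 1]] * 5
kernel = [[-1, 1], [-1, 1]]

fmap = convolve(image, kernel)  # 4x4 feature map; strong response at the edge
pooled = max_pool(fmap, p=2)    # 2x2 after pooling; the edge response survives
```

Pooling shrinks the feature map while keeping the strongest responses, which is what makes the learned patterns robust to small shifts in position.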

(above sections written in italic are by Malena Klaus.)

Below is an image of the current set-up at Nikolaj Kunsthal in Copenhagen (May-July 2016).

The projected image shows the dots produced by the flies living within the spherical transparent habitat. The dots are connected with lines, and the image is sent through the CNN system, which interprets the image to us. What the AI system thinks it is recognizing is written around the image, e.g. UMBRELLA 14%, RADIO TELESCOPE 12%, PIRATE 11% ... A new refreshed image is sent to the system every few minutes.