Playing Perfectly: On Outsourcing Creation to Machines

Text by Katherine Oktober Matthews.
First published in GUP Magazine, Issue #54 - Playful, The Netherlands, August 2017.

The Trophy Camera is only interested in perfect images, and deletes anything it deems lesser. There’s no viewfinder, so you get no real insight into what the camera sees, and you can’t review your failed images to work out what might have gone wrong. In short, the camera knows better than you what makes a great image anyway.

A conceptual project by photographer Max Pinckers (b. 1988, Belgium) and media artist Dries Depoorter (b. 1991, Belgium), the Trophy Camera v0.9 is a technical device that combines image analysis with data from winning images from the World Press Photo awards. Based on a comparison between the image you’ve taken and previous winners, your photo is given a grade – and is either declared a winner, in which case it’s uploaded to a website for viewing, or it is deleted immediately.

We spoke with Pinckers to learn more.

Tell us about your collaboration with Dries.

I’ve always been thinking about, or been slightly bothered by, certain conventions and tropes in photojournalism. I’m interested in why this happens, how they’ve been created and how they finally arrive at the viewer – because there’s a very long process between the photographer who makes a picture and the point at which it ends up on the front page of a newspaper. Dries, who studied at the same school as me in Ghent, is a conceptual media artist, and he had been working on some very similar ideas about what makes a ‘work of art’, so we decided to combine our ideas.

How does the Trophy Camera determine what is a good image?

We’re using World Press Photo award-winning images as a basis, but it’s not specifically about them as an organisation. There are APIs available from large companies like Google that do image analysis. Dries combines a couple of these different APIs into a unique algorithm that tags these WPP images based on their contents – like ‘woman’, ‘gun’, ‘facial hair’ – and assigns a weight to each of those tags. This is then used to evaluate any picture you take. You point the camera, you push the red button, and then on the screen, it tells you what it sees in text. It says “Thinking…” while it takes a moment to compare it with the data set, and then gives you a grade. If it’s 90% or more, it says “Winner” and uploads it to a website.

But this is still a prototype. We’re developing version 1.0 to have more data for better analysis, and a cellular connection instead of just wifi.
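The grading step described above can be sketched in code. Note this is a purely illustrative reconstruction, not the artists’ actual implementation: the tag vocabulary, the weight values, and the averaging rule are all assumptions; only the idea of weighted tags and the 90% “Winner” threshold come from the interview.

```python
# Illustrative weights, as if learned by tagging World Press Photo winners.
# The tags are examples mentioned in the interview; the numbers are made up.
WPP_TAG_WEIGHTS = {
    "woman": 0.8,
    "gun": 0.9,
    "facial hair": 0.4,
    "child": 0.7,
}

WINNER_THRESHOLD = 0.90  # 90% or more is declared a "Winner"


def score_photo(detected_tags):
    """Grade a photo by averaging the weights of its detected tags.

    Tags with no weight in the winner data score zero.
    """
    if not detected_tags:
        return 0.0
    total = sum(WPP_TAG_WEIGHTS.get(tag, 0.0) for tag in detected_tags)
    return total / len(detected_tags)


def judge(detected_tags):
    """Return (grade, verdict): "Winner" is kept, anything less is deleted."""
    grade = score_photo(detected_tags)
    if grade >= WINNER_THRESHOLD:
        return grade, "Winner"   # would be uploaded to the website
    return grade, "Deleted"      # the camera discards it immediately
```

So a photo tagged only `"gun"` would score 0.9 and be kept, while one tagged `"woman"` and `"gun"` would average below the threshold and be deleted – which hints at why the camera’s judgements can feel arbitrary.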

At this early stage, the images are really not that great, but it does trigger you to wonder what is it about each of them that makes it a ‘winner’.

Exactly. It shows how, if you give the algorithm a picture of a grieving mother over the body of her just-deceased child, a computer doesn’t emotionally interpret the picture. It sees a woman, it sees a child, and maybe it can recognise ‘grieving’ because it knows from all the other pictures that it’s supposed to be grieving, but it doesn’t feel anything.

This super-rational tagging is already one of the reasons why we see many of these recognisable tropes recurring in news and photojournalism. There are so many images produced and they all get uploaded to databases, with algorithms that automatically apply tags, so that the press desks can easily find something to publish without going through millions of images. So, computers already play a role in deciding which images we get to see through this system of tagging and databases, we just aren’t very aware of it.

Are you troubled by the technological capabilities that make such a camera possible?

Yes, of course, that’s very much the idea behind the project. If you look at the development of digital cameras from the beginning until now, the main focus, and the thing moving really quickly, is the automation of picture-taking. This goes hand-in-hand with less control over the output, so less room for creativity. So, it’s more and more the machine and the technology that decides what images we make, because it’s fully automated. We are all still made to feel that we are somehow in control of the image-making process, but that control is very much diminishing.

I think it’s a scary thought that there will come a point where we have cameras that create images based on everything we’ve made up to now, and we hand that decision over to an automated machine – what does that imply? We should be thinking about this.

The idea of the camera is essentially conceptual though. Would you be troubled if someone took this idea to market?

It’s funny, because some people don’t actually realise that it is an artwork; they think it will be commercialised someday and are waiting for it to appear in stores. But it would never work as a product on the market because people are looking to express themselves with the products they buy. They need to maintain a feeling of control, to some extent, when making photos.

As a photographer, do you feel threatened by the potential success of AI?

No, I don’t think so. As a photographer, what I’m constantly looking for are things that we all think we know, the set of parameters that we all recognise and all see, and then to try and challenge that. And through questioning that, we put ourselves in a position of really critically reflecting on things. And a computer is not going to be able to do that for us.

So, I’m not worried about taking good pictures, because if you give a child a digital camera these days, they will also make a good picture. I don’t feel attacked by that at all as a photographer. It’s much more about thinking about things, and questioning things, and trying to figure out how the world works in a different way. Machines can’t do that. Yet.