Interface

Coralie Vogelaar

Research on Emotions by Pattern Recognition


Emotion recognition software analyses our emotions by deconstructing facial expressions into the temporal segments that produce them, called Action Units (AUs), and breaking these down into percentages of six basic emotions: happy, sad, angry, surprised, scared, and disgusted. The underlying measuring system – the Facial Action Coding System (FACS) – was developed by Paul Ekman and Wallace V. Friesen and published in 1978. In this system, 43 muscles in the face are deciphered, which together can make a vast number of movements and almost infinite combinations.
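To make the decoding step concrete, here is a minimal sketch of how such a scoring step can work. The AU combinations follow commonly cited EMFACS-style prototypes; the overlap-based scoring rule and the function name are illustrative assumptions, not the proprietary classifiers that commercial packages actually use.

```python
# Illustrative sketch: score detected Action Units (AUs) against
# prototypical AU combinations for the six basic emotions.
# The prototypes are commonly cited EMFACS-style mappings; real
# software uses proprietary, far more elaborate classifiers.

PROTOTYPES = {
    "happy":     {6, 12},              # cheek raiser + lip corner puller
    "sad":       {1, 4, 15},           # inner brow raiser + brow lowerer + lip corner depressor
    "surprised": {1, 2, 5, 26},        # brow raisers + upper lid raiser + jaw drop
    "scared":    {1, 2, 4, 5, 20, 26},
    "angry":     {4, 5, 7, 23},
    "disgusted": {9, 15},              # nose wrinkler + lip corner depressor
}

def emotion_percentages(detected_aus: set[int]) -> dict[str, float]:
    """Score each basic emotion by how much of its AU prototype is
    present in the detected AUs, then normalise to percentages."""
    raw = {
        emotion: len(detected_aus & prototype) / len(prototype)
        for emotion, prototype in PROTOTYPES.items()
    }
    total = sum(raw.values()) or 1.0
    return {emotion: 100 * score / total for emotion, score in raw.items()}

# Example: AU6 + AU12 (a so-called Duchenne smile) scores highest as "happy".
print(emotion_percentages({6, 12}))
```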


Visual artist Coralie Vogelaar worked together with actress Marina Miller Dessau to explore and train these deconstructed facial expressions according to the FACS. These studies have resulted in – amongst other works – the two-channel video installation Interface. In this video, Vogelaar uses the decoding system to turn the process around: here, instead of detecting AUs, a computer is used to generate random strings of AUs, so that complex and perhaps even nonexistent emotional expressions can be discovered. These randomly formed expressions, played in random order, are then analysed again by professional emotion recognition software.
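A hypothetical sketch of this generative reversal, assuming a simple pool of AU numbers: instead of detecting Action Units, the computer samples random combinations of them, some of which correspond to no named emotion. In the installation itself, the generated strings cue a human actress rather than a rendering engine.

```python
import random

# Illustrative pool of Action Unit numbers; the AUs actually used in
# Interface are not specified here, so this list is an assumption.
FACIAL_AUS = [1, 2, 4, 5, 6, 7, 9, 10, 12, 14, 15, 17, 20, 23, 25, 26]

def random_expression(max_aus: int = 5) -> list[int]:
    """Draw a random combination of AUs, possibly one that no
    named emotion uses."""
    return sorted(random.sample(FACIAL_AUS, k=random.randint(1, max_aus)))

# A random string of expressions, shuffled into random order, which could
# then be fed back to emotion recognition software for re-analysis.
sequence = [random_expression() for _ in range(10)]
random.shuffle(sequence)
print(sequence)
```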

In the two-channel video installation, one screen shows real emotions expressed by Miller Dessau as she reacts in real time to stories being told to her, resulting in an emotional rollercoaster captured in a single twenty-minute shot. The other screen shows the observed components of these facial movements, re-enacted the day after by giving her FACS cues. The result is a nonverbal exchange between computerised movements and their corresponding emotions; the detection technique, it seems, is not only triggered by the emotions but can also trigger them in turn.

We experience a computerised way of looking in which the world is made up of particles (isolated facial movements) that are made visible via a certain set of behaviours (structures or algorithms). No pattern is an isolated entity: each is supported by other patterns and by the larger patterns in which it is embedded. This way of looking shows us details we had never noticed before; it escapes the familiar narrative of looking at emotions and creates a new one. At the same time, we notice that machines – and also humans – are condemned to seeing the world in parts. Just like our gaze, the software fixes on something, jumps from place to place, and never sees the whole. It tracks what it is programmed to track. Emotion recognition software, for example, does not see tears, because it is not programmed to track them.

Our own brain is also programmed, shaped by the images and experiences we have seen before. Because of this, we are perfectly capable of constructing (emotional) narratives between the two personas and of giving meaning to the contraction or relaxation of certain facial muscles.