
post Meeting with Evan Raskob

April 4th, 2008

We were stuck in jetlag land last night. Pablo finally slept at about 2am and then we were awake till 6am. Woke up for an 11am meeting with Evan Raskob, the programmer working on the Chameleon Project.

We worked through some of the algorithms, but really we need to sit down with Chris Frith, the social neuroscientist working on the project. We looked at the piece that explores the propagation of emotions. We slowed it down, played with scale and paced the propagation so we could see what was going on. It's a really complex set of probabilities that is hard to understand, especially in the midst of jetlag. It's looking much better than the first version, but it's pretty essential to sit down with both Chris and Evan for a few hours so we all make sure we are understanding each other.
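
As a rough illustration (not the project's actual code), the kind of probabilistic propagation described above can be sketched like this, assuming each figure in a row may "catch" a neighbour's emotion with some contagion probability. All names and values here are assumptions:

```javascript
// Hypothetical sketch of emotion propagation along a row of figures.
// Each step, every figure either keeps its current emotion, or - with
// probability contagionProb - copies the emotion of a random neighbour.
function propagate(emotions, contagionProb) {
  return emotions.map(function (current, i) {
    const neighbours = [];
    if (i > 0) neighbours.push(emotions[i - 1]);
    if (i < emotions.length - 1) neighbours.push(emotions[i + 1]);
    if (neighbours.length > 0 && Math.random() < contagionProb) {
      // "Catch" a neighbour's emotion.
      return neighbours[Math.floor(Math.random() * neighbours.length)];
    }
    return current; // emotion persists
  });
}
```

Even a simple rule like this becomes hard to read at full speed once several figures update at once, which is presumably why slowing it down and pacing the propagation helped.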

propagation031.jpg

We discussed what was next. We need to try a version of the propagation piece that only captures the facial expression and not upper body. We need to experiment with how it looks spatially – can it work in a row? A cross? What is the best way for it to work? Can the figures just be facing each other, as if in dialogue?

We need to get the version of the two heads facing each other working in Processing.

kevinandthomas011.jpg

kevinandthomas021.jpg

kevinandthomas031.jpg

We also need to work with the two-faces piece and investigate ways of scouring the web to find appropriate text to transpose onto the work. We will try the API of wefeelfine.org, for example.

Returned XML samples:

The API is free under the Creative Commons Attribution-NonCommercial-Sharealike license ( http://creativecommons.org/licenses/by-nc-sa/2.5/ ).

Sites that use this API must provide attribution by including the following html on their site:

Powered by: We Feel Fine.
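
As a sketch of how we might pull sentences from it: the endpoint and parameter names below are recalled from the We Feel Fine API documentation and should be double-checked against wefeelfine.org, and the XML attribute names are assumptions based on the returned samples:

```javascript
// Build a query URL for the We Feel Fine API (hypothetical parameters).
function buildQuery(feeling, limit) {
  return "http://api.wefeelfine.org:8080/ShowFeelings" +
         "?display=xml&returnfields=feeling,sentence" +
         "&feeling=" + encodeURIComponent(feeling) +
         "&limit=" + limit;
}

// Extract the "sentence" attribute from each <feeling .../> element
// in the returned XML (assumed format).
function extractSentences(xml) {
  const out = [];
  const re = /<feeling\b[^>]*\bsentence="([^"]*)"/g;
  let m;
  while ((m = re.exec(xml)) !== null) out.push(m[1]);
  return out;
}
```

The extracted sentences could then be filtered by emotion and transposed onto the work.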

Finally, we discussed the integration of touch. Evan mentioned someone who is pretty switched on with dealing with haptics. It would be great to meet him next week.

post emotional contagion film

April 2nd, 2008

We did a four-camera shoot in the Telus studio at Banff. Taking part were Maria Lantin (Director, Intersections Digital Studio, Emily Carr Institute), video artist Leila Sujir, myself, my partner Matthew Wild, and my son Pablo Wild, who is now five months old.

The aim was to capture micro expressions and emotional contagion. The ability to read emotions in others and ourselves is central to empathy and social understanding. We are extremely sensitive to emotional body language, as up to 90% of all of our communication is nonverbal. Emotions and body language spread in social collectives, almost by contagion. In daily encounters, people automatically and continuously synchronize with the facial expressions, voices, gestures, postures, movements and even physiology of others. Some responses happen unconsciously, in milliseconds. Science has revealed that these shifting muscle movements then trigger the actual emotional feeling by causing the same neurons to fire in the brain as if you were experiencing the emotion naturally. When you feel happy, your brain might send a signal to your mouth to smile. With emotional contagion, the tiny facial muscle movements involved in smiling send a signal to your brain, telling it to feel happy (Hatfield 1996). This is how emotions spread.

We organised the cameras in a semi-circle. One camera focused on Maria, one on Matt, one on Leila and one on myself. We talked for an hour. I think, in the future when I have time to look at it, it will become a short film called ‘Mimesis’ – but basically, it's sort of an exercise to understand emotional contagion over time.

So, live action will provide the source material of “Mimesis”. It was important that the event take place within a controlled and well-lit space such as a studio, and the Telus studio at the Banff Centre was great for it. Four digital HD video cameras on tripods focused on the upper body area of the four participants, monitoring interplays of nuances.

In the future, through animation compositing techniques, the video will be slowed to reveal delicate interplays of communication. The voice will be stripped, so the body language can be isolated and amplified. It was shot on a black background in order to focus on the micro-movements of body language. An important area to deal with is pace. First cuts show that working at ten percent speed for a single-channel work doesn't really work.

As far as treating the footage: other than delicate layering techniques, levels, keying and masking, I imagine the vision will be rooted in reality. Each emotional connection will be synched through time, making the piece a scientific documentary of interplay as much as a poetic amplification of the search for empathy. The slowing down of pace will allow the viewer to trace the nuances of communication. The rhythms of emotional contagion will drive the editing style and effects. Once in a while four heads will fill the screen to document the flow of understanding between each other; occasionally the piece will focus on one head at a time, revealing the nuances of micro expressions; every so often compositing techniques will be used, delicately over-layering the four faces, merging them, so the interplays of emotions are traced. The grading of the footage will be strong, emphasizing shadows and highlights. At times the piece will focus on uncomfortable and nervous moments of silence, building to focus on more contagious elements, e.g. how laughter, yawning and touching the face travel throughout social groups. The lighting was quite dramatic. Already I am thinking that I should have focused on the faces more.

Another potential way to look at it could be by concentrating on the moments when we are uncomfortable, confused, bored. I am not sure how to work with it, but so far the rushes look good. It will be interesting to see what Chris Frith, the social neuroscientist I am working with, thinks of it. Important to think about pacing – a sort of ramping or something.

stage012.jpg

mocked-up set-up for the shoot. In the end we didn't use a dinner party setting; we just drank wine. Interestingly, after the cameras had been running for about ten minutes we all forgot about the cameras and the studio setting and just got immersed in conversation.

img_0876.jpg

beginning ideas for treating the footage.

post reading/looking

March 17th, 2008

Bas Jan Ader/ I am too sad to tell you.

untitled-20.jpg

I’m Too Sad To Tell You (1971) is a three minute and twenty one second video of the artist, Bas Jan Ader, inexplicably crying. The fact that we’re not told why he’s crying puts our own reaction to the work on very shaky ground. Generally, it’s the audience that’s supposed to weep in front of artworks, not the other way around.
Bas Jan Ader practiced a romantic kind of conceptual art which involved ideas of falling, failure, sadness, and the sublime, among other things. His last project, part of a three part work entitled In Search Of The Miraculous, involved a sailboat trip from Cape Cod to England in July of 1975. He lost radio contact three weeks into the trip and wasn’t heard from again. Less than a year later his body was found off the coast of Ireland.
I’m Too Sad To Tell You is part of an exhibition currently showing at Perry Rubinstein Gallery in New York. It’s up through the 22nd of December.

Legend of the fall – photographer Bas Jan Ader
ArtForum, March, 1999 by Bruce Hainley

The artist is crying and too sad to tell anyone why. A postcard with the dated note – “Sept. 13 1970. I’m too sad to tell you.” – shows Bas Jan Ader racked by tears. Whatever caused the tears to flow (the artist never publicly stated the reason) is ultimately beside the point. And yet Ader reenacted his private sadness, restaged it, photographed it to mail to others. While his piece retains a “real” sadness, it keeps vital the artifice and melodrama inherent in placing himself before his own camera while crying. Almost all of Ader’s work pulsates with a crisis of some personal intensity. His sincerity is sincere – until it’s not only sincere. Certainly connections exist between the postcard’s sad note and the ominous and purely theatrical qualities of some of his early, simple wall texts (“Please don’t leave me”; “Thoughts unsaid then forgotten”) and carefully chosen titles, like Farewell to Faraway Friends, a photograph of a lone Ader standing on the coast, framed by the setting sun on the horizon – a photo whose sincerity is toyed with by the kitschy, touristy “sunset” colors. To look at this another way, consider for a moment: If I told you that during the month I’ve been thinking about Ader I cried several times, and that I’m crying right now, would you buy it?

Ingres’s Comtesse d’Haussonville

ingress-comtesse-dhaussonville.jpg

Duchenne_de_Boulogne

untitled-3.jpg

untitled-4.jpg

A 19th-century neurologist who worked with electricity to stimulate/simulate emotion. He also published a collection of photographs (a new technology at the time), creating a database of images documenting his electrical experiments on the facial muscles and the emotional states they rendered.

Gary Hill – Tall Ships

garyhill02.jpg

gary-hill.jpg

Acconci’s interests have been persistently psychological and interactive. He pushes viewers mentally and sometimes physically into situations they might prefer to avoid – face to face with primal emotions and childhood memories, either his own or theirs. Initially personal to the point of exhibitionism, his early work often exposed his body and his innermost thoughts – a kind of stream-of-consciousness monologue. An artist more interested in process than the final product, he creates a laboratory full of apparatuses that test one’s tolerance for varying degrees of confinement and action, for intimacy with oneself or the artist.

Vito Acconci – Three Relationship Studies, Vito Acconci, video still, courtesy Electronic Arts Intermix

acconci.jpg

Intensely personal, the films document a range of physical and psychological explorations of the self in relation to others, one’s own body, and the film/video camera.

acconi02.jpg

Vito Acconci, Gargle/Spit Piece 1970, 3 min, color, silent, Super 8 film

The artist, sitting naked, takes water from a pot into his mouth and gargles; he spits it out onto his stomach and groin, transferring the water from one “container” (the pot) to another (his body).

Face to Face Vito Acconci 1972, 15 min, color, silent, Super 8 film

acconci03.jpg

In this exercise in nonverbal communication, Acconci explores facial expressions, and their psychological resonance, as a mode of performance narrative.

Abramovic, Marina; Ulay
«Light/Dark»

ambromavic.jpg

In a given space
We kneel, face to face. Our faces are lit by two strong lamps. Alternately, we slap each other’s face until one of us stops

“Trying to make theatre out of a kind of performance art that involves testing the limits of physical and mental resistance raises special problems. One of Abramovic’s most famous performances is Light/Dark, in which she and Ulay slapped each other’s faces, increasing in speed till they could go no faster. Laub decided to “serialise” it by using several couples. “The difference between performance art and, say, repertory theatre, is that when Marina and Ulay decided to do something as strictly physical as slapping each other in the face for 20 minutes, they didn’t really care if they ended up in hospital the next day, because they were very committed and they didn’t have to reproduce the piece,” he said. “It’s sort of hard as a director to ask these young people to rehearse that.” He had managed, he said, “by apologising a lot”. And maintaining a stock of ice packs. A book on the making of The Biography Remix includes a photograph of one performer on her back at the end of a rehearsal with ice packs on one knee and the side of her face.”

Marina Abramovic & Ulay. Imponderabilia, 1977

ambromavic04.jpg

Marina y Ulay. Performance “El grito”
Imagen de video

ambromavic05.jpg

A scene from The Biography Remix by Marina Abramovic

nauman.jpg

Bruce Nauman’s installation of three fountain sculptures

nauman02.jpg

b from Studies for Holograms (a-e) (1970)

post planning the project

March 8th, 2008

The last few days have flown. Sitting in my studio looking out at the amazing mountains of Banff. The residency has been great. The week has been full of artist presentations.

Working on the stages of Chameleon: mapping it out in a way that might be clear to all collaborators.

1. ethnographic style film to understand emotional contagion (adam kendon pub experiment)

stage01.jpg

2. live tool to understand emotional contagion and isolate micro expressions, most contagious gestures/expressions.

stage022.jpg

3. working on emotional algorithms

stage031.jpg

4. Looking at emotional algorithms with context (a live feed from the web – for example, a live feed from wefeelfine.org)

stage03b1.jpg

5. Looking at emotional contagion and how it works in social groups – the propagation of emotions (making the algorithms of stage 3 more complex and networked)

looking at the propagation of emotions

6. Bringing another mode of interaction into stage 5? – for example, touch. The act of touching is very personal, and it also brings the audience member near to the screen, which makes it easier for the emotion expression software to read the participant (though less spontaneous).

stage4b.jpg

7. Integration of real-time facial expression software. Using a computer with an embedded camera, test how it works with stages 5 and 6.

stage051.jpg

8. Introducing multi-participants – three emotion recognition cameras/three networked screens.

stage061.jpg

9. Idea for final version, up to 20 networked screens/ 8 real time facial recognition cameras.

stage07.jpg

post organising stages.

March 1st, 2008

Had another meeting with Rana. Discussed stages, which parts of the project she should supervise, and what we should achieve before I come back to the Media Lab on the 29th of March.

stage one: Emotional Algorithms

1. working out emotional algorithms with Chris
2. reshoot database in banff
3. Contact Evan to see if the project can work online so all collaborators can view it?

stage two: Look at the propagation of emotions

1. Look at more complex algorithms and how they spread between each other.
2. reshoot this still database in banff (using still photography) (need camera?)

stage three: Multi-modal interaction?
1. test touch as a further mode of interaction. (Touch will bring the audience closer to the portraits, making it easier for the cameras to read them. There could be major issues with the cameras reading the audience spontaneously, and touch could potentially resolve this problem.)
2. shoot a database of Pablo. As the audience touches the portrait of the baby, it becomes like a live Tamagotchi?

stage four: integration
1. Integration of Rana’s technology into project. We will do this using max – her technology will spit out info every few frames.
2. Look at one camera – running on laptop – interacting with digital portraits.

thoughts:
how does sound work with the project? Talking with Banff about what and when to record.

Should we continue to pursue a live emotional contagion tool to really understand how social groups begin to build trust – also what are the most emotionally contagious actions?

talking a lot about engagement – scale, pace, dynamic. What is the immediate response? How do you get someone to want to engage with the work?

how is the imagery treated? – how we look at the face and read emotions changes in different emotional states?

How much can you abstract the image to amplify the emotion?

An initial video database and still database will be shot in banff.

img_0628.jpg

img_0626.jpg

img_0627.jpg

post Stanislavsky technique

February 23rd, 2008

how do you embody an emotion when a camera is pointing directly at you – without artifice, feeling it as real? I have tried a few techniques: shooting over extremely long sessions, shooting after extreme exhaustion, and also working with the clinical hypnotist and psychologist David Oakley using hypnosis. The visuals for this piece will most likely use actors, acting out different emotional states.

I have started talking to Caravanserai – http://www.caravanseraiproductions.com/

Filip Aiello, producer for the Caravanserai Acting Studio, contacted Sarah Blakemore's research group at the Institute of Cognitive Neuroscience. They are starting a production about the human brain and how it functions in regard to emotions and instinct. They wanted to make contact with some neuroscientists – to talk about mirror neurons, etc. We are going to meet in London to discuss if the group should be involved with the project – looking at method acting, the Stanislavsky technique.

post Evan’s photos of the Emotional Contagion Event at the Dana Center

February 15th, 2008

img_1710.jpg
Harry, Neil and myself, getting ready to talk, Dana Event

img_1702.jpg

Pablo taking part in the live event

img_1697.jpg

evan setting up upstairs.

img_1682.jpg

The first stage of Chameleon

post Emotional Algorithms.

February 13th, 2008

Spent the day working on the first prototype. The images need replacing – will shoot stills over the next few days, maybe tomorrow? Also, it is rather blunt – one image for disgust, anger etc. Basically, shot in profile – seven images for each emotion. The male and female face each other; each other's emotional response triggers the appropriate emotional response. Looks good, but the lighting isn't right. Should be shown on two flat-screen monitors. Matt and I can pose tomorrow? Also want to shoot Pablo looking into the mirror for the first time – tomorrow?

Looking at the algorithmic codes that trigger the work. They need more work. Will send them on to Chris Frith and Dylan Evans. He sent through these questions last week, a lot of which I don't know the answers to. Reading Dylan's short book on emotions at the moment. Simple, but great. Chris's book arrived today.

Dylan Evans

“Facial feature tracking is hard enough. My PhD student in Bristol was using Active Appearance Models for tracking, but this requires manual labelling of training footage, which is tedious and introduces errors. Automatic landmarking algorithms enable a more accurate fitting. Do you know what the MIT team is doing to track facial features?”

// Transition tables: for each current emotion, the possible responses
// (from the other face) and the possible follow-ups (from the same face).
emotion[EMOTION_NEUTRAL][POSSIBLE_RESPONSE] = new Array(EMOTION_NEUTRAL, EMOTION_NEUTRAL, EMOTION_HAPPY, EMOTION_SAD);
emotion[EMOTION_NEUTRAL][POSSIBLE_FOLLOW_UP] = new Array(EMOTION_NEUTRAL, EMOTION_HAPPY, EMOTION_SURPRISED, EMOTION_DISGUSTED, EMOTION_ANGRY, EMOTION_SAD);

emotion[EMOTION_HAPPY][POSSIBLE_RESPONSE] = new Array(EMOTION_HAPPY, EMOTION_SURPRISED);
emotion[EMOTION_HAPPY][POSSIBLE_FOLLOW_UP] = new Array(EMOTION_NEUTRAL, EMOTION_HAPPY, EMOTION_SURPRISED);

emotion[EMOTION_SURPRISED][POSSIBLE_RESPONSE] = new Array(EMOTION_NEUTRAL, EMOTION_HAPPY, EMOTION_DISGUSTED, EMOTION_ANGRY, EMOTION_SAD);
emotion[EMOTION_SURPRISED][POSSIBLE_FOLLOW_UP] = new Array(EMOTION_NEUTRAL, EMOTION_HAPPY, EMOTION_DISGUSTED, EMOTION_ANGRY, EMOTION_SAD);

emotion[EMOTION_DISGUSTED][POSSIBLE_RESPONSE] = new Array(EMOTION_SAD, EMOTION_ANGRY, EMOTION_DISGUSTED, EMOTION_SURPRISED);
emotion[EMOTION_DISGUSTED][POSSIBLE_FOLLOW_UP] = new Array(EMOTION_SAD, EMOTION_ANGRY, EMOTION_NEUTRAL, EMOTION_SURPRISED);

emotion[EMOTION_ANGRY][POSSIBLE_RESPONSE] = new Array(EMOTION_SAD, EMOTION_ANGRY, EMOTION_SURPRISED, EMOTION_DISGUSTED);
emotion[EMOTION_ANGRY][POSSIBLE_FOLLOW_UP] = new Array(EMOTION_SAD, EMOTION_ANGRY, EMOTION_SURPRISED, EMOTION_NEUTRAL);

emotion[EMOTION_SAD][POSSIBLE_RESPONSE] = new Array(EMOTION_SAD, EMOTION_ANGRY, EMOTION_SURPRISED);
emotion[EMOTION_SAD][POSSIBLE_FOLLOW_UP] = new Array(EMOTION_NEUTRAL, EMOTION_SAD, EMOTION_ANGRY);
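
Assuming constants like those above are defined, selecting one face's reaction to the other is then just a random draw from the relevant list. This is a hypothetical sketch of how the tables might be used, not the prototype's actual code:

```javascript
// Assumed constant definitions (indices are illustrative only).
const EMOTION_NEUTRAL = 0, EMOTION_HAPPY = 1, EMOTION_SURPRISED = 2,
      EMOTION_DISGUSTED = 3, EMOTION_ANGRY = 4, EMOTION_SAD = 5;
const POSSIBLE_RESPONSE = 0, POSSIBLE_FOLLOW_UP = 1;

// One row of the transition table, matching the listing above.
const emotion = [];
emotion[EMOTION_HAPPY] = [];
emotion[EMOTION_HAPPY][POSSIBLE_RESPONSE] = [EMOTION_HAPPY, EMOTION_SURPRISED];
emotion[EMOTION_HAPPY][POSSIBLE_FOLLOW_UP] = [EMOTION_NEUTRAL, EMOTION_HAPPY, EMOTION_SURPRISED];

// Pick a random element from a list.
function pick(list) {
  return list[Math.floor(Math.random() * list.length)];
}

// The second face responds to the first face's current emotion.
function respondTo(current) {
  return pick(emotion[current][POSSIBLE_RESPONSE]);
}
```

Duplicated entries in a list (as in the neutral row above) would act as simple weights, making some responses more likely than others.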

af09.JPG

bf13.JPG

post reading emotions

February 13th, 2008

In different emotional states, we direct our gaze to different parts of the face first. Using eye tracking: for some emotions we look to the eyebrow/brow area first; for happiness we look to the mouth. Could use this to advantage – could abstract the imagery in real time. When healthy people look at faces, they spend a lot of time looking at the eyes and the mouth, as shown in the figure below. People with damage to the amygdala, with agenesis of the corpus callosum, and with autism all look at faces abnormally.

emotiontracking.jpg

using eye tracking to work out how we view a face to make sense of the expression

studies02.jpg

reworking Paul Ekman's FACS database. Bringing particular features of the faces forward – does the emotion still get read? Is it stronger? How much can you abstract the image before the emotional expression gets lost?

Australian Government The Visual Arts Strategy