
post emotional contagion film

April 2nd, 2008

We did a four-camera shoot in the Telus studio at Banff. Taking part were Maria Lantin, Director of the Intersections Digital Studio at the Emily Carr Institute; video artist Leila Sujir; myself; my partner Matthew Wild; and Pablo Wild, my son, who is now five months old.

The aim was to capture micro-expressions and emotional contagion. The ability to read emotions in others and ourselves is central to empathy and social understanding. We are extremely sensitive to emotional body language – up to 90% of all of our communication is nonverbal. Emotions and body language spread through social collectives, almost by contagion. In daily encounters, people automatically and continuously synchronize with the facial expressions, voices, gestures, postures, movements and even physiology of others. Some responses happen unconsciously, in milliseconds. Research suggests that these shifting muscle movements then trigger the actual emotional feeling, causing the same neurons to fire in the brain as if you were experiencing the emotion naturally. When you feel happy, your brain might send a signal to your mouth to smile. With emotional contagion, the tiny facial muscle movements involved in smiling send a signal to your brain, telling it to feel happy (Hatfield 1996). This is how emotions spread.

We organised the cameras into a semi-circle: one focused on Maria, one on Matt, one on Leila and one on myself. We talked for an hour. I think, in the future when I have time to look at it, it will become a short film called ‘Mimesis’ – but basically it’s a sort of exercise in understanding emotional contagion over time.

So, live action will provide the source material of “Mimesis”. It was important that the event take place within a controlled and well-lit space such as a studio, and the Telus studio at the Banff Center was great for it. Four digital HD video cameras on tripods focused on the upper body of each of the four participants, monitoring interplays of nuances.

In the future, through animation compositing techniques, the video will be slowed to reveal delicate interplays of communication. The voice will be stripped, so the body language can be isolated and amplified. It was shot on a black background in order to focus on the micro-movements of body language. An important area to deal with is pace. First cuts show that slowing the footage to ten percent doesn’t really work for a single-channel piece.

As far as treating the footage – other than delicate layering techniques, levels, keying and masking, I imagine the vision will be rooted in reality. Each emotional connection will be synched through time, making the piece as much a scientific documentary of interplay as a poetic amplification of the search for empathy. The slowing of pace will allow the viewer to trace the nuances of communication. The rhythms of emotional contagion will drive the editing style and effects. Once in a while four heads will fill the screen to document the flow of understanding between each other; occasionally the piece will focus on one head at a time, revealing the nuances of micro-expressions; every so often compositing techniques will be used, delicately over-layering the four faces, merging them, so the interplays of emotions are traced. The grading of the footage will be strong, emphasizing shadows and highlights. At times the piece will focus on uncomfortable and nervous moments of silence, building to focus on more contagious elements, e.g. how laughter, yawning and touching the face travel throughout social groups. The lighting was quite dramatic. Already I am thinking that I should have focused on the faces more.

Another potential way to look at it could be by concentrating on the moments when we are uncomfortable, confused, bored. I am not sure how to work with it, but so far the rushes look good. It will be interesting to see what Chris Frith, the social neuroscientist I am working with, thinks of it. Important to think about pacing – a sort of ramping or something.

stage012.jpg

mock-up of the set-up for the shoot. In the end we didn’t use a dinner party setting, we just drank wine. Interestingly, after the cameras had been running for about ten minutes we all forgot about them and the studio setting and just got immersed in conversation.

img_0876.jpg

beginning ideas for treating the footage.

post presentation of Chameleon Project (stage one)

April 2nd, 2008

Filed under: CHAMELEON PROJECT — Tina @ 3:50 am

I spent the month of March at the Liminal Screen Residency at the Banff New Media Institute. The main aim was to start working through some visual ideas for the Chameleon Project that I am working on over the Australian Network for Art and Technology’s Synapse Residency, which I am currently in the midst of.

I left the residency a few days ago – pretty exhausted actually. The days were full, and the nights were pretty full – and I was working hard – long shoots which were quite emotionally draining and then my nights with my little son, Pablo, who just turned five months.

Looking back – the residency was fantastic as usual. That was my sixth trip to Banff. Banff is a great place to immerse yourself in work – great equipment, great resources, great people. I was there with twelve other artists – similar in themes, but all very different. We spent the last few days in Open Studio – one day of viewing each other’s works and one day of critiquing each other’s works. Hugely valuable, but we were all a bit exhausted.

img_0875.jpg

set up of the beginnings of the Chameleon Project. I was also supposed to present the project at the Media Lab’s sponsor week this week, but I was pretty tired after Banff, and then thought it would be better to show it here once the real-time facial recognition technology developed by the Affective Computing group at the Media Lab is integrated into the work.

img_0874.jpg

img_0873.jpg

img_0877.jpg

Looking at the emotional algorithms – the great Lia Rogers, one of the work-study students at the Banff Center, wrote up the first stage of Chameleon – basically, each person’s expression affects the others’ expressions. It’s still not done – the figures pretty much got stuck in the emotions of anger and sadness all day and never really got out of them. Good start. Need to show Chris and see what he thinks.
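A hedged sketch of how that first stage might work – my own names and placeholder numbers, not Lia’s actual Processing code, and written in Python so it runs standalone: each figure holds an intensity per emotion and drifts towards the average of the others.

```python
# Hypothetical sketch of the first-stage rule: each person's expression
# drifts towards the average expression of the others. All names and
# parameter values here are invented for illustration.

EMOTIONS = ["anger", "sadness", "happiness", "neutral"]

def step(people, susceptibility=0.3, decay=0.0):
    """One update: everyone moves a fraction of the way towards the
    mean expression of the other people in the group."""
    new_people = []
    for i, person in enumerate(people):
        others = [p for j, p in enumerate(people) if j != i]
        updated = {}
        for e in EMOTIONS:
            mean_other = sum(p[e] for p in others) / len(others)
            # drift towards the others, optionally decaying back to zero
            v = person[e] + susceptibility * (mean_other - person[e])
            updated[e] = v * (1.0 - decay)
        new_people.append(updated)
    return new_people

# With decay=0, the group simply converges on the average of its starting
# emotions and stays there forever.
people = [
    {"anger": 0.9, "sadness": 0.1, "happiness": 0.0, "neutral": 0.0},
    {"anger": 0.2, "sadness": 0.7, "happiness": 0.1, "neutral": 0.0},
    {"anger": 0.1, "sadness": 0.1, "happiness": 0.2, "neutral": 0.6},
]
for _ in range(50):
    people = step(people)
```

With no decay term pulling everyone back towards neutral, the group locks onto whatever emotions dominate at the start – one plausible reading of why figures built this way would sit in anger and sadness all day.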

post Sponsor week at the MIT Media Lab

April 2nd, 2008

Filed under: CHAMELEON PROJECT,interesting research — Tina @ 3:34 am

img_0961.jpg

As part of the Synapse Residency, I am back in Cambridge this week – at the MIT Media Lab – we timed it so I was here for sponsor week. Sponsor week is a major twice-yearly event where the Media Lab sponsors have the chance to visit the lab and learn about the latest research. The Lab’s primary source of funding comes from more than 60 corporate sponsors whose businesses range from electronics to entertainment, furniture to finance, and toys to telecommunications.

It’s a relationship that works for MIT, but I can imagine if you were working here it may be hard to get your head around. The sponsors get access to all the IP/research. As MIT sees it, the research may be considered too costly or too “far out” to be accommodated within a corporate environment. It is also an opportunity for corporations to bring their business challenges and concerns to the Lab and see the solutions the researchers present. When I talked to Rana about it, she mentioned she liked to see her work filter out commercially – so it becomes part of people’s everyday lives. It has also given her a lot of great opportunities to work with some amazing minds.

A few weeks ago, at my artist talk at the Banff New Media Institute Liminal Screen residency, I talked about the fact that I was working with MIT. A few of the artists at my talk had some issues with it. A basic distrust. A sense that what you are creating could be taken out of your hands and implemented in ways you may not have envisioned. I would like to spend more time talking to some of the artists here about what they think of it.

So all the representatives of major companies are here: BT, Microsoft, Intel, Nokia, Toyota, Motorola, Maya. I see many, many badges walking about with the names of big companies on them – all the software companies, etc. Mostly all men. Mostly all with their heads buried in laptops.

At the moment I am watching a technological magic show by Seth Raphael – an alumnus of Ros Picard’s Affective Computing group. This is followed by a talk from James Randi, known as The Amazing Randi – a magician and scientific skeptic best known as a challenger of paranormal claims and pseudoscience. He writes about the paranormal, skepticism, and the history of magic. In the afternoon there will be tours of the lab and also talks about different projects.

Originally I was going to present some of the work of the Chameleon Project.

post The propagation of emotions – first prototype.

April 2nd, 2008

Last night Evan Raskob, the wonderful programmer/artist I am working with on the Chameleon Project, sent me the first stage of the emotional contagion propagation piece.

stage041.jpg

mock-up of how the emotional contagion piece could work in the exhibition space.

propagation.tiff

screen capture of how the piece is working now – all the channels are assigned to one screen so we can work through how the algorithms are actually working or not working.

Viewed the first rough last night. It’s written using Processing. Fantastic, as it allows me to view it over the web – and when we get it fine-tuned a bit, it will allow me to show the project to the other collaborators over the web so we can all get a sense of what is going on.

Aesthetically and algorithmically it’s a first stage – sort of interesting. A beginning. The first thing that comes to me is that the movement of the faces makes me focus more on the changing shape of the background than on the facial expressions. Secondly, I find myself reading the personalities of the people via the clothes they are wearing. Thirdly, I am not getting any sense of emotional contagion – I am just seeing the faces emote in a way that I can’t understand. Maybe it’s too fast… There doesn’t seem to be a rhythm. How do we make it more explicit?

I think the way we have to work with it is by looking at the pacing and weighting of certain personalities. Some overdrive others. Also the weighting and pace of each emotional response. We are going to meet on Thursday at 11am in London to discuss the next stage of the project.

Evan says that closer analysis of how the people affect each other is necessary, because it’s so complicated. He knows they are spreading emotions to one another, but we’d have to play with this a bit to get some optimal values. Because there are so many variables involved, we’d have to look at how to add some controls for changing the percentages (each emotional state has a list of possible emotions it can go to, with percentages… there would be about 50 sliders in all if we simply added controls for them all).
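Those ~50 percentages can be pictured as a transition table. A hedged sketch of that idea – not Evan’s actual code, and the numbers are invented placeholders; each value is exactly one of the sliders he is talking about:

```python
import random

# Hypothetical transition table: each emotional state maps to the states
# it can move to, with probabilities. Every number here is a placeholder
# standing in for one of the ~50 sliders.
TRANSITIONS = {
    "neutral":   {"neutral": 0.70, "happiness": 0.15, "anger": 0.10, "sadness": 0.05},
    "happiness": {"happiness": 0.60, "neutral": 0.35, "anger": 0.03, "sadness": 0.02},
    "anger":     {"anger": 0.75, "sadness": 0.15, "neutral": 0.10},
    "sadness":   {"sadness": 0.80, "anger": 0.10, "neutral": 0.10},
}

def next_emotion(current, rng=random):
    """Pick the next emotional state by weighted chance from the table."""
    options = TRANSITIONS[current]
    names = list(options)
    weights = [options[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]
```

With the self-loop weights set this high, anger and sadness behave as near-absorbing states; lowering those particular sliders is the kind of control over the spread that Evan means.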

In Banff, at the Liminal Screen residency at the Banff New Media Institute, I was asked a question: what if you can’t map emotional contagion? Good question – what if we can’t? What if we are making it all up – maybe it is too complex. Maybe we have to relook at how I am thinking about these emotional algorithms. Anyway, all will be clearer on Thursday when we meet.

post meeting with Rana at the MIT Media Lab

April 2nd, 2008

Filed under: CHAMELEON PROJECT — Tina @ 1:50 am

I have been in Cambridge since Saturday. I am here as part of the Synapse Residency awarded by the Australian Network for Art and Technology, to meet and work with Rana El Kaliouby of the Affective Computing Group at the MIT Media Lab to discuss the project. I was here last month for five days. I gave a talk to the group, and also met a few of the researchers at MIT. This time, it’s another quick visit, mainly to talk about what’s been done on the Chameleon Project, what we think of it, what is next, etc.

Rana has developed the real-time emotion recognition technology that we will be using with the Chameleon Project. I showed her the imagery/video/photography developed over the Banff New Media Institute Liminal Screen Residency. I also showed her a few potential mock-ups of how it is going to be used. There are some good things there, a lot to work with, but interestingly, I already see the gaps in what I didn’t shoot… I guess one of the major things now is testing how it works with the emotion recognition technology – so what sort of expressions will the installation emote? When I showed Rana the collective footage of all the people falling into anger, she got really anxious and said she felt a physical arousal in her body. She wanted me to stop showing her the imagery. Good response – quite intense – but what sort of emotional expression does viewing the footage result in? She felt it in her body, but did anything change in her expression? It’s this that we need to know more about. These are the thoughts I walked away with after our meeting.

I need to get a copy of the software – get it working on Boot Camp / working with Processing.

We talked about how we are going to make this happen – hiring a PhD student to work on the project, making it work in darker light, making it work more spontaneously – but realistically, I won’t be testing/integrating the facial emotion recognition software until June or so. We need to see how it works now before I talk about what is next. She mentioned she has some students working on it now to get rid of a few bugs and make it easier for others to work with the software.
