
organising stages

March 1st, 2008

Had another meeting with Rana. Discussed the stages of the project, which parts she should supervise, and what we should achieve before I come back to the Media Lab on the 29th of March.

stage one: Emotional Algorithms

1. Working out emotional algorithms with Chris.
2. Reshoot the database in Banff.
3. Contact Evan to see if the project can work online so all collaborators can view it.

stage two: Look at the propagation of emotions

1. Look at more complex algorithms and how emotions spread between each other.
2. Reshoot the database in Banff using still photography (need a camera?).

stage three: Multi-modal interaction?
1. Test touch as a further mode of interaction. (Touch will bring the audience closer to the portraits, making it easier for the cameras to read them. There could be major issues with the cameras reading the audience spontaneously, and touch could potentially resolve this problem.)
2. Shoot a database of Pablo. As the audience touches the portrait of the baby, it becomes like a live Tamagotchi?

stage four: integration
1. Integration of Rana’s technology into the project. We will do this using Max – her technology will output emotion data every few frames.
2. Look at one camera, running on a laptop, interacting with the digital portraits.
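The per-frame handoff could be sketched roughly as below. This is a minimal Python sketch only; the actual pipeline runs in Max, and the `"<frame> <emotion> <confidence>"` message format is my assumption, not the real output of Rana’s software.

```python
# Hypothetical consumer for per-frame emotion messages.
# Assumed message format: "<frame> <emotion> <confidence>".

def parse_emotion_message(line):
    """Parse one hypothetical 'frame emotion confidence' message."""
    frame, emotion, confidence = line.split()
    return int(frame), emotion, float(confidence)

def dominant_emotion(messages):
    """Return the emotion with the highest mean confidence
    over a run of frames."""
    totals, counts = {}, {}
    for line in messages:
        _, emotion, conf = parse_emotion_message(line)
        totals[emotion] = totals.get(emotion, 0.0) + conf
        counts[emotion] = counts.get(emotion, 0) + 1
    return max(totals, key=lambda e: totals[e] / counts[e])

if __name__ == "__main__":
    stream = [
        "1 happy 0.62",
        "2 happy 0.71",
        "3 surprised 0.40",
    ]
    print(dominant_emotion(stream))  # happy
```

Averaging over a run of frames rather than reacting to each frame would smooth out the jitter you get when a classifier flickers between labels.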

thoughts:
How does sound work with the project? Talking with Banff about what and when to record there.

Should we continue to pursue a live emotional contagion tool to really understand how social groups begin to build trust – also what are the most emotionally contagious actions?

Talking a lot about engagement – scale, pace, dynamic. What is the immediate response? How do you get someone to want to engage with the work?

How is the imagery treated? How we look at the face and read emotions changes in different emotional states.

How much can you abstract the image to amplify the emotion?

An initial video database and still database will be shot in Banff.

img_0628.jpg

img_0626.jpg

img_0627.jpg

Stanislavsky technique

February 23rd, 2008

How do you embody an emotion when a camera is pointing directly at you – without artifice, so that the feeling is real? I have tried a few techniques: shooting over extremely long sessions, shooting after extreme exhaustion, and working with clinical hypnotist and psychologist David Oakley, using hypnosis. The visuals for this piece will most likely use actors acting out different emotional states.

I have started talking to Caravanserai – http://www.caravanseraiproductions.com/

Filip Aiello, producer for the Caravanserai Acting Studio, contacted Sarah Blakemore’s research group at the Institute of Cognitive Neuroscience. They are starting a production about the human brain and how it functions in regard to emotions and instinct, and wanted to make contact with some neuroscientists to talk about mirror neurons, etc. We are going to meet in London to discuss whether the group should be involved with the project – looking at method acting, the Stanislavsky technique.

reading emotions

February 13th, 2008

In different emotional states, we direct our gaze to different parts of the face first. Eye tracking shows that for some emotions we look to the eyebrow/brow area first, while for happiness we look to the mouth. We could use this to our advantage and abstract the imagery in real time. When healthy people look at faces, they spend a lot of time looking at the eyes and the mouth, as shown in the figure below. People with damage to the amygdala, with agenesis of the corpus callosum, and with autism all look at faces abnormally.

emotiontracking.jpg

using eye tracking to work out how we view a face to make sense of the expression

studies02.jpg

Reworking Paul Ekman’s FACS database. Bringing particular features of the faces forward – does the emotion still get read? Is it stronger? How much can you abstract the image before the emotional expression gets lost?
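One way to drive the abstraction would be to pick which facial region to bring forward from the emotion itself, following the eye-tracking observation above (brow area for some emotions, mouth for happiness). A toy sketch – the emotion-to-region mapping here is illustrative, not taken from the actual eye-tracking data:

```python
# Illustrative mapping from emotion to the facial region viewers
# tend to fixate first. Values are assumptions for the sketch,
# not measured data.
GAZE_REGION = {
    "happiness": "mouth",
    "anger": "brow",
    "sadness": "brow",
    "fear": "eyes",
}

def region_to_amplify(emotion, default="eyes"):
    """Pick the facial region to keep sharp while the rest
    of the portrait is abstracted."""
    return GAZE_REGION.get(emotion.lower(), default)

if __name__ == "__main__":
    print(region_to_amplify("Happiness"))  # mouth
    print(region_to_amplify("disgust"))    # eyes (fallback)
```

In a real patch the returned region would select a mask, so the abstraction (blur, posterise, etc.) spares the area that carries the expression.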

how to assess the smaller micro expressions?

February 12th, 2008

Had a chat with physiologist Harry Witchell about the project. To analyse emotional expression he is using “Image-Pro Plus” image analysis software. The process for Harry is quite analogue – he applies black dots to the face and then analyses the footage. He mentioned an interesting group at Carnegie Mellon looking at facial emotion recognition. Harry believes his knowledge of facial expressions is more implicit – he says he can tell you exactly which muscles to move to achieve certain facial expressions – but he believes the people with the deepest understanding of the subtleties of expression are animators. A lot of the technology, though complex, still cannot make sense of the smaller micro expressions that make up different emotional reactions. We constantly need to keep in mind that we are working with fluid human faces, not static computer models – the fleeting nature of emotions.
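The core of the black-dot approach can be sketched in a few lines: track each dot’s position frame to frame and measure how far it moved. The dot names and coordinates below are made up for illustration; the real analysis happens inside Image-Pro Plus.

```python
# Toy version of black-dot motion analysis: per-dot displacement
# between two consecutive frames. Dot names/positions are invented.
import math

def dot_displacements(frame_a, frame_b):
    """Return each dot's displacement in pixels between two frames.
    frame_a / frame_b map dot name -> (x, y)."""
    return {
        name: math.hypot(frame_b[name][0] - x, frame_b[name][1] - y)
        for name, (x, y) in frame_a.items()
        if name in frame_b
    }

if __name__ == "__main__":
    a = {"brow_left": (100, 80), "mouth_left": (90, 200)}
    b = {"brow_left": (100, 77), "mouth_left": (94, 203)}
    print(dot_displacements(a, b))  # {'brow_left': 3.0, 'mouth_left': 5.0}
```

Even this crude measure makes Harry’s point concrete: a micro expression might move a dot only a pixel or two for a few frames, which is easy for coarse analysis to miss.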

ferment01.jpg

ferment06.jpg

still from FERMENT, 3 minute video, 2006

Arts SA
Australian Government – The Visual Arts Strategy