Post-meeting in London with Ros Picard, Rana el Kaliouby, Chris Frith, Hugo Critchley, Helen Sloan

May 26th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 5:43 pm

I organised a dinner at my place last week as Rana el Kaliouby and Ros Picard from the Affective Computing Group at the MIT Media Lab were in London. Ros and Rana are developing the real-time facial emotion expression software that I am using in my project, Chameleon, a video installation exploring emotional contagion. Chris Frith is a social neuroscientist from the Wellcome Department of Neuroimaging who is helping develop the emotional algorithms used to drive the video. Hugo Critchley from the Brighton and Sussex Medical School is advising on the visual changes that will happen to the work dependent on the emotional state of the audience. Helen Sloan is curating the work.

Intro Summary of Chameleon Meeting 16/5/08

Ros Picard showed her system for measuring physiological arousal through a GSR (galvanic skin response) sensor. The potential uses for this were discussed, as well as trends in Ros's own responses in relation to those of colleagues. Ros brought a few versions of the sensor over, but they haven't coped well with all the travel.

Rana el Kaliouby demonstrated her system, which responds in real time to facial expression. The software analyses the expression and attempts to 'read the mind' behind it. This is the older version of the software – she is developing it so that other researchers can modify it for their own use. The software runs on PC.

Tina Gonsalves showed the work she had produced at Banff and Phase 1 of the project. The strength of the images and the response of the actors was noted, and many of the scientists present commented that these images were much better than those in common usage in their research. Hugo Critchley commented that this was one of the reasons he began to work with Tina.

Was there some application for the images in scientific research, and how would the interaction work with this?

Chris Frith talked of research around suggestion, e.g. gesture carrying a different meaning from the signifier of facial expression. Should one facial expression also signify more than one emotional state at times?

Should this be addressed?

From these demonstrations, a number of ways of approaching interactivity in relation to the images of Chameleon were raised:

1) Should there be different algorithms for each character?

2) Should there be an intelligent/adaptive component in the interactive system?

3) Should it just be facial expression that is tracked? Hugo's work could be incorporated, for instance.

4) What expressions should be videoed? Currently we have: angry, neutral, surprised, happy, sad, disgust.
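None of these questions were resolved at the meeting, but the kind of mapping under discussion in question 1 – a different response algorithm for each character – might be sketched as follows. This is purely a hypothetical illustration: the character names and response tables are invented, not the project's actual algorithms.

```python
# Hypothetical sketch of per-character interactivity: each character has
# its own lookup table mapping a detected audience expression to the
# expression its video should shift toward. All tables are invented for
# illustration only.

EXPRESSIONS = ["angry", "neutral", "surprised", "happy", "sad", "disgust"]

# One response table per character (question 1: different algorithms
# per character). An empathic character mirrors distress; a contrary
# character escalates instead.
CHARACTER_RESPONSES = {
    "empathic": {"angry": "sad", "happy": "happy", "sad": "sad"},
    "contrary": {"angry": "angry", "happy": "surprised", "sad": "neutral"},
}

def respond(character: str, detected: str, default: str = "neutral") -> str:
    """Return the expression the character's video should move toward."""
    if detected not in EXPRESSIONS:
        raise ValueError(f"unknown expression: {detected}")
    # Fall back to a neutral state for unmapped expressions.
    return CHARACTER_RESPONSES.get(character, {}).get(detected, default)
```

An adaptive component (question 2) could then be layered on top, e.g. by updating these tables from the audience's history rather than keeping them fixed.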

Tina wanted to know if the work should stay screen-based or not, and referenced the work of Tony Oursler. In a subsequent discussion, Tina and Helen talked about the possibility of three-dimensional representations of shapes that relate to particular moods, made using a 3D printer. These could be used as projection surfaces.

It was also mentioned that there was a danger of the 'uncanny valley' effect if the work tries to be too representational.

It was suggested that the work could be used for scientific research if visitors gave permission in the gallery space. Scientists may also want to hold workshops or labs in the gallery to gather research data.

