
evaluating the emotional algorithms

July 18th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 6:38 pm

We are working with Nadia Berthouze and Matt Iacibini from University College London’s Human Computer Interaction Centre to evaluate the emotional algorithms hypothesized by Chris Frith. With Chris Frith, I am trying to understand how people socialise with each other. If I express one emotion, what would the most likely expression in response be? What is the state where people empathize?

Chris Frith discusses the emotional algorithms:

“We know that people tend to covertly (and unconsciously) mirror the actions of others, and this also applies to facial expressions. Observing a happy expression elicits happiness, fear elicits fear and disgust elicits disgust. (I can give references for all these claims if you need them.)
However, this mirroring is not simply a copying of the motor behaviour we observe. For example, we mirror the eye gaze of others, but what we copy is not the action, but the goal. That is, we look at the same place as the person we are observing. We want to know what they are looking at. This will usually involve a very different eye movement, since we will have a different line of sight. We therefore need to consider the function of the behaviour of the observer. Seeing a fearful face is a sign that there is something to be afraid of, so that our fearful response is appropriate. Unless, of course, the person is afraid of us, in which case a different response would be appropriate. An example of the function of these exchanges of expression is the case of embarrassment to defuse anger.
A person commits a social faux pas. This elicits an expression of surprise and then anger. The person then displays embarrassment. This elicits compassion (for the distress of the person) in the observer. This expression of compassion indicates that the person is forgiven and everyone is happy again.
I used these ideas to make a best guess about the parameters for the emotional algorithms”.
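To make these pairings concrete, here is a minimal sketch (in Python) of how they might be written down as a lookup table. Everything in it is read off the quote above; the dictionary structure, the function name and the neutral default are illustrative assumptions, not the actual parameters Chris chose.

# Chris Frith's hypothesized expression pairings, read off the quote
# above, written as a lookup table. Illustrative only: not the
# project's actual code or parameters.

# Straightforward mirroring: observing the expression elicits the same one.
MIRRORED = {
    "happy": "happy",
    "fear": "fear",
    "disgust": "disgust",
}

# Functional (non-mirroring) exchanges, from the faux pas example:
# faux pas -> anger -> embarrassment -> compassion -> happy.
FUNCTIONAL = {
    "faux pas": "anger",           # preceded by a flash of surprise
    "anger": "embarrassment",      # embarrassment defuses the anger
    "embarrassment": "compassion", # compassion for the person's distress
    "compassion": "happy",         # the person is forgiven
}

def likely_response(observed):
    """Best guess at the expression elicited by an observed expression."""
    return MIRRORED.get(observed) or FUNCTIONAL.get(observed, "neutral")

For example, likely_response("anger") returns "embarrassment", the defusing move described in the quote.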

conversation about the emotional algorithms with Matt, Nadia and myself

…There is a lot of work on the so-called mirror system, where people can show that if you see a fearful face, your reaction is fear; if you see a disgusted face, your reaction is disgust; and that, I think, is about as far as it goes. The extreme mirror people say you just imitate what you see… but the more sensible people say it’s all a matter of evolutionary advantage. So, if you see a fearful face you should probably be afraid, so there is an advantage. But if you see an angry face, it depends on where the anger is directed. And there is certainly the idea that you should look embarrassed to try and defuse the anger.

evaluation meeting with Chris Frith, Matt Iacibini, Nadia Berthouze and Tina Gonsalves

We have begun designing an evaluation experiment, which Matt will run to see whether the algorithms that Chris hypothesizes are ‘true’. This will become Matt’s master’s thesis.

First series of experiments – Wizard of Oz

Protocol

1. Participant walks into room

2. Participant is briefed and consent form is signed

3. Participant is seated in front of webcam

4. The rater takes a position in front of a keyboard, somewhere out of the participant’s sight

5. Video of a person expressing emotion is started. This beginning clip is picked at random. 

6. The computer then selects another video of one actor (ranging from 2–30 seconds) acting out one of six emotional states: neutral, sad, angry, disgust, happy, surprised. There are five videos for each emotion (30 videos in total). Different videos are shown according to the emotional algorithm developed by Chris Frith and the input from the rater, as described in the “Algorithm as it is now” section below.

7. The participant is expected to respond to the videos with an emotional facial expression.

8. The rater will see the participant’s facial emotion response through the live webcam feed, but will not see the video being shown to the participant.

Chris says: I think the rater needs to be given a signal each time a new video clip begins, so that she can give a rating for the response of the participant to each clip. It will be useful for her to know when a change in expression is expected, even though she doesn’t know what is eliciting it. The rater will need to get an idea of roughly how long to wait after each signal before making her rating. This depends upon how quickly participants typically change their expression.

9. The rater will immediately press one of six ‘emotion’ keys (neutral, sad, angry, disgust, happy, surprised) on the keyboard, corresponding to the facial emotion expression that the participant is showing (a sketch of how these key presses might be captured appears after this list).

10. After 10 minutes the experiment is stopped.

11. Five of the participants will get a 15-minute interview.

12. All participants are asked the multiple-choice questions (5 minutes).
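As a sketch of how the rater’s six buttons in step 9 might be captured, the fragment below logs timestamped ratings. The key bindings, the read_key callable and the use of plain terminal input are all assumptions for illustration; the post does not describe the actual experiment software.

import time

# Hypothetical key bindings for the six emotion buttons (step 9).
KEY_TO_EMOTION = {
    "n": "neutral",
    "s": "sad",
    "a": "angry",
    "d": "disgust",
    "h": "happy",
    "u": "surprised",
}

def record_ratings(read_key):
    """Collect (timestamp, emotion) pairs from the rater.

    `read_key` is any callable returning the next key pressed
    (e.g. a curses or pygame event loop in the real software);
    it should return "q" to stop.
    """
    ratings = []
    while True:
        key = read_key()
        if key == "q":
            break
        if key in KEY_TO_EMOTION:
            # Timestamp each rating so it can later be synced
            # with the video shown to the participant.
            ratings.append((time.time(), KEY_TO_EMOTION[key]))
    return ratings

# Usage with plain terminal input (one key per line, 'q' to finish):
if __name__ == "__main__":
    print(record_ratings(lambda: input("key> ").strip().lower()))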

Algorithm as it is now:

Chris’s algorithm is called to decide which emotion to play next to the participant. If the participant doesn’t react and all the expressions of one emotion have been played …
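Since the note above trails off, the following Python sketch is only a guess at the algorithm’s shape: a transition table maps the rater’s last rating to the emotion to show next, a clip is drawn from the five videos for that emotion, and some fallback is needed once an emotion’s clips run out. The transition values, the clip filenames and the random fallback are all assumptions, not Chris’s actual parameters.

import random

EMOTIONS = ["neutral", "sad", "angry", "disgust", "happy", "surprised"]

# Hypothetical transition table: rater's last rating -> emotion to show
# next. The real parameters were set by Chris Frith and are not given
# in the post; these values are placeholders.
NEXT_EMOTION = {
    "neutral": "happy",
    "sad": "sad",
    "angry": "surprised",
    "disgust": "disgust",
    "happy": "happy",
    "surprised": "neutral",
}

# Five clips per emotion (30 clips), as in step 6 of the protocol.
unplayed = {emotion: [f"{emotion}_{i}.mov" for i in range(1, 6)]
            for emotion in EMOTIONS}

def next_clip(last_rating):
    """Pick the next clip from the hypothesized transition table."""
    emotion = NEXT_EMOTION.get(last_rating, "neutral")
    if not unplayed[emotion]:
        remaining = [e for e in EMOTIONS if unplayed[e]]
        if not remaining:
            raise RuntimeError("all 30 clips have been played")
        # Fallback once an emotion's clips are exhausted; the post's
        # note trails off here, so random choice is purely an assumption.
        emotion = random.choice(remaining)
    return unplayed[emotion].pop(random.randrange(len(unplayed[emotion])))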

Recorded data

Video of the participant, the video shown to the participant (including indices of the exact emotion-expression frames), the emotions selected by the rater and the reactions of the algorithm, all in sync.
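One way to keep those streams in sync is to log every event against a single session clock. A minimal sketch, assuming a simple CSV log whose field names are invented for illustration:

import csv
import time

START = time.time()

def log_event(writer, stream, detail):
    """Write one event (video shown, rater keypress, algorithm decision)
    against a shared session clock so the streams can be aligned later."""
    writer.writerow({"t": round(time.time() - START, 3),
                     "stream": stream,
                     "detail": detail})

with open("session_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["t", "stream", "detail"])
    writer.writeheader()
    log_event(writer, "video", "happy_3.mov started")
    log_event(writer, "rater", "happy")
    log_event(writer, "algorithm", "next=happy")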

