
Developing mind reading technology

July 18th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 6:56 pm

With Youssef Kashid and Rana El Kaliouby based at the MIT Media Lab, we need to develop the mind reading technology for the Chameleon project. The mind reading technology will analyse the facial expressions of the audience and use them to drive the ‘emotional video engine’ I am creating with the neuroscientist Chris Frith.

We are throwing many questions back and forth to work out the best way we should move ahead with the development. Generally, these emails are cc’d to the collaborative group for feedback.

1. We are working out how much processing power the application will need, as we are now at the stage of buying computers to run the project. I want the mind reading tech and the video engine to work on the same computer, with the video engine triggering three channels of HD footage.
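
As a rough sketch of how the two parts might share one machine (all function names here are made up – this is not the actual MindReader or video engine code), the analysis loop can publish its latest reading into a tiny queue so the video engine never waits on the camera:

    import queue
    import random
    import threading
    import time

    EMOTIONS = ['neutral', 'sad', 'angry', 'disgust', 'happy', 'surprised']
    latest = queue.Queue(maxsize=1)  # only the freshest reading matters

    def analyse_frame():
        # Stand-in for the mind reading tech's per-frame analysis.
        return random.choice(EMOTIONS)

    def mind_reading_loop():
        while True:
            reading = analyse_frame()
            try:
                latest.get_nowait()  # drop any stale reading
            except queue.Empty:
                pass
            latest.put(reading)
            time.sleep(1 / 15)  # sample 15 times a second

    def video_engine_loop():
        while True:
            emotion = latest.get()  # blocks until a fresh reading arrives
            for channel in range(3):  # the three HD channels
                print(f'channel {channel}: play a {emotion} clip')

    threading.Thread(target=mind_reading_loop, daemon=True).start()
    video_engine_loop()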

2. We need to work out pacing – at what point should the camera tell the computer about the expression? Chris Frith says that the most ‘potent’ expression will happen about two seconds after a video is shown. Chris says: "If we believe in Ekman’s micro-expressions, which may affect us even if we are not aware of them, then we need to sample fairly frequently, e.g. 15 times a second. However, if we want a more stable expression for the clips then we might want to get rid of these very short changes and sample less frequently."
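
To make Chris’s two options concrete, here is a minimal sketch (the read_expression hook is hypothetical – it assumes the mind reading tech returns one emotion label per frame) that samples 15 times a second but only reports the expression that dominates the last two seconds, so brief micro-expression spikes don’t flip the video engine:

    import time
    from collections import Counter, deque

    SAMPLE_HZ = 15                 # 'fairly frequently'
    WINDOW = int(SAMPLE_HZ * 2)    # two seconds' worth of samples

    recent = deque(maxlen=WINDOW)  # rolling buffer of per-frame labels

    def read_expression():
        # Hypothetical hook into the mind reading tech: returns one
        # emotion label (e.g. 'happy') for the current camera frame.
        return 'neutral'

    def stable_expression():
        # The most frequent label over the last two seconds: a
        # micro-expression lasting a few frames is simply outvoted.
        return Counter(recent).most_common(1)[0][0] if recent else 'neutral'

    while True:
        recent.append(read_expression())
        print(stable_expression())
        time.sleep(1.0 / SAMPLE_HZ)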

3. The mind reading technology runs on a PC – so how will this work? We would have to use Boot Camp to run it. How stable is this?

4. We need to work with the following basic emotions - neutral, sad, angry, disgust, happy, surprised. We are using the videos shot in Banff to train the program to recognise these emotions.

5. For the interaction design I need to know the distance at which faces can be read. How far can you be from the camera? How much can you look to the side and have it still read the expression? What happens when it doesn’t recognise an emotion? How long does it take to latch on to a face and process the information? Youssef says: I’m using a simple webcam. I think it can do a lot better under better lighting conditions. I did a measurement and detected my face from a distance of 2 metres.

I remember it being a bit better, though. But there’s another thing: if I start close up and move backwards, it is able to follow me even beyond 3 metres. That’s how the tracking algorithm is built. The tracker first tries to find your face in the area it last saw you; if it can’t find a face there, it does a sweep of its whole view from scratch. This makes the tracking process faster (there’s a sketch of this two-step search after this answer).

As for the turn angle, it can follow you for about 60 degrees.

The range of the turn angle and the distance improve with a better camera and better lighting; a plain coloured background is very effective, as is avoiding very fast motion (though that’s not a real issue – if it loses the person, it will do a sweep to find them again).
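
For anyone curious, the two-step search Youssef describes can be reconstructed roughly with OpenCV’s stock face detector (this is not the MindReader tracker, just a sketch of the same idea):

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    last = None  # (x, y, w, h) of the face in the previous frame

    def find_face(gray):
        global last
        if last is not None:
            # Step 1: look only around where the face was last seen.
            x, y, w, h = last
            m = w  # search margin of one face-width in every direction
            x0, y0 = max(x - m, 0), max(y - m, 0)
            roi = gray[y0:y + h + m, x0:x + w + m]
            faces = cascade.detectMultiScale(roi, 1.1, 5)
            if len(faces):
                fx, fy, fw, fh = faces[0]
                last = (x0 + fx, y0 + fy, fw, fh)
                return last
        # Step 2: the face was lost, so sweep the whole view from scratch.
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        last = tuple(faces[0]) if len(faces) else None
        return last

    cap = cv2.VideoCapture(0)  # the simple webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        print(find_face(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))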

6. How dark can it be for the camera to still pick up the expressions? Youssef says: "I was first thinking of a spotlight on the area of the face, but an infrared LED might be better – it will let the camera see more without disturbing the lighting of the room. Night vision might pick up more colour patterns than we want. I don’t think it will matter much what the person is wearing, unless they’re wearing something with a picture of a face on it – then the tracker might start picking this up instead of the person’s face."

7. Do you imagine that it will give out the subtleties of the emotions? For example, if we rate disgust at 1–5 (5 being the strongest), can the application tell us whether it’s a strong disgust (5) or a weak disgust (1), so that a stronger or weaker emotional video response can be chosen? I have tended to create five video tracks for each emotion, ranging from strong to weak. Youssef says: "Yes, that’s totally possible and easy to implement. MindReader gives you a probability for each gesture, ranging from 0 to 1 inclusive. We could quantize these into intensity stages."
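
A minimal sketch of that quantization (the clip file naming here is made up):

    def intensity(probability, levels=5):
        # Map MindReader's 0..1 probability onto a 1..5 band.
        return min(int(probability * levels) + 1, levels)

    def pick_clip(emotion, probability):
        # Choose one of the five video tracks, weak (1) to strong (5).
        return f'{emotion}_{intensity(probability)}.mov'  # hypothetical names

    print(pick_clip('disgust', 0.92))  # -> disgust_5.mov (strong disgust)
    print(pick_clip('disgust', 0.10))  # -> disgust_1.mov (weak disgust)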

