
post Meeting with Evan Raskob

April 4th, 2008

We were stuck in jetlag land last night. Pablo finally slept about 2am and then we were awake till six am. Woke up for an 11am meeting with Evan Raskob, the programmer working on the Chameleon Project.

We worked through some of the algorithms, but really we need to sit down with Chris Frith, the social neuroscientist working on the project. We looked at the piece that explores the propagation of emotions. We slowed it down, played with scale and paced the propagation so we could see what was going on. It's a really complex set of probabilities that is hard to understand, especially in the midst of jetlag. It's looking much better than the first version, but it's pretty essential to sit down with both Chris and Evan for a few hours so we can all make sure we understand each other.

propagation031.jpg

We discussed what was next. We need to try a version of the propagation piece that captures only the facial expression and not the upper body. We need to experiment with how it looks spatially – can it work in a row? A cross? What is the best way for it to work? Can the figures just be facing each other, as if in dialogue?

We need to get the version of the two heads facing each other working in Processing.

kevinandthomas011.jpg

kevinandthomas021.jpg

kevinandthomas031.jpg

We also need to work with the two-faces version and investigate ways of scouring the web to find appropriate text to transpose onto the work. We will try the API of wefeelfine.org, for example.

Returned XML samples:

The API is free under the Creative Commons Attribution-NonCommercial-Sharealike license ( http://creativecommons.org/licenses/by-nc-sa/2.5/ ).

Sites that use this API must provide attribution by including the following html on their site:

Powered by: We Feel Fine.
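As a rough sketch of how we might pull text from it in Processing – the endpoint, parameter names and XML attribute names below are my assumptions from the public API documentation, so they would need checking against the real returned XML:

// Rough sketch, not project code: fetching recent feelings from the
// We Feel Fine API (URL parameters and attribute names are assumptions)
String url = "http://api.wefeelfine.org:8080/ShowFeelings" +
             "?display=xml&returnfields=feeling,sentence&limit=10";

void setup() {
  XML xml = loadXML(url);                      // Processing's built-in XML loader
  for (XML f : xml.getChildren("feeling")) {   // one element per returned feeling
    // print the emotion word and the sentence it was scraped from
    println(f.getString("feeling") + ": " + f.getString("sentence"));
  }
}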

Finally, we discussed the integration of touch. Evan mentioned someone who is pretty switched on about haptics. It would be great to meet him next week.

post The propagation of emotions – first prototype.

April 2nd, 2008

Last night Evan Raskob, the wonderful programmer/artist I am working with on the Chameleon Project, sent me the first stage of the emotional contagion propagation piece.

stage041.jpg

mock-up of how the emotional contagion piece could work in the exhibition space.

propagation.tiff

screen capture of how the piece is working now – all the screens are combined on one screen so we can work through how the algorithms are actually working, or not working.

Viewed the first rough last night. He's written it using Processing. Fantastic, as it allows me to view it over the web – and when we get it fine-tuned a bit, it will allow me to show the project to the other collaborators over the web so we can all get a sense of what is going on.

Aesthetically and algorithmically it's a first stage – sort of interesting. A beginning. The first thing that comes to me is that the movement of the faces makes me focus more on the changing shape of the background than on the facial expression. Secondly, I find myself reading the personalities of the people via the clothes they are wearing. Thirdly, I am not getting any sense of emotional contagion – I am just seeing the faces emote in a way that I can't understand. Maybe it's too fast… There doesn't seem to be a rhythm. How do we make it more explicit?

I think the way we have to work with it is to look at the pacing and weighting of certain personalities – some overdrive others – and also the weighting and pace of each emotional response. We are going to meet Thursday at 11am in London to discuss the next stage of the project.

Evan says that closer analysis of how the people affect each other is necessary, because it's so complicated. He knows they are spreading emotions to one another, but we'd have to play with this a bit to get some optimal values. Because there are so many variables involved, we'd have to look at how to add some controls for changing the percentages (each emotional state has a list of possible emotions it can go to, with percentages… there would be about 50 sliders in all if we simply added controls for them all).
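To make the tuning concrete, here is a minimal sketch in Processing of the structure Evan is describing, as I understand it – each emotion holds a row of percentages for the emotions it can move to, and the next display is a weighted random pick. The numbers are placeholders; they are exactly the values the sliders would control:

// Minimal sketch of the transition idea (placeholder numbers, not Evan's code):
String[] emotions = {"neutral", "happy", "surprised", "disgusted", "angry", "sad"};

// transition[i][j] = probability of moving from emotion i to emotion j;
// each row sums to 1.0 – a 6 x 6 grid of percentages, which is roughly
// where the "about 50 sliders" come from
float[][] transition = {
  {0.50, 0.20, 0.10, 0.05, 0.05, 0.10},  // from neutral
  {0.20, 0.60, 0.10, 0.02, 0.03, 0.05},  // from happy
  {0.25, 0.25, 0.30, 0.05, 0.05, 0.10},  // from surprised
  {0.30, 0.05, 0.10, 0.35, 0.15, 0.05},  // from disgusted
  {0.25, 0.05, 0.05, 0.10, 0.40, 0.15},  // from angry
  {0.25, 0.05, 0.05, 0.05, 0.10, 0.50}   // from sad
};

// weighted random choice of the next emotion, given the current one
int nextEmotion(int current) {
  float r = random(1.0);
  float cumulative = 0;
  for (int j = 0; j < transition[current].length; j++) {
    cumulative += transition[current][j];
    if (r < cumulative) return j;
  }
  return current;  // fallback in case of floating point rounding
}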

In Banff, at the Liminal Screen residency at the Banff New Media Institute, I was asked a question: what if you can't map emotional contagion? Good question – what if we can't? What if we are making it all up – maybe it is too complex. Maybe we have to look again at how I am thinking about these emotional algorithms. Anyway, all will be clear on Thursday when we meet.

post planning the project

March 8th, 2008

The last few days have flown. Sitting in my studio looking out at the amazing mountains of Banff. The residency has been great. The week has been full of artist presentations.

Working on the stages of Chameleon: mapping it out in a way that might be clear to all collaborators.

1. ethnographic style film to understand emotional contagion (Adam Kendon pub experiment)

stage01.jpg

2. live tool to understand emotional contagion and to isolate micro expressions and the most contagious gestures/expressions.

stage022.jpg

3. working on emotional algorithms

stage031.jpg

4. Looking at emotional algorithms with context (live feed from the web – for example, a live feed from wefeelfine.org)

stage03b1.jpg

5. Looking at emotional contagion and how it works in social groups – the propagation of emotions (making the algorithms of stage 3 more complex and networked)

looking at the propagation of emotions

6. Bringing another mode of interaction into stage 5? – for example touch. The act of touching is very personal, and touching also brings the audience member close to the screen, which makes it easier for the emotion expression software to read the participant (though less spontaneous).

stage4b.jpg

7. Integration of real time facial expression software. Using a computer with an embedded camera, test how it works with stages 5 and 6.

stage051.jpg

8. Introducing multi-participants – three emotion recognition cameras/three networked screens.

stage061.jpg

9. Idea for the final version: up to 20 networked screens / 8 real time facial recognition cameras.

stage07.jpg

post Reflections on my time at MIT

March 3rd, 2008

Looking back on the week there – what do I remember? It was a pretty stressful time – I got lost in a computer breakdown, which ate up about two days of my time. The tech team there was fantastic in helping me out, but it was a disappointment – I wanted to spend more time meeting people, and it was hard to do that when I was in the midst of losing so much work. I also needed to get all the technology, new computer, etc., sorted before I left for the Banff Centre for the Liminal Screen Residency. At Banff, everything is hours away.

img_0603.jpg

Rana El Kaliouby, at the Affective Computing group, MIT

Otherwise, my time was spent sitting on the couch of the group – just working away and talking to whoever passed me by. It felt like a pretty exciting place to be – lots of great ideas being developed. A lot of the implementations seem to suffer from a lack of aesthetic input – I think there is a need for more artists in the lab. The lab seemed a very supportive place to work; although everyone seemed to be lost in their own research, the lack of space meant people were always running into each other, talking, sharing ideas. The lab was messy, which felt good – prototypes falling all over the place, screens and computers everywhere. Groups were constantly meeting, and there seemed to be quite a bit of cross-pollination.

img_0604.jpg

img_0647.jpg

img_0648.jpg

img_0649.jpg

There were camera crews, journalists and company representatives constantly touring. Prototypes being demonstrated.

I will be coming to the lab a few times a year to work on the CHAMELEON project and it will be great to get to know the place a bit better.

post talk to the Affective Computing Group, MIT Media Lab

March 1st, 2008

Just gave my talk to the Media Lab – I thought I had 20 minutes but I had over an hour, so there was much room for discussion. It went well: showed past work, discussed the current project, discussed engagement, reading, mapping, building stages.

We moved on to talk about the next sponsor week happening in April and brainstormed ideas – they are looking at an event with magic and physiological responses – GSR and more. Talked about creating an ambient environment.

Briefly talked about trust and empathy. They are building a range of wireless GSR sensors, making them more naturalistic, more aesthetic, more robust. Could be great for the Feel_Series project.

interesting thoughts:
Talked about mirroring, but how would that happen? Walk into the room and the computer transposes your own face onto all the figures. I guess this is similar to Alexa's project but more complex.

Talked briefly about GSR and how the arousal reading isn't necessarily anxiousness – an arousal reading can also be happiness and enjoyment. Talked about how to represent this – interesting for one of my projects, Feel_Perspire. It currently uses a database of cloud footage which is triggered by the sweat of the participant. If the participant becomes aroused, storms start rolling in. It takes more of a negative approach to the arousal reading. Need to think further about the visualisation.
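The current mapping is roughly this shape – a toy sketch in Processing with invented names and numbers, not the project code:

// Toy sketch of the Feel_Perspire mapping (invented names and numbers):
// a normalised GSR arousal reading indexes into cloud footage ordered
// from calm skies to storm; a less negative treatment of arousal would
// need a second signal, such as valence, to branch on
int numClips = 10;  // cloud clips, ordered calm -> storm

int clipForArousal(float arousal) {  // arousal normalised to 0.0 .. 1.0
  return int(constrain(arousal, 0, 1) * (numClips - 1));
}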

Rosalind Picard, the head of Affective Computing, was interested in the prediction of emotional states – using the algorithms to generate emotional portraits that react to the human – inference. Talked about how it could work with autism. Roz's group have a very big interest in creating technology that works with, helps and investigates the autism spectrum.

The group are coming to London mid-May – will organise a meeting. Set up an emotional contagion event – six cameras, six participants – for the next meeting?

Also an outpouring of words based on these emotional algorithms – how does it synthesize into a conversation? Gather words describing emotions and assign them instead of images? Could be interesting.

Roz mentioned that there are two places in the group coming up – that's a pretty rare opportunity.

post organising stages.

March 1st, 2008

Had another meeting with Rana. Discussed the stages, which parts of the project she should supervise, and what we should achieve before I come back to the Media Lab on the 29th March.

stage one: Emotional Algorithms

1. working out emotional algorithms with Chris
2. reshoot database in Banff
3. Contact Evan to see if the project can work online so all collaborators can view it?

stage two: Look at the propagation of emotions

1. Look at more complex algorithms and how they spread between each other.
2. reshoot the still database in Banff (using still photography) (need a camera?)

stage three: Multi-modal interaction?
1. test touch as a further mode of interaction. (Touch will bring the audience closer to the portraits, making it easier for the cameras to read them. There could be major issues with the cameras reading the audience spontaneously, and touch could potentially resolve this problem.)
2. shoot a database of Pablo. As the audience touches the portrait of the baby, it becomes like a live Tamagotchi?

stage four: integration
1. Integration of Rana's technology into the project. We will do this using Max – her technology will spit out info every few frames.
2. Look at one camera – running on laptop – interacting with digital portraits.

thoughts:
how does sound work with the project? Talking with Banff about what and when to record there.

Should we continue to pursue a live emotional contagion tool to really understand how social groups begin to build trust – also what are the most emotionally contagious actions?

talking a lot about engagement – scale, pace, dynamic. What is the immediate response? How do you get someone to want to engage with the work?

how is the imagery treated? – does how we look at the face and read emotions change in different emotional states?

How much can you abstract the image to amplify the emotion?

An initial video database and still database will be shot in Banff.

img_0628.jpg

img_0626.jpg

img_0627.jpg

post MIT Media Lab

February 27th, 2008

We caught the train up to Boston and now our home is the lovely and dull Holiday Inn, Beacon Hill. Pablo has caught my flu, so we have been up all night with him. Found myself walking the pram through the corridors of the Holiday Inn last night in my pyjamas. I am exhausted. Matt went to the pharmacy to get some medication for his flu. He took the lightest one that was available and was knocked out. Scary stuff. We have been filled up with bad fast food and I am already missing access to a kitchen and some fresh food.

img_0652.jpg

Matt and Pablo hanging out at the Holiday Inn.

img_0651.jpg

Pablo getting ready to face the cold

So, feeling a little bit thick in the head today, I had my first meeting at the MIT Media Lab. Last year in London I met with Rosalind Picard, head of Affective Computing, when she was talking at the Institute of Cognitive Neuroscience. She invited me to be a visiting artist at the Media Lab. Also, over the last couple of years I began discussing ideas with Rana El Kaliouby, one of Picard's senior research fellows, which led to my current project, CHAMELEON. The Media Lab seems like a chaotic, creative, busy, congested and inspirational place. All the groups seem to be together on the 4th floor – prototypes, computers, displays, mock-ups are scattered over desks, floors, ceilings. A lot of good thoughts go on there.

I met with Roz and Rana today. Rana is developing real time facial recognition software and is a key partner in the CHAMELEON project. She began this investigation during her PhD at Cambridge. She worked with Alexa Wright and Alf Linney on the “Alter Ego” project in 2003. She has since further refined the software, and is now interested in its uses with people on the autism spectrum.

http://www.newscientist.com/article/mg19025456.500-device-warns-you-if-youre-boring-or-irritating.html

http://www.wired.com/medtech/health/news/2006/04/70655

img_0601.jpg

Had a look at how the technology is working – was great to see it. It is quite subtle. The aim of this project is to push the technology's uses – more naturalistic interaction and spontaneous movement, working in darker lighting, working with multiple participants. She showed me a galvanic skin conductance sensor they are developing – a wireless version – could be great for the Feel_Series. We talked about methodologies, common language, visual ideas, timelines, and how we see ourselves working together over the next few years. Rana is actually based in Cairo, but flies into MIT once a month. We talked about how to build the stages of the project. What are the best tools to build at this stage? What are the best areas to focus on?

At the moment we have the project working with no interaction. The next stage is integration. We need to get the non-interactive version (which is simulated emotions built on a Mac using Max/MSP) working with her software, which will shoot out information every 5 seconds or so to Max/MSP. I have bought another laptop, which I will run as a PC, and will get the two machines speaking to each other. So stage 2: working with Max/MSP or C++. These choices have to be made pretty soon. We talked about the possibility of hiring an undergraduate to work on the project over the summer.
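One way the two machines could speak to each other – an assumption of mine, not the decided pipeline – is OSC over the network: the recognition software on the PC sends a message every few seconds, and the Mac listens, whether in Max/MSP (its udpreceive object accepts OSC) or in a Processing sketch using the oscP5 library. The address pattern and payload here are invented:

import oscP5.*;
import netP5.*;

OscP5 osc;

void setup() {
  osc = new OscP5(this, 12000);  // listen for OSC on port 12000
}

// called by oscP5 whenever a message arrives
void oscEvent(OscMessage msg) {
  // hypothetical address and payload: an emotion label plus a confidence
  if (msg.checkAddrPattern("/chameleon/emotion")) {
    String emotion = msg.get(0).stringValue();
    float confidence = msg.get(1).floatValue();
    println(emotion + " (" + confidence + ")");
  }
}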

I am meeting up with more of the group tonight for dinner. Will be interesting to see what everyone is working on. I will be giving a talk on Friday.

img_0605.jpg

img_0607.jpg

post Emotional Contagion at the Dana Center, Science Museum

February 15th, 2008

Dana Event over – went well – was sold out. It was frustrating as I had Matt and Pablo there (I wanted Pablo to be part of the experiment, and they wouldn't let him in because he was underage and alcohol was being served). Poor Matt. I had primed him to document the evening with video and cameras, and he ended up sitting upstairs with Pablo, not taking part in the evening.

A lot of interest from the media. I started by just showing the current version of the prototype. It's pretty simple and needs more work, but it has a nice energy about it. Need to reshoot the database.

img_0341.jpg

Chameleon Stage one (still of projected images – interaction using emotional algorithms)

I gave a ten minute talk about past work, why I am here, what led to the project, where the project is at and where it is going. Neil talked about emotional contagion, studies in empathy and mimicry, and the biological mechanisms. Harry then talked for ten minutes in an engaging, involving way – he talked a lot about Big Brother (he was the medical adviser on “On the Couch with Big Brother”) and also about how to spot deceit and lying – how to pick up on the micro expressions. All the talks were engaging, covering emotional contagion and social cognition from different viewpoints. Good mix.

img_0344.jpg

Harry Witchell, Neil Harrison (with the yellow badges on)

img_0346.jpg

After the talks we set up the live experiment, which was pretty simple. We had two cameras, each facing one of two members of the audience who sat facing each other. We split the group into members who knew each other and members who didn't. Evan Raskob was upstairs controlling the event – setting it up, making sure it all worked. Evan also directed the conversation – some were to talk about their mother, some were to talk about the weather. It would have been nice to read more into the imagery – I think it's sort of about the pacing – two people didn't really tell us enough about emotional contagion. I need to make a decision on whether we create the piece that I originally wanted to – working with six webcams, slowing down, output to video. It's a lot more investment of time, and also means buying cameras, cables, hubs, etc.

img_0350.jpg

img_0345.jpg

Evan Raskob

Interestingly, I found what was happening upstairs much more interesting than downstairs. The set-up… the awkward nature of it, people desperately trying to make polite conversation. If we did it again, I would get a few stooges in there – to either entice them, or to act bored, or whatever. It would be good to show the piece again. Maybe in Banff.

img_0353.jpg

It was a good beginning. Harry Witchell was great – talking about how people synchronised and how people didn't, who was driving the conversations, who was following. Were they interested? The subtle bits of body language. It didn't really tell us much about micro expressions, but that probably has much to do with the quality of the image – to keep the stress off the computer, we had to make it pretty low resolution.

Helen from SCAN cancelled at the last minute – she was going to speak about science and art collaborations, etc. It was disappointing that she wasn't there, as I was the only one from the collaborative group present. Hugo is having a baby, Chris is in Denmark, Rana in Egypt. But we pulled it off, and finished the night saying that we will be back next season to show the next stage of the prototype. I will start organising timings, as it would be great if everyone was close by this time.

There seemed to be great interest in the idea of emotional contagion. A lot of questions. I talked to quite a few psychologists, psychiatrists and neuroscientists in the audience. Apparently it was in the Metro, which is a shame, because I missed it. Now have to chase it up. Will ring them today.

post Evan’s photos of the Emotional Contagion Event at the Dana Center

February 15th, 2008

img_1710.jpg
Harry, Neil and myself, getting ready to talk, Dana Event

img_1702.jpg

Pablo taking part in the live event

img_1697.jpg

Evan setting up upstairs.

img_1682.jpg

The first stage of Chameleon

post beginning emotional algorithms

February 13th, 2008

This is how we are starting it off: six basic emotions, and two people facing each other.

Emotions are:

(“neutral”, “happy”, “surprised”, “disgusted”, “angry”, “sad”)

What we need to know is the most probable responses to each of these.

For example

If I was Happy – main response, secondary response, tertiary response of other?

If I was Sad – main response, secondary response, tertiary response of other?

If I was neutral – main response, secondary response, tertiary response of other?

If I was disgusted – main response, secondary response, tertiary response of other?

If I was surprised – main response, secondary response, tertiary response of other?

If I was angry – main response, secondary response, tertiary response of other?

And after I was angry, for example, what would be my own follow-up emotion? If I displayed anger, what would my most probable next display of emotion be?
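These questions map onto a small table. Here is a minimal sketch in Processing of the structure the answers would fill in – every entry below is a placeholder of mine, not an answer; the real values are exactly what we need from Chris:

// Placeholder table (my invented values, not the project's algorithm):
String[] emotions = {"neutral", "happy", "surprised", "disgusted", "angry", "sad"};

// otherResponse[i] holds indices into emotions[]: the other person's
// main, secondary and tertiary response when I display emotions[i]
int[][] otherResponse = {
  {0, 1, 5},  // to neutral:   neutral, happy, sad
  {1, 0, 2},  // to happy:     happy, neutral, surprised
  {2, 1, 0},  // to surprised: surprised, happy, neutral
  {3, 2, 0},  // to disgusted: disgusted, surprised, neutral
  {4, 2, 5},  // to angry:     angry, surprised, sad
  {5, 0, 4}   // to sad:       sad, neutral, angry
};

// selfFollowUp[i] is my own most probable next display after emotions[i];
// here, for example, anger decays into sadness
int[] selfFollowUp = {0, 1, 0, 0, 5, 5};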
