Jan 09 update

January 30th, 2009

Filed under: CHAMELEON PROJECT — Tina @ 5:09 am

It's been a year since I started the ANAT residency. I am still working on the production of the Chameleon Project, which has been primarily funded by the Wellcome Trust in the UK, with additional support from the Australia Council's Visual Arts and New Media Arts boards and Arts Council England. I am currently in Berlin with my husband, Matt, and my son, Pablo. It's snowing out there, which is a shock after spending a few months in the tropics of Australia. Pablo is now 15 months old, crawling about and, most of all, enjoying pulling power cords out of hard drives and throwing CDs and DVDs about.

I am here working with Jeff Mann, who is attempting to integrate the mind-reading technology developed by the MIT Media Lab into the video engine. It's been slow progress getting it all working together and triggering HD footage. We are showing it next week at the Dana Centre at the Science Museum in London, and so far I can't say I am feeling too confident. Everything feels too delicate. We seem to be coming up with more problems than results. But that is part of doing these projects. I am also doing an initial artist talk with Hugo Critchley and Helen Sloan at Lighthouse next week.

Every decision requires so much testing from aesthetic, engagement, HCI, scientific and technological points of view that it's slow going. Much testing and much collaboration. My main job seems to be asking the right questions of everyone, and trying to talk in a language that everyone understands.

We are heading to Brighton in March to do a residency/commission with Lighthouse. We hope by that stage to have the project working with multiple people interacting at once. In June we are off to Boston to MIT, and then most likely back to Banff to work on the second stage of the visual database.

Anyway, a small catch up.

Update of the Chameleon Project

November 10th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 12:46 pm

The Chameleon Project is a two-year project, with the final prototype to be finalised in December 2009. The progress of the work will be updated on my website: http://tinagonsalves.com/chamselectframe02.htm

Getting ready to leave

July 18th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 8:44 pm

We are now packing up the flat/studio and equipment in London to head back to Port Douglas in Australia. The initial idea was to stop off and present at ISEA 2008 in Singapore, but it all got too expensive.

It has been a huge, huge year, and the most wonderful time for research and life experience. Chameleon has developed into an exciting project, the collaborative partnerships have been consolidated, and I have extended the project by bringing in new partners. It's been an incredibly rewarding experience, one with implications that are hard to put into words. I find you need some time to realise the extent to which these residencies affect your life, your work and your future projects.

Late last year I was awarded the Wellcome Trust Large Arts Award to build Chameleon. This was great news, as it meant that, while undertaking the Synapse Residency, I could not only continue R&D but move into production: hiring programmers and buying equipment. The timing was fantastic.

The Synapse Residency began mid-January. I flew to London with my partner, Matthew Wild, and my son, Pablo Wild, from Cairns in north Australia. Pablo was less than three months old when we left. In order for me to do the residency, Matthew, who is a chef, took a few months off work to help with Pablo. I have travelled with my work for many years, but as a new mother I wasn't sure how it was all going to work: how Pablo would adjust to the travel, and how I would manage to keep working. I knew I was going to enjoy Matt cooking for me, though. I hoped Matt wouldn't get bored having to take so much time off so that I could work.

In London, I continued my role as Honorary Artist in Residence at the Wellcome Institute of Neuroimaging at UCL, working specifically with the social neuroscientist Chris Frith, who is developing the emotional algorithms to be used in the Chameleon Project. I also travelled to Brighton to begin my role as Artist in Residence at the Brighton and Sussex Medical School, in particular working with my long-term collaborator, the emotion neuroscientist Hugo Critchley. The flat in London was only a couple of minutes' walk from the Institute of Neurology. We launched the project at the Dana Centre at the Science Museum, and I negotiated for the Science Museum to become a venue where all the collaborators can come together and discuss the development of the project.

In February I travelled to Helsinki to present current work to Nokia and to try to establish Nokia as a partner in a future project that I am developing with Andrew Brown and Christian Jones in Australia. The project analyses prosody over mobile phone networks to drive new channels of communication. We are hoping to begin it in late 2009.

I travelled to Dublin and Barcelona to meet with gallery spaces.

In February I also travelled to New York City to meet with a few galleries, and then up to Cambridge in the USA to work with the MIT Media Lab on developing the mind-reading technology used in Chameleon. I then flew to the Banff New Media Institute for a month to develop initial imagery for the project. I spent about eight days in the studio working with three HD video cameras, and enticed the other visiting artists to take part in the project. We ended up with some great footage and great performances, though it took a while to understand how to direct everyone to elicit the best performances. Pablo managed the freezing temperatures and his stroller getting stuck in the snow; he continued to sleep well, and we also dragged him into the studio. I negotiated for Banff to become a supporter of the project, and we are planning a final residency there in 2009.

We then went back to Cambridge in the USA to discuss development and tour the Media Lab, as it was sponsor week. We talked about how to exhibit the project for next year's sponsor week. I met a lot of researchers.

We moved back to London to continue working with Frith and Critchley. I spent much time with Chris and the programmer I was working with at the time, Evan Raskob, developing the emotional algorithms of the work. These algorithms became a template for how people socialise together. I met with Hugo mostly to discuss recent experiments and research. I designed a front cover for the journal Neuron, for which Hugo had written up an experiment I had been involved with the year before: I had created a few sets of visceral video databases, which had been used to map disgust.

I flew to Berlin to visit the Berlin Biennale, and then back to Finland to present again to the Nokia Research Labs and negotiate support for my next project. This is now being finalised.

I organised a dinner at my place for all of the researchers on the Chameleon Project: Rosalind Picard, Helen Sloan, Chris Frith and Hugo Critchley. It was great for them all to finally meet each other, and we discussed the current versions of the project. We also organised an exhibition of the work at the ICA in London while all the researchers were in town.

I extended the research group to include Nadia Berthouze, a researcher exploring emotion and human-computer interaction. The project became part of her MSc programme. I wanted the emotional algorithms that Chris Frith hypothesised to be evaluated, and I extended my time in London to do this. We are currently carrying out the experiments at the Computer Science Department at UCL. I also met with Caravansai Studio, an acting group, to explore whether we should use actors to create the content.

We have built six stages of the Chameleon Project since the residency started, which has led to an established project where we can start seeing some great results. It's been an incredibly cross-disciplinary project. Getting the timings right has been essential, and as much as I am creating new work and building creative content, I am also driving the research group: making sure that everyone understands each other, that everyone is aware of what is going on, and that the timings work for everyone. The mind-reading tech, the emotional algorithms, the video engine, the visual content, the evaluation, and the space to exhibit them in. It's been a lot of work, but I have learnt a lot.

I was awarded Arts Council England funding from July to October.

While working on Chameleon, I became inspired to further conceptualise another project, which could begin in late 2009 and which looks at chat engines and texting. I took the opportunity of being in London to consolidate another research group: Jonathan Ginzburg (computational linguist, King's College), Pat Healey (computer scientist, Queen Mary), Chris Frith, Nadia, Hugo and Helen Sloan. The project is called DIVULGE, a range of experimental mobile and internet-based tools for the investigation of human interaction. The work will look to neuroscientific paradigms to experimentally manipulate mediated communication in real time. We haven't found funding for it yet, but will look at it again in October. Everyone seems quite excited about its potential. It would take a few years to develop, but one of the hardest stages is getting the right people together. That takes years: I first started thinking about the idea at the IAMAS residency in Japan in 2004 and began meetings for it in 2005. I am glad that this project may finally be heading somewhere.

I flew to Vienna in early July to present the Chameleon Project at Roy Ascott's Consciousness Reframed conference. The paper is being published. I met a lot of researchers. I then flew to Dublin to meet further with gallery spaces. I am interested in the Science Gallery, a new space that has opened at Trinity. They seem to be backing the project.

So now we are back in London, packing boxes and getting the family ready to leave. Pablo is a joy, now nine months old and babbling incessantly, with a lot of 'ma ma ma' and 'da da da' in there. He has been a dream, and the exposure to all these different parts of the world has created a really content personality. He smiles a lot, rarely cries and loves people. I think it's been great for him. He is now on his 35th flight, and over the last six months we have visited about 20 countries. Matt, my partner, took the opportunity of being in London to do 'chef in residence' stints at some of London's best restaurants, touring the Michelin-starred venues and getting insight into how the kitchens run.

Last week, to top it off, the Inter-Arts Board awarded me a self-initiated residency award, for which this ANAT Synapse residency was a template.

On Tuesday, when we step on the flight back to Australia, I will take a deep breath and, hopefully, a long sleep (if Pablo sleeps). It's been a really busy and exciting time. The ANAT Synapse experience is one that takes years to verbalise: the extent to which these opportunities become part of who you are. Having the backing of ANAT has paved the way for a lot of other funders, institutions and residencies to come on board and support the work.

Collaborating with Pixy?

July 18th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 7:21 pm

When I was in Vienna a few weeks ago, presenting at Roy Ascott's Consciousness Reframed conference, I had an interesting meeting with Natacha Roussel. I met her and Michel during the residency at the Banff New Media Institute in March, where they were working on the Pixy project. I was in Banff as part of the Synapse Residency.

Michel says: Pixy came out of the need to move beyond existing video displays. It displays a low-resolution image that one can manipulate and physically distort; each pixel of the image is an autonomous physical element made of electroluminescent paper, and it can be moved. It can be placed on a volume and become an object. Pixy goes beyond the video image itself by turning each pixel into an object and the video into a volume; the low-resolution image becomes an immersive experience.

When they showed their work in Banff, I immediately thought about how I might work with these artists, as the space would work well with the screen they have created. They explore new ways for screens to work with architecture, and I talked to them about developing a screen that is multi-dimensional: you can look at it from four sides, and also walk through it.

There is some documentation here, but the project has gone way beyond what you see. It breaks the image down into pixels (they can now do 1200×1600 pixels using small electroluminescent squares; each square corresponds to a video pixel, thereby defining the image's resolution and the aesthetic of the whole. It reproduces a low-resolution animated image in monochromatic tones).

Interestingly, I don't think the low resolution would affect the reading of the piece. Neuroscientific research has tested how much information you can strip from an image of a face while its emotional tone remains readable.

I proposed that they create a screen that hangs in the middle of the room. You can see four different faces: one each to the north, south, east and west. The face responds to your emotional expression and talks to you. It would be a great collaboration, and interesting for the space. It would probably require writing a grant to allow the artists to develop their screen a bit further and to install it.

It would be great for the work, as you can walk through the screen, into the ‘mind’ of the work, if you like. We are working out how to explore/start this collaboration.

Meetings with galleries

July 18th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 7:04 pm

Helen Sloan and I have been meeting with a few galleries that are interested in the project.

Helen is now working on an exhibition strategy.

I flew to Dublin a couple of weeks ago to meet with a project space and the new Science Gallery. I think the Science Gallery, based at Trinity College, would be great for the piece. They mentioned my coming over for an initial talk in December.

I met with the Lighthouse gallery in Brighton, which is interested in supporting one of the process stages. I am looking at March 2009.

I am also in contact with the Natural History Museum in London, which is interested in making the work part of an exhibition about Darwin's theories of emotion.

I also met with the Millais Gallery, which is interested in the final work.

Sketch in London is looking at how to schedule it.

The ICA in London has scheduled an exhibition in its digital studio in December.

The Science Museum (Dana Centre) is looking to schedule another event in November/December.

It would be great to get the project to Australia, but so far no Australian funding has been secured (except residency grants).

Working with Darren Tofts

July 18th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 7:00 pm

Darren Tofts has been writing about the Chameleon Project. We are building a selection of writings for a catalogue to be released with the final exhibition.

Developing the mind-reading technology

July 18th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 6:56 pm

With Youseff Kashid and Rana El Kaliouby, based at the MIT Media Lab, we are developing the mind-reading technology for the Chameleon Project. The mind-reading technology will analyse the facial emotional expressions of the audience, using them to drive the 'emotional video engine' I am creating with the neuroscientist Chris Frith.

We are throwing many questions back and forth to understand the best way we should move ahead with the development. Generally, these emails are cc'd to get feedback from the collaborative group.

1. We are working out how much processing power the application would take, as I am at the stage where we need to start buying computers to run the project. I want the mind-reading tech and the video engine to work on the same computer, and I want the video engine to trigger HD footage (three channels).

2. We need to work out pacing: with what 'timing' should the camera let the computer know about the expression? Chris Frith says that the most 'potent' expression will happen about two seconds after being shown a video. Chris says: "If we believe in Ekman's micro-expressions, which may affect us even if we are not aware of them, then we need to sample fairly frequently, e.g. 15 times a second. However, if we want a more stable expression for the clips then we might want to get rid of these very short changes and sample less frequently."
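To make these two options concrete, here is a minimal sketch (Python, purely illustrative; the function names and the two-second averaging window are my assumptions based on Chris's note, not the actual MIT code) of sampling the recogniser frequently and then smoothing the per-frame readings into a more stable expression:

    from collections import deque, defaultdict

    EMOTIONS = ["neutral", "sad", "angry", "disgust", "happy", "surprised"]

    SAMPLE_RATE_HZ = 15        # sample fairly frequently, as Chris suggests
    WINDOW_SECONDS = 2.0       # the most 'potent' expression arrives about two seconds in
    WINDOW_FRAMES = int(SAMPLE_RATE_HZ * WINDOW_SECONDS)

    # Hypothetical input: one dict of per-emotion probabilities per sampled frame.
    recent_frames = deque(maxlen=WINDOW_FRAMES)

    def add_frame(probabilities):
        """Store one frame of per-emotion probabilities, e.g. {'happy': 0.7, ...}."""
        recent_frames.append(probabilities)

    def stable_expression():
        """Average the last two seconds of samples to suppress very short micro-expressions."""
        if not recent_frames:
            return None
        totals = defaultdict(float)
        for frame in recent_frames:
            for emotion, p in frame.items():
                totals[emotion] += p
        return max(EMOTIONS, key=lambda e: totals[e] / len(recent_frames))

Sampling at 15 Hz and averaging over a window like this keeps the micro-expression detail available while giving the video engine a steadier signal to cut clips against.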

3. The mind-reading technology runs on a PC, so how will this work? We would have to use Boot Camp to run it. How stable is this?

4. We need to work with the following basic emotions: neutral, sad, angry, disgust, happy, surprised. We are using the videos shot in Banff to train the program to recognise these emotions.

5. For the interaction design I need to know at what distances faces can be read. How far can you be from the camera? How much can you look to the side and still have the expression read? What happens when it doesn't recognise an emotion? How long does it take to latch on to a face and process the information? Youseff says: "I'm using a simple webcam. I think it can do a lot better under better lighting conditions. I did a measurement and detected my face from a two-metre distance. I remember it being a bit better, though. But there's another thing: if I start close up and move backwards, it will be able to follow me, even if I go beyond three metres. It's how the tracking algorithm is built. The tracker first tries to find your face in the area where it last saw you; if it can't find a face, it does a sweep of its whole view from scratch. This makes the tracking process faster.

As for the turn angle, it can follow you for about 60 degrees.

The range of the turn angle and distance gets better with a better camera, better lighting, a plain-coloured background and not-too-fast motion (but that last one is not a real issue: if it loses the person, it will do a sweep to find them again)."
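The search-locally-then-sweep behaviour Youseff describes could be sketched roughly as below. This is an illustrative Python/OpenCV stand-in, not the MIT tracker; the stock Haar cascade and the margin value are my assumptions.

    import cv2

    # A stock OpenCV face detector stands in for the real tracker.
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    last_box = None  # (x, y, w, h) of the face found in the previous frame, if any

    def track_face(frame, margin=60):
        """Look near the last known position first; fall back to a sweep of the whole view."""
        global last_box
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        if last_box is not None:
            x, y, w, h = last_box
            x0, y0 = max(0, x - margin), max(0, y - margin)
            roi = gray[y0:y + h + margin, x0:x + w + margin]
            faces = face_detector.detectMultiScale(roi, 1.1, 4)
            if len(faces) > 0:
                fx, fy, fw, fh = faces[0]
                last_box = (x0 + fx, y0 + fy, fw, fh)  # found it near the old spot: cheap
                return last_box

        # No face near the old position (or no old position yet): sweep from scratch.
        faces = face_detector.detectMultiScale(gray, 1.1, 4)
        last_box = tuple(faces[0]) if len(faces) > 0 else None
        return last_box

Starting the search from the previous position keeps the per-frame cost low, which matters once the tracker has to share a machine with the video engine.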

6. How dark can it be for the camera to pick up the expressions? Youseff says: "I was first thinking of a spotlight in the area of the face, but an infrared LED might be better; it will let the camera see more without disturbing the lighting of the room. Night vision might pick up more colour patterns than we want. I don't think it will matter much what the person is wearing, unless they're wearing something with a picture of a face on it, in which case the tracker might start picking that up instead of the person's face."

7. Do you imagine that it will give us the subtleties of the emotions? For example, if we rate disgust from 1 to 5 (5 being the strongest), do you think the application can tell us whether it is a strong disgust (5) or a weak disgust (1), so that we can choose a stronger or weaker emotional video response? I have tended to create five video tracks for each emotion, ranging from weak to strong. Youseff says: "Yes, that's totally possible and easy to implement. MindReader gives you a probability for each gesture, ranging from 0 to 1 inclusive. We could quantise intensity stages."
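A quick sketch of the quantisation Youseff describes, assuming the recogniser returns a 0–1 probability per emotion and that there are five video tracks per emotion, from weak (1) to strong (5). The filename scheme is made up for illustration.

    def intensity_level(probability, levels=5):
        """Map a 0-1 probability onto discrete intensity levels 1..levels."""
        probability = min(max(probability, 0.0), 1.0)  # clamp to the valid range
        return min(levels, int(probability * levels) + 1)

    def choose_video(emotion, probability):
        """Pick one of the five tracks for this emotion, weak (1) through strong (5)."""
        return f"{emotion}_{intensity_level(probability)}.mov"

    # A strong disgust reading routes to the strongest disgust track:
    print(choose_video("disgust", 0.93))   # -> disgust_5.mov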

People can quite easily 'read' the average expression in a group of people

July 18th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 6:41 pm
There is a recent publication showing that people can quite easily ‘read’ the average expression in a group of people.
Jason Haberman and David Whitney, "Rapid extraction of mean emotion and gender from sets of faces", Current Biology, Vol. 17, R751–R753, 4 September 2007 (The Center for Mind and Brain and the Department of Psychology, University of California, Davis, CA 95618, USA).

Summary: We frequently encounter crowds of faces. Here we report that, when presented with a group of faces, observers quickly and automatically extract information about the mean emotion in the group. This occurs even when observers cannot report anything about the individual identities that comprise the group. The results reveal an efficient and powerful mechanism that allows the visual system to extract summary statistics from a broad range of visual stimuli, including faces.
Chris

Evaluating the emotional algorithms

July 18th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 6:38 pm

We are working with Nadia Berthouze and Matt Iacibini from University College London's Human-Computer Interaction Centre to evaluate the emotional algorithms hypothesised by Chris Frith. With Chris, I am trying to understand how people socialise with each other. If I express one emotion, what would the most likely expression in response be? What is the state in which people empathise?

Chris Frith discusses the emotional algorithms : 

"We know that people tend to covertly (and unconsciously) mirror the actions of others, and this also applies to facial expressions. Observing a happy expression elicits happiness, fear elicits fear and disgust elicits disgust. (I can give references for all these claims if you need them.)
However, this mirroring is not simply a copying of the motor behaviour we observe. For example, we mirror the eye gaze of others, but what we copy is not the action but the goal: we look at the same place as the person we are observing, because we want to know what they are looking at. This will usually involve a very different eye movement, since we will have a different line of sight. We therefore need to consider the function of the behaviour for the observer. Seeing a fearful face is a sign that there is something to be afraid of, so our own fearful response is appropriate. Unless, of course, the person is afraid of us, in which case a different response would be appropriate. An example of the function of these exchanges of expression is the use of embarrassment to defuse anger.
A person commits a social faux pas. This elicits an expression of surprise and then anger. The person then displays embarrassment. This elicits compassion (for the distress of the person) in the observer. This expression of compassion indicates that the person is forgiven and everyone is happy again.
I used these ideas to make a best guess about the parameters for the emotional algorithms."
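As a way of pinning this down for the programming, the kind of best-guess mapping Chris describes could be written out as a simple lookup, something like the sketch below. The pairings follow his notes above; the entries marked as hypothetical, and the choice of a flat lookup rather than anything probabilistic, are my assumptions, not the finished algorithm.

    # Best-guess facial response to an observed expression, after Chris Frith's notes:
    # fear, disgust and happiness tend to be mirrored, while anger is better met with
    # embarrassment (to defuse it) and embarrassment with compassion (forgiveness).
    LIKELY_RESPONSE = {
        "happy":       "happy",        # happiness elicits happiness
        "fear":        "fear",         # a fearful face signals something to be afraid of
        "disgust":     "disgust",      # disgust is mirrored
        "angry":       "embarrassed",  # embarrassment can defuse anger
        "embarrassed": "compassion",   # compassion signals that the person is forgiven
        "sad":         "compassion",   # hypothetical entry
        "surprised":   "surprised",    # hypothetical entry
    }

    def likely_response(observed_emotion, default="neutral"):
        """Return the most likely facial response to an observed expression."""
        return LIKELY_RESPONSE.get(observed_emotion, default)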

Conversation about the emotional algorithms between Matt, Nadia and myself

…There is a lot of work on the so-called mirror system, where people can show that if you see a fearful face, your reaction is fear, and if you see a disgusted face, your reaction is disgust, and that, I think, is about as far as it goes. The extreme mirror people say you just imitate what you see… but the more sensible people say it's all a matter of evolutionary advantage. So, if you see a fearful face you should probably be afraid, so there is an advantage. But if you see an angry face, it depends on where the anger is directed. And there is certainly the idea that you should look embarrassed to try and defuse the anger.

Evaluation meeting with Chris Frith, Matt Iacibini, Nadia Berthouze and Tina Gonsalves

We have begun designing an evaluation experiment, which Matt will be running, to see whether the algorithms that Chris hypothesises hold true. This will become Matt's master's thesis.

First series of experiments – Wizard of Oz

Protocol

1. Participant walks into room

2. Participant is briefed and consent form is signed

3. Participant is seated in front of webcam

4. The rater takes position in front of a keyboard, somewhere out of sight from the participant

5. Video of a person expressing emotion is started. This beginning clip is picked at random. 

6. The computer then selects another video of one actor (ranging from 2 to 30 seconds) acting out one of the emotional states: neutral, sad, angry, disgust, happy, surprised. There is a selection of five videos for each emotion (30 videos in total). Different videos are shown according to the emotional algorithm developed by Chris Frith and the input from the rater, as described in the "Algorithm as it is now" section below.

7. It is expected that the participant will respond to the videos with an emotional facial expression.

8. The rater will see the participant's facial emotional response through the live webcam feed, but will not see the video being shown to the participant.

Chris says: I think the rater needs to be given a signal each time a new video clip begins, so that she can give a rating of the participant's response to each clip. It will be useful for her to know when a change in expression is expected, even though she doesn't know what is eliciting it. The rater will also need an idea of roughly how long to wait after each signal before making her rating; this depends on how quickly participants typically change their expression.

9. The rater will immediately press an 'emotion' key on the keyboard (neutral, sad, angry, disgust, happy, surprised) corresponding to the facial expression the participant is showing.

10. After 10 minutes the experiment is stopped.

11. Five of the participants will be given a 15-minute interview.

12. All participants are asked the multiple-choice questions (5 minutes).

Algorithm as it is now:

Chris's algorithm is called to decide which emotion to play to the participant next. If the participant doesn't react and all the expressions of one emotion have been played…

Recorded data

Video of the participant, the video shown to the participant (including indices of the exact emotion-expression frames), the emotions selected by the rater and the reactions of the algorithm, all in sync.
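A rough sketch of the Wizard-of-Oz loop as described above (illustrative Python; the four callables standing in for the video playback, the rater's signal and keyboard, and Chris's selection rule are assumptions, not the real experiment code):

    import random
    import time

    EMOTIONS = ["neutral", "sad", "angry", "disgust", "happy", "surprised"]
    CLIPS_PER_EMOTION = 5
    SESSION_SECONDS = 10 * 60   # the experiment is interrupted after 10 minutes

    log = []   # everything is recorded in sync for later analysis

    def run_session(play_clip, signal_rater, read_rater_key, choose_next):
        """Play a clip, signal the rater, record her rating, then let the
        emotional algorithm pick the next clip to show the participant."""
        start = time.time()
        emotion = random.choice(EMOTIONS)            # the opening clip is picked at random
        clip_index = random.randrange(CLIPS_PER_EMOTION)

        while time.time() - start < SESSION_SECONDS:
            play_clip(emotion, clip_index)           # seen by the participant only
            signal_rater()                           # tell the rater a new clip has begun
            rated = read_rater_key()                 # rater's key for the participant's expression
            log.append({"time": time.time() - start,
                        "shown": (emotion, clip_index),
                        "rated": rated})
            # Chris's algorithm decides what to play next, given the rater's input.
            emotion, clip_index = choose_next(shown=emotion, response=rated)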

Working with Solent University to explore rapid prototyping opportunities

July 18th, 2008

Filed under: CHAMELEON PROJECT — Tina @ 6:23 pm

I met with Alan Scrase and Eric Miller at Solent University in Southampton. They have a fantastic rapid prototyping lab.

Part of this project is exploring a more sculptural installation: back projections onto anthropomorphised shapes. Alan talked about a sort of smart material that could be interesting. We discussed different techniques for creating moulds and then templates from them. It was amazing how large they could be. I need to start sketching out a few ideas.

Sketch of back-projected, sphere-like objects. People could walk through the projections, creating shadows on the discs.

Shape of the discs: if we used 'human-like shapes' we would have to track the video to them, etc., and I think it's better not to. How large can the discs be? Can people touch them? What sort of texture should they have? How do we find a surface that doesn't show the light of the back projection?

1. How large can the surface be?

2. How heavy would this be?

3. What sort of surfaces would you recommend?

4. Could the surfaces be textural, the sort of surface that makes one want to touch it? Something that looks much like skin? Something with a little elasticity?

5. How much would this cost?

6. How much time would it take?

7. How would you prepare the files?

8. When would be a good time to test?
