Saturday, August 01, 2015

Uncanny Valley: ‘Emotion’ in robots

Follow on Twitter as @uncannyvalley
IDG Connect catches up with Stephanie Lay, a researcher at the Open University.

Stephanie Lay began her study of the uncanny valley – “the sense of eeriness and unease that accompanies the sight of something that is almost but not quite human” – in 2006. As the use of robots becomes increasingly mainstream, her work grows ever more relevant to our daily lives.

When do you think robots will be able to realistically convey emotion so they cease to be creepy?
I think we’re getting quite close to this now. If you compare the androids that had just been created back when I was starting my research into the uncanny valley [in 2006] with the ones that are available now, the differences in the complexity of emotions they can convey are quite striking.

For a good example, consider Repliee-Q1 from 2005:

[Video: Repliee Q1 expo Demo 1]

Compared to Aiko Chihira, the robot receptionist, who was unveiled earlier this year:

[Video: Meet Aiko: Toshiba's new Android receptionist | Engadget]

My opinion is that both are good examples of uncanny agents because they remain in that not-quite-human zone. Repliee-Q1’s expressions strike me as eerie because her limited ability to blink, smile and gesture makes them look forced and stilted. Aiko’s overall appearance is much more polished and beautiful, with more humanlike skin and hair, but I feel that throws the small timing discrepancies between her mouth and eye movements into even starker contrast: it sets up the expectation that she is real, yet a viewer quickly realises she is not.

Based on that rate of advancement over the last decade, I’d hope to see truly realistic emotion by 2025.

What will this take to achieve?
It would certainly require a continuation of those technical advances in making facial features appear more humanlike: creating eyes that look realistically shiny rather than flat or doll-like, soft skin that can flex and move like human skin, and maybe even the introduction of artificial flaws (spots, asymmetry or wrinkles) so the robots don’t become eerie for looking impossibly perfect.

However, my research has shown that physical appearance is only part of the story: genuinely realistic expressions would also need the software controlling them to be carefully calibrated so that there is no mismatch between the expressions shown in the upper and lower parts of the face. For example, a change in expression where a smile takes fractionally too long to reach the eyes could produce the same type of unsettling effect that I found with images like the one below, where a happy smile is paired with frightened eyes:

My research used images of people rather than exploring responses to actual robots, so I’m thinking of taking the findings further out into the world by using a modern robot like Aiko that has been set up to display those types of mismatched expressions. I’d want to see how comfortable people feel approaching her, and it would also be fascinating to explore whether other aspects of the social encounter could mediate the eeriness of the mismatched appearance.
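As a rough sketch of what that kind of timing calibration might look like in software (an illustrative assumption on my part, not taken from Lay's research or any real robot's control system), the Python snippet below nudges the onset times of different facial regions so that, say, a smile never reaches the eyes more than a small perceptual window after it reaches the mouth:

```python
# Hypothetical sketch: keep the onsets of facial-region movements within a
# small window of each other, so a smile does not lag in one region.
# All names and thresholds are illustrative assumptions, not a real robot API.

from dataclasses import dataclass

# Assumed maximum onset gap (seconds) between regions before the mismatch
# risks looking eerie; the value is a placeholder, not a measured threshold.
MAX_ONSET_GAP_S = 0.05


@dataclass
class ChannelCommand:
    """A scheduled actuation for one facial region (e.g. 'mouth', 'eyes')."""
    region: str
    onset_s: float      # movement start, relative to the expression cue
    duration_s: float   # how long the movement takes to complete


def align_expression(channels: list[ChannelCommand]) -> list[ChannelCommand]:
    """Shift onsets so no region lags the earliest one by more than
    MAX_ONSET_GAP_S, keeping upper- and lower-face changes in step."""
    earliest = min(c.onset_s for c in channels)
    aligned = []
    for c in channels:
        gap = c.onset_s - earliest
        onset = c.onset_s if gap <= MAX_ONSET_GAP_S else earliest + MAX_ONSET_GAP_S
        aligned.append(ChannelCommand(c.region, onset, c.duration_s))
    return aligned


if __name__ == "__main__":
    # A smile whose eye movement starts 0.3 s after the mouth movement --
    # the kind of mismatch the interview associates with eeriness.
    smile = [
        ChannelCommand("mouth", onset_s=0.00, duration_s=0.40),
        ChannelCommand("eyes",  onset_s=0.30, duration_s=0.40),
    ]
    for cmd in align_expression(smile):
        print(f"{cmd.region}: start at {cmd.onset_s:.2f}s, run {cmd.duration_s:.2f}s")
```

Run as-is, the example pulls the eye-channel onset back to within 0.05 s of the mouth, illustrating the general idea of synchronising facial regions rather than any specific robot's controller.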
Read more...

Source: IDG Connect, Hidden Below Channel and Engadget Channel (YouTube)