
Friday, February 28, 2020

A New Study Finds People Prefer Robots That Explain Themselves | Technology - Smithsonian.com

This article was originally published on The Conversation. Read the original article.

Engineers at UCLA explain how AI systems should be designed to both perform a task and win the trust of humans

UCLA researchers test a robot after it has learned how to open a medicine bottle from observing human demonstrators.
Photo: UCLA Samueli School of Engineering, CC BY-ND
Artificial intelligence is entering our lives in many ways – on our smartphones, in our homes, in our cars. These systems can help people make appointments, drive cars and even diagnose illnesses. But as AI systems take on important and collaborative roles in people’s lives, a natural question arises: Can I trust them? How do I know they will do what I expect?

Explainable AI (XAI) is a branch of AI research that examines how artificial agents can be made more transparent and trustworthy to their human users. Trustworthiness is essential if robots and people are to work together. XAI seeks to develop AI systems that humans find trustworthy – while still performing their designed tasks well.

At the Center for Vision, Cognition, Learning, and Autonomy at UCLA, we and our colleagues are interested in what factors make machines more trustworthy, and how well different learning algorithms enable trust. Our lab uses a type of knowledge representation – a model of the world that an AI system uses to interpret its surroundings and make decisions – that humans can understand more easily. This naturally aids explanation and transparency, thereby improving human users’ trust.
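To make the idea of a human-readable world model concrete, here is a minimal sketch in Python. It assumes a simple symbolic action tree for the medicine-bottle task pictured above; the node names and structure are hypothetical illustrations, not the lab's actual representation.

```python
# Illustrative only: a toy symbolic action tree that a robot might learn
# for opening a child-safe medicine bottle. Hypothetical, not the study's model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ActionNode:
    """A named action with optional sub-actions, readable by humans."""
    name: str
    children: List["ActionNode"] = field(default_factory=list)

    def explain(self, depth: int = 0) -> str:
        """Render the plan as an indented, human-readable outline."""
        lines = ["  " * depth + f"- {self.name}"]
        for child in self.children:
            lines.append(child.explain(depth + 1))
        return "\n".join(lines)


open_bottle = ActionNode("open bottle", [
    ActionNode("grasp lid"),
    ActionNode("push down on lid"),
    ActionNode("twist lid counter-clockwise"),
    ActionNode("lift lid off"),
])

print(open_bottle.explain())
```

Because the plan is stored as named steps rather than opaque network weights, the same structure that drives the robot's behavior can be printed back to a person as an explanation.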

In our latest research, we experimented with different ways a robot could explain its actions to a human observer...

Designing for both performance and trust
The most interesting outcome of this research is that what makes robots perform well is not the same as what makes people see them as trustworthy. The robot needed both the symbolic component (a step-by-step account of its planned actions) and the haptic component (feedback from the forces it sensed) to do the best job. But it was the symbolic explanation that made people trust the robot most.

This divergence highlights an important goal for future AI and robotics research: to pursue both task performance and explainability. Focusing on task performance alone may not lead to a robot that explains itself well. Our lab uses a hybrid model to provide both high performance and trustworthy explanations.
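As a rough illustration of that hybrid idea, the sketch below pairs each symbolic plan step with a haptic (grip-force) check and logs a human-readable account of what was done and why. The step names, force thresholds, and the read_force sensor stub are all invented for this example; they are not the study's actual model.

```python
import random

# Hypothetical plan: (step name shown to humans, minimum grip force in newtons).
PLAN = [
    ("grasp lid", 2.0),
    ("push down on lid", 5.0),
    ("twist lid counter-clockwise", 3.0),
]


def execute_and_explain(read_force):
    """Run each symbolic step, gate it on a haptic reading, and collect a
    human-readable explanation for every decision."""
    log = []
    for step, min_force in PLAN:
        force = read_force()  # haptic feedback from the gripper
        if force < min_force:
            log.append(f"Stopped at '{step}': grip force {force:.1f} N was "
                       f"below the {min_force:.1f} N needed.")
            break
        log.append(f"Did '{step}' with grip force {force:.1f} N "
                   f"(needed {min_force:.1f} N).")
    return log


# A simulated sensor stands in for real hardware.
for line in execute_and_explain(lambda: random.uniform(1.5, 6.0)):
    print(line)
```

The haptic reading drives the control decision, but only the symbolic step names appear in the log – matching the finding that symbolic explanations are what earn human trust.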
Read more...

Source: Smithsonian.com