
Thursday, July 04, 2019

Building trust in artificial intelligence | Machine Learning & AI - Phys.Org

From telecommunications to road traffic, from healthcare to the workplace, digital technology is now an intrinsic part of almost every area of life, reports the University of Bonn.

Photo: CC0 Public Domain
Yet how can we ensure that developments in this field, especially those that rely on artificial intelligence (AI), meet all our ethical, legal and technological concerns? In a project led by the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, and with the participation of Germany's Federal Office for Information Security (BSI), an interdisciplinary team of scientists from the Universities of Bonn and Cologne is drawing up an inspection catalog for the certification of AI applications. The team has now published a white paper presenting the philosophical, ethical, legal and technological issues involved.

Artificial intelligence is changing our society, our economy and our everyday lives in fundamental ways. In doing so, it is creating exciting opportunities in how we live and work together. For example, it already helps doctors evaluate x-rays, which often leads to more accurate diagnoses. It is the basis of chatbots that provide helpful answers to people looking for advice on, for example, insurance. And, before long, it will enable cars to become more and more autonomous. Current forecasts indicate that the number of AI applications is set to increase exponentially over the coming years. McKinsey, for example, projects additional global growth from AI of up to 13 trillion U.S. dollars by 2030.

At the same time, it is clear that we need to ensure that our use of AI and the opportunities it brings remains in harmony with the views and values of our society...

The certification process will revolve around questions such as: Does the AI application respect the laws and values of society? Does the user retain full and effective autonomy over the application? Does the application treat all participants fairly? Does the application function and make decisions in a way that is transparent and comprehensible? Is the application reliable and robust? Is it secure against attacks, accidents and errors? Does the application protect the private sphere and other sensitive information?

Additional resources
White paper: www.iais.fraunhofer.de/ki-zertifizierung

Source: Phys.Org