
Tuesday, March 27, 2018

Deep learning: Why it’s time for AI to get philosophical | Opinion - The Globe and Mail

Photo: Catherine Stinson
For years, science fiction writers have spelled out the technological marvels and doomsday scenarios that might result from artificial intelligence. Now that it’s a part of our lives, argues Catherine Stinson, postdoctoral scholar at the Rotman Institute of Philosophy at the University of Western Ontario and former machine-learning researcher, it’s time for AI to get philosophical.
Photo: Raymond Biesinger

Those working in AI need to take their work’s social and ethical implications much more seriously.

I wrote my first lines of code in 1992, in a high school computer science class. When the words “Hello world” appeared in acid green on the tiny screen of a boxy Macintosh computer, I was hooked. I remember thinking with exhilaration, “This thing will do exactly what I tell it to do!” and, only half-ironically, “Finally, someone understands me!” For a kid in the throes of puberty, used to being told what to do by adults of dubious authority, it was freeing to interact with something that hung on my every word – and let me be completely in charge.

For a lot of coders, the feeling of empowerment you get from knowing exactly how a thing works – and having complete control over it – is what attracts them to the job. Artificial intelligence (AI) is producing some pretty nifty gadgets, from self-driving cars (in space!) to automated medical diagnoses. The product I’m most looking forward to is real-time translation of spoken language, so I’ll never again make gaffes such as telling a child I’ve just met that I’m their parent or announcing to a room full of people that I’m going to change my clothes in December.

But it’s starting to feel as though we’re losing control.

These days, most of my interactions with AI consist of shouting, “No, Siri! I said Paris, not bratwurst!” And when my computer does completely understand me, it no longer feels empowering. The targeted ads about early menopause and career counselling hit just a little too close to home, and my Fitbit seems like a creepy Santa Claus who knows when I am sleeping, knows when I’m awake and knows if I’ve been bad or good at sticking to my exercise regimen.

Algorithms tracking our every step and keystroke expose us to dangers much more serious than impulsively buying wrinkle cream. Increasingly polarized and radicalized political movements, leaked health data and the manipulation of elections using harvested Facebook profiles are among the documented outcomes of the mass deployment of AI. Something as seemingly innocent as sharing your jogging routes online can reveal military secrets. These cases are just the tip of the iceberg. Even our beloved Canadian Tire money is being repurposed as a surveillance tool for a machine-learning team.

For years, science-fiction writers have spelled out both the technological marvels and the doomsday scenarios that might result from intelligent technology that understands us perfectly and does exactly what we tell it to do. But only recently has the inevitability of tricorders, robocops and constant surveillance become obvious to the non-fan general public... 

...The current generation of AI researchers (with a few exceptions) do not have the training necessary to deal with the implications of the AI they are building. So far, the experts who do have that training are not being hired to help. That needs to change – or the darkest of science fiction will become reality. 
Read more...

Source: The Globe and Mail