Known as M, the service was designed to rival Google Now and Apple’s Siri. A personal assistant that would answer questions in a natural way, make restaurant reservations and help with Uber bookings, M was meant to be a step forward in natural language understanding, the virtual assistant that – unlike Siri – wasn’t a dismal experience.
Fast forward a couple of years, and the general purpose personal assistant has been demoted within Facebook’s product offering. Poor M. The hope was that it would tell users jokes and act as a guide, life coach and optimisation tool.
The disappointment around M largely derives from the fact that it attempted a new approach: instead of depending solely on AI, the service introduced a human layer – the AI was supervised by human beings. If the machine received a question or task it was incapable of handling, a human would step in, and the human's response would act to further train the algorithm.
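Facebook never published M's internals, but the loop described above – answer automatically when confident, hand off to a human otherwise, and keep the human's reply as training data – is a standard human-in-the-loop pattern. Below is a minimal sketch of it in Python; the confidence threshold, function names and stand-in stubs are all illustrative assumptions, not M's actual design.

```python
# A hypothetical sketch of the human-in-the-loop pattern described above -
# illustrative only, not Facebook's actual implementation of M.

CONFIDENCE_THRESHOLD = 0.9   # assumed cut-off for "the AI is sure enough"

training_log = []            # escalated exchanges, kept for retraining

def model_predict(message):
    """Stand-in for the assistant's model: returns (answer, confidence)."""
    if "reservation" in message.lower():
        return "Booked a table for two at 8pm.", 0.95
    return "I'm not sure.", 0.30              # low confidence: needs a human

def ask_human(message):
    """Stand-in for the human-trainer queue that supervised the AI."""
    return input(f"[trainer] user asked {message!r} - your reply: ")

def handle_request(message):
    answer, confidence = model_predict(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer                             # the AI handles it alone
    human_answer = ask_human(message)             # a human steps in
    training_log.append((message, human_answer))  # the reply becomes a label
    return human_answer                           # the user is still served

print(handle_request("Can you make a dinner reservation for Friday?"))
```

The appeal of the design is that every escalation doubles as a labelled training example; the catch, as M's demotion suggests, is that the humans never quite get to step away.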
This, of course, is where we are with virtual assistants: on the cusp of what may well be a transformational technology – in the form of deep learning – we’re at the peak of inflated expectations (right next to the connected home which, despite the best efforts of the electronics industry at this year’s CES, appears not to be fulfilling a single human need – I’m looking at you, Cloi). To misquote Peter Thiel, we were promised general purpose artificial intelligence, and we got a home assistant that looks at the contents of our fridge and tells us to make a sandwich. What a time to be alive.
The aspirational narrative is that AI will be everywhere and in every object, as ubiquitous as oxygen. It can help us read X-rays more accurately, pursue science more effectively, empower us to understand foreign languages without studying and ensure that autonomous vehicles behave the way we would like them to. There will be breakthroughs in agriculture, medicine and science. Governments will discover ways to combat inequality and crime.
But we’re not there yet. And maybe we never will be, says Gary Marcus, the former head of AI at Uber and a professor at New York University. Marcus – who participated in a robust exchange of views with DeepMind’s Demis Hassabis at the Neural Information Processing Systems Conference in December last year – is known for tempering the excitement within the tech community regarding the progress of research into machine learning.
In a paper, Deep Learning: A Critical Appraisal, published earlier this month, he outlines “concerns” that must be addressed if the most widely known technique in artificial intelligence research is to lead to general purpose AI. Marcus writes that the field may be subject to “irrational exuberance” and suggests what might be done to move the field forward.
Marcus argues that when data sets are large enough and labelled, and computing power is effectively unlimited, deep learning is a powerful tool. However, “systems that rely on deep learning frequently have to generalise beyond the specific data that they have seen, whether to a new pronunciation of a word or to an image that differs from one that the system has seen before, and where data are less than infinite, the ability of formal proofs to guarantee high-quality performance is more limited.”
The paper outlines ten areas in which Marcus argues deep learning has limitations; for instance, its need for vast labelled data sets. In applications such as image recognition, a more restricted volume of data can mean that deep learning struggles to generalise to novel perspectives (a round sticker on a parking sign could be identified as, say, a ball).
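This failure mode is straightforward to probe. The sketch below is a hypothetical experiment, assuming PyTorch and torchvision are installed and a photo is saved as "sign.jpg": it runs a standard pretrained ImageNet classifier over rotated copies of the same image. Because such models are trained on canonical viewpoints, the predicted label can flip – sometimes confidently – once the input drifts away from anything the network has seen.

```python
# Probe how a pretrained classifier copes with novel perspectives by
# rotating one image and watching the label drift. Assumes PyTorch and
# torchvision are installed and "sign.jpg" exists - a hypothetical test.
import torch
from torchvision import models, transforms
from PIL import Image

weights = models.ResNet18_Weights.IMAGENET1K_V1
model = models.resnet18(weights=weights).eval()
labels = weights.meta["categories"]          # the 1,000 ImageNet class names

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("sign.jpg").convert("RGB")
for angle in (0, 45, 90, 135):               # increasingly novel viewpoints
    x = preprocess(image.rotate(angle)).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    confidence, idx = probs.max(dim=0)
    print(f"rotated {angle:3d} deg: {labels[idx]} ({confidence:.0%})")
```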