
Saturday, December 16, 2017

What AI can really do for your business (and what it can’t) | InfoWorld

"Artificial intelligence, machine learning, and deep learning are no silver bullets. A CIO explains what every business should know before investing in AI" according to Isaac Sacolick, author of Driving Digital: The Leader’s Guide to Business Transformation through Technology.


How can you tell whether an emerging technology such as artificial intelligence is worth investing time in when there is so much hype published daily? We're all enamored of some of the amazing results, such as AlphaGo beating the champion Go player, advances in autonomous vehicles, the voice recognition performed by Alexa and Cortana, and the image recognition performed by Google Photos, Amazon Rekognition, and other photo-sharing applications.

When big, technically strong companies like Google, Amazon, Microsoft, IBM, and Apple show success with a new technology and the media glorifies it, businesses often believe these technologies are available for their own use. But is it true? And if so, where is it true?

These are the types of questions CIOs think about every time a new technology starts becoming mainstream:
  • To a CIO, is it a technology that we need to invest in, research, pay attention to, or ignore? How do we explain to our business leaders where the technology has applicability to the business and whether it represents a competitive opportunity or a potential threat?
  • To the more inquisitive employees, how do we simplify what the technology does in understandable terms and separate out the hype, today’s reality, and its future potential?
  • When select employees on the staff show interest in exploring these technologies, should we be supportive, what problem should we steer them toward, and what aspects of the technology should they invest time in learning?
  • When vendors show up marketing that their capabilities are driven by the emerging technology and that they have expert PhDs on staff supporting the product’s development, how do we evaluate what has real business potential, what is too early to leverage, and what is really hype rather than substance?

What artificial intelligence really is, and how it got here
AI technology has been around for some time, but to me it got its big start in 1968-69, when the SHRDLU natural language processing (NLP) system came out, research papers on perceptrons and backpropagation were published, and the world became aware of AI through HAL in 2001: A Space Odyssey. The next major breakthroughs can be pinned to the late 1980s, with the use of backpropagation in learning algorithms and their application to problems like handwriting recognition. AI took on large-scale challenges in the late 1990s with the ALICE chatbot and with Deep Blue beating Garry Kasparov, the world chess champion.

I got my first hands-on experience with AI in the 1990s. In graduate school at the University of Arizona, several of us were programming neural networks in C to solve image-recognition problems in medicine, astronomy, and other research areas. We experimented with various learning algorithms, techniques for solving optimization problems, and methods for making decisions from imprecise data.

For neural networks, we programmed the perceptron math by hand, looped forward through the layers of the network to produce an output, then looped backward to apply the backpropagation algorithm and adjust the weights. We then waited long periods for the network's output to stabilize.
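
To make that hand-rolled loop concrete, here is a minimal sketch in Python with NumPy (the original work was done in plain C); the toy data, network shape, and learning rate are assumptions for illustration, not the actual research code.

```python
# Minimal hand-rolled neural network: forward pass, backward pass, weight update.
# Illustrative only -- a tiny two-layer network on made-up data.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 samples, 3 features, binary targets (assumed for illustration).
X = rng.random((4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# One hidden layer of 5 units and a single output unit.
W1, b1 = rng.normal(0, 0.5, (3, 5)), np.zeros(5)
W2, b2 = rng.normal(0, 0.5, (5, 1)), np.zeros(1)
lr = 0.5

for epoch in range(5000):
    # Forward pass: loop "through the layers" to produce an output.
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: propagate the error back and adjust the weights.
    err_out = (out - y) * out * (1 - out)      # delta at the output layer
    err_h = (err_out @ W2.T) * h * (1 - h)     # delta at the hidden layer

    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0)

print("final outputs:", out.ravel())
```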

When early results failed, we were never sure whether we had applied the wrong learning algorithm, had not tuned the network optimally for the problem we were trying to solve, or simply had programming errors in the perceptron or backpropagation code.

Flash-forward to today, and it's easy to see why there has been such a dramatic leap in AI results over the last several years, thanks to several advances.

First, there's cloud computing, which makes it possible to run large neural networks on clusters of machines. Instead of looping through perceptrons one at a time and working with only one or two network layers, computation is distributed across a large array of computing nodes. This is what enables deep learning algorithms, which are essentially neural networks with a large number of nodes and layers, to process large-scale problems in reasonable amounts of time.
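
That shift from hand-rolled loops to distributed computation is now packaged by the major frameworks. The following is a minimal sketch, assuming TensorFlow 2.x and its tf.distribute API (the model and layer sizes are illustrative, not from the article): wrapping model construction in a distribution strategy is enough to spread the work across the available devices.

```python
# Hedged sketch of distributing a deep network across devices with TensorFlow 2.x.
# MirroredStrategy replicates the model on the local GPUs; a multi-machine cluster
# would use MultiWorkerMirroredStrategy instead.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Many layers with many nodes per layer -- the "deep" in deep learning.
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(512, activation="relu") for _ in range(8)]
        + [tf.keras.layers.Dense(10, activation="softmax")]
    )
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(train_data, ...) would then shard each batch across the replicas automatically.
```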

Second, there's the emergence of commercial and open source libraries and services like TensorFlow, Caffe, and Apache MXNet, which give data scientists and software developers the tools to apply machine learning and deep learning algorithms to their data sets without having to program the underlying mathematics or the parallel computing themselves. Future AI applications will also be shaped by AI on a chip or board, driven by the innovation and competition among Nvidia, Intel, AMD, and others.
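
To see how much of the mathematics these libraries absorb, here is a hedged sketch of roughly the same tiny network as the hand-coded example above, expressed with TensorFlow's Keras API; the toy data and layer sizes are again assumptions for illustration.

```python
# The same kind of tiny network as the hand-rolled example, but the library supplies
# the math: no perceptron equations or backpropagation loops are written by hand.
import numpy as np
import tensorflow as tf

X = np.random.random((4, 3)).astype("float32")        # toy inputs (illustrative)
y = np.array([0.0, 1.0, 1.0, 0.0], dtype="float32")   # toy binary targets

model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, activation="sigmoid", input_shape=(3,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy")

model.fit(X, y, epochs=500, verbose=0)   # forward pass, backpropagation, updates: all library code
print(model.predict(X, verbose=0).ravel())
```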
Read more... 

Source: InfoWorld