
Saturday, December 16, 2017

What AI can really do for your business (and what it can’t) | InfoWorld

Photo: Isaac Sacolick
"Artificial intelligence, machine learning, and deep learning are no silver bullets. A CIO explains what every business should know before investing in AI" according to Isaac Sacolick, author of Driving Digital: The Leader’s Guide to Business Transformation through Technology.


Photo: InfoWorld

How can you tell whether an emerging technology such as artificial intelligence is worth investing time in when there is so much hype being published daily? We’re all enamored with some of the amazing results, such as AlphaGo beating the champion Go player, advances in autonomous vehicles, the voice recognition performed by Alexa and Cortana, and the image recognition performed by Google Photos, Amazon Rekognition, and other photo-sharing applications.

When big, technically strong companies like Google, Amazon, Microsoft, IBM, and Apple show success with a new technology and the media glorifies it, businesses often believe these technologies are available for their own use. But is it true? And if so, where is it true?

These are the types of questions CIOs think about every time a new technology starts becoming mainstream:
  • To a CIO, is it a technology that we need to invest in, research, pay attention to, or ignore? How do we explain to our business leaders where the technology has applicability to the business and whether it represents a competitive opportunity or a potential threat?
  • To the more inquisitive employees, how do we simplify what the technology does in understandable terms and separate out the hype, today’s reality, and its future potential?
  • When select employees on the staff show interest in exploring these technologies, should we be supportive, what problem should we steer them toward, and what aspects of the technology should they invest time in learning?
  • When vendors show up claiming that their capabilities are driven by the emerging technology and that they have expert PhDs on staff supporting the product’s development, how do we evaluate what has real business potential versus services that are too early to leverage versus offerings that are hype rather than substance?
What artificial intelligence really is, and how it got there  
AI technology has been around for some time, but to me it got its big start in 1968-69, when the SHRDLU natural language processing (NLP) system came out, research papers on perceptrons and backpropagation were published, and the world became aware of AI through HAL in 2001: A Space Odyssey. The next major breakthroughs can be pinned to the late 1980s, with the use of backpropagation in learning algorithms and then their application to problems like handwriting recognition. AI took on large-scale challenges in the late 1990s with the first chatbot (ALICE) and Deep Blue beating Garry Kasparov, the world chess champion.

I got my first hands-on experience with AI in the 1990s. In graduate school at the University of Arizona, several of us were programming neural networks in C to solve image-recognition problems in medical, astronomy, and other research areas. We experimented with various learning algorithms, techniques to solve optimization problems, and methods to make decisions around imprecise data.

If we were doing neural networks, we programmed the perceptron’s math by hand, then looped through the layers of the network to produce output, then looped backward to apply the backpropagation algorithms to adjust the network. We then waited long periods of time for the system to stabilize its output.
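For readers who never had to write those loops, here is a minimal NumPy sketch of the kind of loop Sacolick describes (an illustration of the idea, not his original C code): one hidden layer, a hand-coded forward pass, and a hand-coded backpropagation step.

```python
import numpy as np

# Toy problem (XOR) solved by a hand-coded one-hidden-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(10000):
    # Forward pass: loop through the layers to produce output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: backpropagate the error and adjust the weights.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0)

print(out.round(3))   # should drift toward [0, 1, 1, 0] as training stabilizes
```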

When early results failed, we were never sure if we were applying the wrong learning algorithms, hadn’t tuned our network optimally for the problem we were trying to solve, or simply had programming errors in the perceptron or backpropagation algorithms.

Flash-forward to today, and it’s easy to see why there has been an exponential leap in AI results over the last several years, thanks to several advances.

First, there’s cloud computing, which enables running large neural networks on a cluster of machines. Instead of looping through perceptrons one at a time and working with only one or two network layers, computation is distributed across a large array of computing nodes. This makes deep learning algorithms practical: essentially neural networks with a large number of nodes and layers, capable of processing large-scale problems in reasonable amounts of time.
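As a rough sketch of what that distribution looks like to a developer today (assuming TensorFlow 2.x; the layer sizes are arbitrary), a distribution strategy replicates the same model code across whatever devices, or machines, are available:

```python
import tensorflow as tf

# MirroredStrategy replicates the model across the GPUs visible to one machine;
# MultiWorkerMirroredStrategy extends the same pattern to a cluster of nodes.
strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

# model.fit(...) now splits every batch across the available devices.
```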

Second, there’s the emergence of commercial and open source libraries and services like TensorFlow, Caffe, and Apache MXNet, which give data scientists and software developers the tools to apply machine learning and deep learning algorithms to their data sets without having to program the underlying mathematics or parallel computing themselves. Future AI applications will also be driven by AI on a chip or board, fueled by the innovation and competition among Nvidia, Intel, AMD, and others.
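And here is how little of the underlying mathematics those libraries expose. The sketch below, assuming TensorFlow's Keras API, trains a tiny network on the same kind of toy problem without a single hand-written perceptron or backpropagation loop:

```python
import numpy as np
import tensorflow as tf

# The same XOR toy problem as above; the library supplies the layers,
# the backpropagation, and any available hardware acceleration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="sigmoid", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")
model.fit(X, y, epochs=500, verbose=0)

print(model.predict(X, verbose=0).round(3))   # should approach [0, 1, 1, 0]
```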
Read more... 

Source: InfoWorld   


If you enjoyed this post, make sure you subscribe to my Email Updates!

Machine vision firm runs AI deep learning on Nvidia platform | Electronics Weekly

"MVTec Software, a Munich-based machine vision specialist, says it it now possible to run deep learning functions on embedded boards with Nvidia Pascal architecture" continues Electronics Weekly.

HALCON's deep learning now on NVIDIA Jetson boards

The deep learning inference in the latest version of the firm’s HALCON machine vision software was successfully tested on Nvidia Jetson TX2 boards based on 64-bit Arm processors.

The deep learning inference, i.e., applying the trained CNN (convolutional neural network), almost reached the speed of a conventional laptop GPU (approx. 5 milliseconds), says MVTec...
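MVTec does not publish HALCON code in the article, but the kind of figure being quoted (milliseconds per inference for a trained CNN) is easy to reproduce in a generic framework. The sketch below, which assumes TensorFlow and uses a randomly initialized MobileNetV2 purely as a stand-in network, times a single-image forward pass:

```python
import time
import numpy as np
import tensorflow as tf

# Randomly initialized MobileNetV2 as a stand-in for a trained classification CNN.
model = tf.keras.applications.MobileNetV2(weights=None, input_shape=(224, 224, 3))
image = np.random.rand(1, 224, 224, 3).astype("float32")

model.predict(image, verbose=0)               # warm-up run (graph build, allocation)

runs = 50
start = time.perf_counter()
for _ in range(runs):
    model.predict(image, verbose=0)
elapsed_ms = (time.perf_counter() - start) / runs * 1000
print(f"average inference time: {elapsed_ms:.1f} ms per image")
```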

Photo: Dr. Olaf Munkelt
Dr. Olaf Munkelt, managing director, MVTec Software, believes the rapidly growing market for embedded systems requires corresponding high-performing technologies.

“AI-based methods such as deep learning and CNNs are becoming more and more important in highly automated industrial processes. We are specifically addressing these two market requirements by combining HALCON 17.12 with the NVIDIA Pascal architecture,” said Munkelt.
Read more...

Source: Electronics Weekly


If you enjoyed this post, make sure you subscribe to my Email Updates!

Why AI Could Be Entering a Golden Age | Knowledge@Wharton - Technology

The quest to give machines human-level intelligence has been around for decades, and it has captured imaginations for far longer — think of Mary Shelley’s Frankenstein in the 19th century. Artificial intelligence, or AI, was born in the 1950s, with boom cycles leading to busts as scientists failed time and again to make machines act and think like the human brain. But this time could be different because of a major breakthrough — deep learning, where data structures are set up like the brain’s neural network to let computers learn on their own. Together with advances in computing power and scale, AI is making big strides today like never before.


Photo: Frank Chen
After years of dashed hopes, we could be on the brink of large breakthroughs in artificial intelligence for businesses thanks to deep learning, says Frank Chen of Andreessen Horowitz. 

Photo: Knowledge@Wharton

Frank Chen, a partner specializing in AI at top venture capital firm Andreessen Horowitz, makes a case that AI could be entering a golden age. Knowledge@Wharton caught up with him at the recent AI Frontiers conference in Silicon Valley to talk about the state of AI, what’s realistic and what’s hype about the technology, and whether we will ever get to what some consider the Holy Grail of AI — when machines will achieve human-level intelligence.

An edited transcript of the conversation follows.

Knowledge@Wharton: What is the state of AI investment today? Where do we stand?

Frank Chen: I’d argue that this is a golden age of AI investing. To put it in historical context, AI was invented in the mid-1950s at Dartmouth, and ever since then we’ve basically had boom and bust cycles. The busts have been so dramatic in the AI space that they have a special name — AI winter.

We’ve probably had five AI winters since the 1950s, and this feels like a spring. A lot of things are working and so there are plenty of opportunities for start-ups to pick an AI technique, apply it to a business problem, and solve big problems. We and many other investors are super-active in trying to find those companies who are solving business problems using AI.

Knowledge@Wharton: What brought us out of this AI winter?

Chen: There’s a set of techniques called deep learning that when married with big amounts of data really gets very accurate predictions. For example, being able to recognize what is in a photo, being able to listen to your voice and figure out what you’re saying, being able to figure out which customers are going to churn. The accuracy of these predictions, because of these techniques, has gotten better than it has ever gotten. And that’s really what’s creating the opportunity.

Knowledge@Wharton: What are some of the big problems that AI is solving for business?

Chen: AI is working everywhere. To take one framework, think about the product lifecycle: You have to figure out what products or services to create, figure out how to price it, decide how to market and sell and distribute it so it can get to customers. After they’ve bought it, you have to figure out how to support them and sell them related products and services. If you think about this entire product lifecycle, AI is helping with every single one of those [stages].

For example, when it comes to creating products or services, we have this fantasy of people in a garage in Silicon Valley, inventing something from nothing. Of course, that will always happen. But we’ve also got companies that are mining Amazon and eBay data streams to figure out: what are people buying? What’s an emerging category? If you think about Amazon’s private label businesses like Amazon Basics, product decisions are all data-driven. They can look to see what’s hot on the platform and make decisions like “oh, we have to make an HDMI cable, or we have to make a backpack.” That’s all data-driven in a way that it wasn’t 10 years ago.
Read more...

Source: Knowledge@Wharton


If you enjoyed this post, make sure you subscribe to my Email Updates!

Overcoming The Challenges Of Machine Learning Model Deployment | BCW - Business

Yvonne Cook, general manager at DataRobot UK, summarizes, "Our societies and economies are in transition to a future shaped by artificial intelligence (AI)."

Photo: BCW

To thrive in this upcoming era, companies are transforming themselves by using machine learning, a type of AI that allows software applications to make accurate predictions and recommend actions without being explicitly programmed.

There are three ways that companies successfully transform themselves into AI-driven enterprises, differentiating them from the companies that mismanage their use of AI:
  • They treat machine learning as a business initiative, not a technical speciality.
  • They have higher numbers of machine learning models in production.
  • They have mastered simple, robust, fast, and repeatable ways to move models from their development environment into systems that form the operations of their business.
Commercial payback from AI comes when companies deploy highly accurate machine learning models that operate robustly within the systems that support business operations.

Why Companies Struggle With Model Deployment
While hard data is scarce, anecdotal evidence suggests that it is common for companies to train many more machine learning models than they actually put into production. Both organisational and technological challenges are in play here, and success requires that both be addressed. From an organisational perspective, many companies see AI enablement as a technical speciality. This is a mistake.

AI is a business initiative. Becoming AI-driven requires that the people who currently operate and understand the business can also create tomorrow’s revenue and take responsibility for both building and maintaining the machine learning models that grow revenues. To succeed, these business people will need collaboration and support from specialists, including data scientists and the IT team.

Machine learning models must be trained on historical data, which demands the creation of a prediction data pipeline. This is an activity that involves multiple tasks, including data processing, feature engineering, and tuning. Each task, down to versions of libraries and the handling of missing values, must be exactly duplicated from the development environment to the production environment, a discipline with which the IT team is intimately familiar...
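DataRobot's own platform is not shown in the article, but the reproducibility problem it describes can be sketched with generic tooling. The example below (a scikit-learn pipeline, used here purely as an illustration) bundles imputation, scaling, encoding and the model into one artifact, so that exactly the same steps travel from development to production:

```python
import joblib
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy training data: column 0 is numeric (with a missing value), column 1 is categorical.
X = np.array([[25.0, 0.0], [np.nan, 1.0], [47.0, 0.0], [33.0, 2.0]])
y = np.array([0, 1, 0, 1])

preprocess = ColumnTransformer([
    ("numeric", Pipeline([("impute", SimpleImputer(strategy="median")),
                          ("scale", StandardScaler())]), [0]),
    ("category", OneHotEncoder(handle_unknown="ignore"), [1]),
])

# Imputation, encoding, scaling and the model all live in one versioned artifact.
model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(X, y)

joblib.dump(model, "model.joblib")            # this exact file is what production loads
served = joblib.load("model.joblib")
print(served.predict([[29.0, 1.0]]))
```

The specific library matters less than the discipline: whatever produced the predictions in development is the same serialized artifact, with the same dependency versions, that serves them in production.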

Summary
AI and machine learning offer companies an opportunity to transform their operations. IT professionals play a critical role in ensuring that the models developed by their business peers and data scientists are suitably deployed to succeed in serving predictions that optimise business processes. Automated machine learning platforms allow business people to develop the models they need to transform operations while collaborating with specialists, including data scientists and IT professionals.

Choosing an enterprise-grade automated machine learning platform will certainly make IT’s life easier. By providing guidance on organising for successful model deployment and the choice of appropriate technology, IT executives ensure their teams are recognised for their effective contribution to the company’s success as it transforms into an AI-driven enterprise.
Read more...

Source: BCW


If you enjoyed this post, make sure you subscribe to my Email Updates!

Friday, December 15, 2017

How Machine Learning Can Help Identify Cyber Vulnerabilities | Harvard Business Review - Analytics

Ravi Srinivasan, vice president of strategy and offering management at IBM Security, notes, "Putting the burden on employees isn’t the answer."

Photo: Pedro Pestana/EyeEm/Getty Images

People are undoubtedly your company’s most valuable asset. But if you ask cybersecurity experts if they share that sentiment, most would tell you that people are your biggest liability.

Historically, no matter how much money an organization spends on cybersecurity, there is typically one problem technology can’t solve: humans being human.  Gartner expects worldwide spending on information security to reach $86.4 billion in 2017, growing to $93 billion in 2018, all in an effort to improve overall security and education programs to prevent humans from undermining the best-laid security plans. But it’s still not enough: human error continues to reign as a top threat.

According to IBM’s Cyber Security Intelligence Index, a staggering 95% of all security incidents involve human error. It is a shocking statistic, and for the most part it’s due to employees clicking on malicious links, losing their mobile devices or computers (or having them stolen), or network administrators making simple misconfigurations. We’ve seen a rash of the latter problem recently, with more than a billion records exposed so far this year due to misconfigured servers. Organizations can count on the fact that mistakes will be made, and that cybercriminals will be standing by, ready to take advantage of those mistakes.

So how do organizations not only monitor for suspicious activity coming from the outside world, but also look at the behaviors of their employees to determine security risks? As the adage goes, “to err is human” — people are going to make mistakes. So we need to find ways to better understand humans, and anticipate errors or behaviors that are out of character — not only to better protect against security risks, but also to better serve internal stakeholders.

There’s an emerging discipline in security focused around user behavior analytics that is showing promise in helping to address the threat from outside, while also providing insights needed to solve the people problem. It puts to use new technologies that leverage a combination of big data and machine learning, allowing security teams to get to know their employees better and to quickly identify when things may be happening that are out of the norm.

To start, behavioral and contextual data points such as the typical location of an employee’s IP address, the time of day they usually log into the networks, the use of multiple machines/IP addresses, the files and information they typically access, and more can be compiled and monitored to establish a profile of common behaviors. For example, if an employee in the HR team is suddenly trying to access engineering databases hundreds of times per minute, it can be quickly flagged to the security team to prevent an incident.
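The article does not describe IBM's tooling at code level, but the core idea (learn a baseline of normal per-user behavior, then flag departures from it) can be sketched with a generic anomaly detector. The example below uses scikit-learn's IsolationForest, and the feature names are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-session features for one employee:
# [hour of day, database queries per minute, distinct databases touched]
baseline = np.array([
    [9, 4, 1], [10, 6, 1], [9, 5, 2], [14, 3, 1],
    [11, 7, 2], [10, 5, 1], [15, 4, 1], [9, 6, 2],
])

detector = IsolationForest(contamination=0.05, random_state=0).fit(baseline)

# An HR account suddenly hammering engineering databases in the middle of the night.
new_activity = np.array([[3, 300, 12]])
if detector.predict(new_activity)[0] == -1:   # -1 means the sample looks anomalous
    print("flag for security review")
else:
    print("within the normal behavior profile")
```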
Read more...

Source: Harvard Business Review 


If you enjoyed this post, make sure you subscribe to my Email Updates!

AI and machine learning: Looking beyond the hype | FCW.com - Comment

Photo: Erin Hawley
"In every federal agency, critical insights are hidden within the massive data sets collected over the years" reports Erin Hawley, DataRobot's vice president of public sector. 

Photo: FCW.com

But because of a shortage of data scientists in the federal government, extracting value from this data is time-consuming, if it happens at all. Yet with advances in data science, artificial intelligence (AI) and machine learning, agencies now have access to advanced tools that will transform information analysis and agency operations.

From predicting terror threats to detecting tax fraud, a new class of enterprise-grade tools, called automated machine learning, has the power to transform the speed and accuracy of federal decision-making through predictive modeling. Technologies like these that enable AI are changing the way the federal government understands and makes decisions.

To use tools like automated machine learning to their full potential to accelerate and optimize data science in the federal government, it’s important to start by understanding the terms used and what they mean.

Data science — the art of analyzing data 
Data science is a broad term, referring to the science and art of using data to solve problems. Rooted in statistics, this practice blends math, coding and domain knowledge to answer specific questions from a certain data set. Advances in computing power have transformed it from calculator-based statistical modeling into predictive algorithms that turn historical analysis into forecasts about future behavior.
Read more... 

Source: FCW.com


If you enjoyed this post, make sure you subscribe to my Email Updates!

There is a new chapter in Harry Potter's story — and it was written by artificial intelligence | Business Insider

  • An artificial intelligence tool read all of the "Harry Potter" books and automatically generated a new, self-written chapter out of what it learned.
  • The output text was mostly raw and incomprehensible, so a few writers intervened to make it understandable.
  • The writing is mostly weird and borderline comical, but the machine managed to partly reproduce J.K. Rowling's writing style.

There is a new chapter in Harry Potter's story, but it wasn't written by the original author, J.K. Rowling. Instead, an artificial intelligence (AI) algorithm did most of the hard work, The Verge first reported.

A young Daniel Radcliffe in one of the Harry Potter movies.
Photo: Warner Brothers

The people over at Botnik Studio fed all of the original novels from the Harry Potter saga into a predictive-text tool, and in return it generated a three-page chapter titled "Harry Potter and the Portrait of What Looked Like a Large Pile of Ash."

The AI churned out the bulk of the text, but in order to transform it from your typical predictive-text word salad to something actually intelligible, a number of writers were involved. 

Chief among them is Jamie Brew, a former writer for The Onion and Clickhole, who had already worked on similar automated text-prediction writing on Tumblr, where his objectdreams page includes procedurally generated, fictional work on X-Files, grammar rules, and even Craigslist ads...

The writing is as weird as it is fun, and it might be worth a few minutes of your time; if you agree, you can read the whole chapter here.  
Read more...

Source: Business Insider


If you enjoyed this post, make sure you subscribe to my Email Updates!

Artificial intelligence helps accelerate progress toward efficient fusion reactions | Princeton University

Before scientists can effectively capture and deploy fusion energy, they must learn to predict major disruptions that can halt fusion reactions and damage the walls of doughnut-shaped fusion devices called tokamaks. Timely prediction of disruptions, the sudden loss of control of the hot, charged plasma that fuels the reactions, will be vital to triggering steps to avoid or mitigate such large-scale events.


"Today, researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University are employing artificial intelligence to improve predictive capability" argue John Greenwald, Science Editor.

Image of plasma disruption in experiment on JET, left, and disruption-free experiment on JET, right. Training the FRNN neural network to predict disruptions calls for assigning weights to the data flow along the connections between nodes. Data from new experiments is then put through the network, which predicts “disruption” or “non-disruption.” The ultimate goal is at least 95 percent correct predictions of disruption events.
Photo: courtesy of Eliot Feibush
Photo: William Tang
Researchers led by William Tang, a PPPL physicist and a lecturer with the rank of professor in astrophysical sciences at Princeton, are developing the code for predictions for ITER, the international experiment under construction in France to demonstrate the practicality of fusion energy. 

Form of ‘deep learning’ 
The new predictive software, called the Fusion Recurrent Neural Network (FRNN) code, is a form of “deep learning” — a newer and more powerful version of modern machine learning software, an application of artificial intelligence. “Deep learning represents an exciting new avenue toward the prediction of disruptions,” Tang said. “This capability can now handle multi-dimensional data.”

FRNN is a deep-learning architecture built on recurrent neural networks, which have proven to be the best way to analyze sequential data with long-range patterns. Members of the PPPL and Princeton machine-learning team are the first to systematically apply a deep learning approach to the problem of disruption forecasting in tokamak fusion plasmas.
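The FRNN code itself is not reproduced here, but the general shape of a recurrent disruption classifier can be sketched in a generic framework. The example below assumes Keras, and the number of diagnostic channels, the sequence length and the layer sizes are placeholders rather than PPPL's actual configuration:

```python
import numpy as np
import tensorflow as tf

timesteps, channels = 200, 8                  # assumed: 8 diagnostic signals per time slice
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(timesteps, channels)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of "disruption"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-ins for labeled shots: 1 = disruptive, 0 = disruption-free.
X = np.random.rand(128, timesteps, channels).astype("float32")
y = np.random.randint(0, 2, size=(128, 1)).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

print(model.predict(X[:1], verbose=0))        # disruption probability for one shot
```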

Chief architect of FRNN is Julian Kates-Harbeck, a graduate student at Harvard University and a DOE-Office of Science Computational Science Graduate Fellow. Drawing upon expertise gained while earning a master’s degree in computer science at Stanford University, he has led the building of the FRNN software...

Princeton’s Tiger cluster 
Princeton University’s Tiger cluster of modern GPUs was the first to conduct deep learning tests, using FRNN to demonstrate the improved ability to predict fusion disruptions. The code has since run on Titan and other leading supercomputing GPU clusters in the United States, Europe and Asia, and has continued to show excellent scaling with the number of GPUs engaged.

The researchers seek to demonstrate that this powerful predictive software can run on tokamaks around the world and eventually on ITER.

Also planned are enhancements to the speed of disruption analysis for the increasing problem sizes associated with the larger data sets gathered prior to the onset of a disruptive event.

Support for this project has come to date primarily from the Laboratory Directed Research and Development funds that PPPL provides.

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, New Jersey, is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. PPPL is managed by Princeton for the U.S. Department of Energy’s Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.
Read more... 

Source: Princeton University


If you enjoyed this post, make sure you subscribe to my Email Updates!

Artificial intelligence just discovered two new exoplanets | Popular Science - Technology

Mary Beth Griggs, assistant editor at Popular Science, contributed research to this report.


"This is what happens when you turn machine learning loose on the cosmos" says Rob Verger, assistant tech editor at Popular Science (on Twitter as @robverger).


The Kepler-90 system; AI helped discover the planet called Kepler-90i.
Photo: NASA/Wendy Stenze

A machine learning technique called a neural network has identified two new exoplanets in our galaxy, NASA scientists and a Google software engineer announced today, meaning that researchers now know about two new worlds thanks to the power of artificial intelligence.

Discovering new exoplanets—as planets outside our solar system are called—is a relatively common occurrence, and a key instrument that scientists use to identify them is the Kepler Space Telescope, which has already spotted a confirmed 2,525 exoplanets. But what’s novel about this announcement is that researchers used an AI system to spot these two new worlds, now dubbed Kepler-90i and Kepler-80g. The planet known as 90i is especially interesting to astronomers, as it brings the total number of known planets orbiting that star to eight, a tie with our own solar system. The average temperature on 90i is thought to be quite balmy: more than 800 degrees Fahrenheit.

Just as exoplanet discoveries are common, so too are neural networks, which are software that learns from data (as opposed to a program that has had rules programmed into it). Neural networks power language translation on Facebook, the FaceID system on the new iPhone X, and image recognition on Google Photos. A classic example of how a neural network learns is to consider pictures of cats and dogs: if you feed labeled images of cats into a neural network, it should later be able to identify new images that it thinks have cats in them, because it has been trained to do so.
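As a toy version of that cat-versus-dog training loop (purely illustrative, with random numbers standing in for real photos), a classifier is fit on labeled examples and then asked about an image it has never seen:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fake 16x16 grayscale "photos": label 1 = cat, 0 = dog (random pixels, illustration only).
cats = rng.normal(loc=0.7, scale=0.1, size=(50, 256))
dogs = rng.normal(loc=0.3, scale=0.1, size=(50, 256))
X = np.vstack([cats, dogs])
y = np.array([1] * 50 + [0] * 50)

classifier = LogisticRegression(max_iter=1000).fit(X, y)   # "training" on labeled images

new_photo = rng.normal(loc=0.7, scale=0.1, size=(1, 256))  # an unseen, cat-like image
print("cat" if classifier.predict(new_photo)[0] == 1 else "dog")
```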

“Neural networks have been around for decades, but in recent years they have become tremendously successful in a wide variety of problems,” Christopher Shallue, a senior software engineer at Google AI, said during a NASA teleconference Thursday. “And now we’ve shown that neural networks can also identify planets in data collected by the Kepler Space Telescope.”

Astronomers need tools like telescopes to search for exoplanets, and artificial intelligence researchers need vast amounts of labeled data. In this case, Shallue trained the neural network using 15,000 labeled signals they already had from Kepler. Those signals, called light curves, are measures of how a star’s light dips when a planet orbiting it passes between the star and Kepler’s eye, a technique called the transit method. Of the 15,000 signals, about 3,500 were light curves from a passing planet, and the rest were false positives—light curves made by something like a star spot, but not an orbiting planet. That was so the neural network could learn the difference between light curves made by passing planets and signals from other phenomena.
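The Google model itself is not listed in the article, but the setup it describes (a network trained on labeled light curves to separate planets from false positives) can be sketched roughly as follows, with synthetic curves standing in for the 15,000 Kepler signals and an assumed curve length:

```python
import numpy as np
import tensorflow as tf

length = 201                                   # assumed samples per folded light curve
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 5, activation="relu", input_shape=(length, 1)),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of a transiting planet
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-ins, with roughly the 3,500-in-15,000 positive rate described above.
curves = np.random.rand(400, length, 1).astype("float32")
labels = (np.random.rand(400, 1) < 0.23).astype("float32")
model.fit(curves, labels, epochs=2, batch_size=32, verbose=0)

print(model.predict(curves[:1], verbose=0))    # planet probability for one light curve
```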
Read more...

Source: Popular Science


If you enjoyed this post, make sure you subscribe to my Email Updates!

Tuesday, December 12, 2017

Making a career change? Get a comprehensive tour of computer science with these online courses | Mashable

"If you thought you missed your chance to major in computer science when you opted for art history in college, there's good news after all" says Mashable.

There are such things as second chances, and thanks to the influx of online learning, gaining a new skill set won't require you to dip into your savings. This Computer Science training is just $39 — that's equivalent to just 4.8 months of Netflix.

134 hours of all things robots and computer tech.
Photo: Pexels
The Computer Science Advancement Bundle features eight classes that will help you make a career in tech, no matter what you do now. Here's a breakdown of each course:

First, learn how to code 
Break Away: Programming And Coding Interviews
Photo: Pexels
A great introduction to tech jobs, this course will walk you through the job interview process for programming and coding careers. The team behind this course has conducted hundreds of interviews at Google and Flipkart, so they know what they're talking about and will give you the heads up on the kind of programming problems that might come up in an interview.

The Fintech Omnibus: Theory and Practice in Python, R, and Excel 
The Fintech Omnibus will walk you through risk modeling, factor analysis, numerical optimization, and linear and logistic regression using real models and examples. You'll learn a ton about value-at-risk, eigenvalue decomposition, modeling risk with covariance matrices, and the method of least squares...
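To make a few of those terms concrete, here is a small NumPy sketch (toy numbers, not course material) that touches a covariance matrix, its eigenvalue decomposition, a parametric value-at-risk estimate, and a least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(loc=0.0005, scale=0.01, size=(250, 2))   # a year of daily returns, 2 assets
weights = np.array([0.6, 0.4])

cov = np.cov(returns, rowvar=False)                  # covariance matrix of asset returns
eigenvalues, eigenvectors = np.linalg.eigh(cov)      # eigenvalue decomposition of that matrix

portfolio_sigma = np.sqrt(weights @ cov @ weights)   # portfolio volatility from the covariance
var_95 = 1.645 * portfolio_sigma                     # one-day 95% parametric value-at-risk (as a fraction)

# Method of least squares: regress asset 1's daily returns on asset 2's.
slope, intercept = np.polyfit(returns[:, 1], returns[:, 0], 1)

print(f"95% one-day VaR: {var_95:.4%}  largest risk eigenvalue: {eigenvalues[-1]:.2e}  slope: {slope:.2f}")
```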

Get in on Big Data and Machine Learning 
The Big Data Omnibus: Hadoop, Spark, Storm, and QlikView
Photo: Pexels
After these 120 lectures on big data, you'll be able to install Hadoop in different modes, manipulate data in Spark, run a Storm topology in multiple modes, and use the QlikView In-memory data model. Using these tools, you'll glean insights from enormous amounts of data in the way both major and minor corporations do.


Machine Learning and TensorFlow on the Google Cloud 
TensorFlow is an open source software library for machine intelligence. Using TensorFlow and Google Cloud, you'll learn all about neural networks and machine learning principles...  

Get the Computer Science Advancement Bundle now for just $39 — a massive 97% discount off its $1,492 retail price.  
Read more...

Source: Mashable


If you enjoyed this post, make sure you subscribe to my Email Updates!