Photo: Huawei
The iPhone X has a Neural Engine as part of its A11 Bionic chip; the Huawei Kirin 970 chip has what’s called a Neural Processing Unit or NPU on it; and the Pixel 2 has a secret AI-powered imaging chip that just got activated. So what exactly are these next-gen chips designed to do?
As mobile chipsets have grown smaller and more sophisticated, they’ve started to take on more jobs, and more varied kinds of jobs. Case in point: integrated graphics. GPUs now sit alongside CPUs at the heart of high-end smartphones, handling all the heavy lifting for the visuals so the main processor can take a breather or get busy with something else.
The new breed of AI chips is very similar, only this time the designated task is recognizing pictures of your pets rather than rendering photo-realistic FPS backgrounds.
What we talk about when we talk about AI
AI, or artificial intelligence, means just what the name suggests. The scope of the term tends to shift and evolve over time, but broadly speaking it covers anything where a machine can show human-style thought and reasoning.
A person hidden behind a screen pulling levers on a mechanical robot is artificial intelligence in the broadest sense. Today’s AI is way beyond that, of course, but having a programmer code responses into a computer system is just a more sophisticated way of getting the same end result: a machine that acts like a human.
As for computer science and the smartphones in your pocket, here AI tends to be more narrowly defined. In particular it usually involves machine learning, the ability of a system to learn beyond its original programming, and deep learning, a type of machine learning that tries to mimic the human brain with many layers of computation. Those stacked layers form what are called neural networks, loosely modeled on the networks of neurons inside our heads.
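To make the “layers” idea concrete, here’s a minimal sketch, not taken from any real phone chip, of a tiny two-layer network in Python with NumPy. The sizes and weights are entirely made up; the point is simply that each layer is a bank of weighted sums feeding into the next.

```python
import numpy as np

# Toy two-layer neural network forward pass.
# Layer sizes and random weights are purely illustrative.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# Pretend input: 4 numbers describing something we want to classify.
x = rng.normal(size=4)

# Layer 1: 4 inputs -> 8 hidden units.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
# Layer 2: 8 hidden units -> 2 output scores (say, "dog" vs "not dog").
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

hidden = relu(W1 @ x + b1)   # first layer of computation
scores = W2 @ hidden + b2    # second layer produces the final scores

print(scores)
```

Training a network means nudging those weights until the output scores line up with the right answers; the chips in these phones are built to churn through exactly this kind of arithmetic quickly.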
So machine learning might be able to spot a spam message in your inbox based on spam it’s seen before, even if the characteristics of the incoming email weren’t originally coded into the filter—it’s learned what spam email is.
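As a toy illustration of that learning-from-examples idea, here’s a sketch using Python and scikit-learn. The messages and labels are invented, and a real filter trains on millions of emails, but notice that nothing about specific spammy words is hard-coded anywhere.

```python
# Toy "learned spam filter": the model picks up which words tend to
# appear in spam from labeled examples, rather than from fixed rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "cheap pills limited offer",
    "meeting moved to 3pm", "lunch tomorrow?",
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["claim your free offer"]))  # likely ['spam']
```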
Deep learning is very similar, just more advanced and nuanced, and better at certain tasks, especially computer vision. The “deep” bit refers to those many stacked layers of computation, which in turn demand a whole lot more data and smarter weighting of it. The best-known example is learning to recognize what a dog looks like from a million pictures of dogs.
Plain old machine learning could tackle the same image recognition task, but it would take longer, need more manual coding, and not be as accurate, especially as the variety of images increased. With the help of today’s superpowered hardware, deep learning (a particular approach to machine learning, remember) is much better at the job.
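For a sense of what a “deep” image recognizer looks like in code, here’s a purely illustrative sketch using PyTorch. The layer counts and sizes are arbitrary and far smaller than anything a phone’s neural engine would actually run, but the shape is the same: stacked layers that turn raw pixels into a handful of scores.

```python
# Minimal "deep" image classifier sketch (illustrative sizes only).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edges and blobs
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # later layer: combinations of shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                   # final scores: "dog" vs "not dog"
)

scores = model(torch.randn(1, 3, 64, 64))  # one random 64x64 RGB "image"
print(scores.shape)  # torch.Size([1, 2])
```

Dedicated hardware like a Neural Engine or NPU exists to run the matrix math inside models like this quickly and without draining the battery.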
Read more...
Source: Gizmodo