
Wednesday, August 28, 2019

Aspects of Machine Learning on the Edge | Machine Learning

John Fogarty, advisory software engineer at Base2 Solutions, says, "Machine learning (ML) is hard."

Making it work within the confined environment of an embedded device can easily become a quagmire unless we consider, and frequently revisit, the design and deployment aspects crucially affected by ML requirements. A bit of upfront planning makes the difference between project success and failure.

For this article, our focus is on building commercial-grade applications with significant, or even dominant, ML components. Edge devices, especially ML enabled ones, don’t operate in isolation; they form just one element of a complex automated pipeline.

You have a device, or better yet an idea for one, that will perform complex analytics, usually in something close to real time, and deliver results as network traffic, user data displays, machine control, or all three. The earlier you are in the design process, the better positioned you'll be to adjust your hardware and software stack to match the ML requirements. The available tools (especially at the edge) are neither mature nor general purpose. The more flexible you are, the better your odds of building a viable product.

Let's start by describing a hypothetical device, and we'll work through some ML considerations of its design. As we discuss the design, we'll visit and revisit the DevOps automations that go hand in hand with these other engineering processes...

What else will affect our choices here? The nature of the inputs matters a great deal. If our camera visualizes traffic on a busy roadway, or people in an airport terminal, we can expect almost every frame to contain something of interest; if we're monitoring the bottom floor of a parking garage, not so much.
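One practical consequence of this difference: when most frames are empty, a cheap pre-filter can decide whether a frame is worth handing to the (comparatively expensive) ML model at all. The sketch below illustrates one such gate using simple frame differencing; the function name and thresholds are illustrative assumptions, not something specified in the article.

```python
import numpy as np

def frame_has_activity(prev_frame, frame, pixel_delta=25, min_changed_fraction=0.01):
    """Cheap motion gate for an edge camera pipeline.

    Returns True when enough pixels differ from the previous frame
    to justify running full ML inference. Both frames are expected
    to be grayscale uint8 arrays of the same shape. The thresholds
    (pixel_delta, min_changed_fraction) are illustrative and would
    need tuning per deployment (busy roadway vs. quiet garage).
    """
    # Widen to int16 so the subtraction can't wrap around uint8.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed_fraction = (diff > pixel_delta).mean()
    return changed_fraction >= min_changed_fraction
```

On the roadway camera this gate passes nearly every frame, so it costs a little and saves nothing; on the empty garage floor it lets the device skip most inference calls, which directly affects the power and compute budget of the design.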