
Why Dedicated Machine Learning Hardware Makes Sense for Your Next Smartphone

In order to take our phones into the future, we need a new type of co-processor dedicated to machine learning.
Image: Laura Hutton/Shutterstock

Don't be surprised if Apple touts a custom machine learning chip as a key feature during its iPhone 8 reveal this fall. A report from Bloomberg suggests that Apple is working on a dedicated chip to power AI on its mobile devices. It makes a lot of sense, and I'd be shocked if Apple were the only one doing this. Dedicated machine learning circuitry is likely going to be an important part of all future mobile system-on-chip (SoC) architectures. It's a matter of when, not if.

Apple debuted Core ML at its developer conference last week. Image: Apple

Mobile SoCs combine several individual processors and coprocessors, with blocks of circuitry dedicated to specific tasks. Mobile computing is all about power use, and building a circuit to process image data from the camera sensor or decode compressed digital audio simply lets the phone do those things far more efficiently than running them on the CPU or graphics processor (GPU).

Take motion coprocessors, for example. As phones became packed with sensors, phone makers began adding dedicated chips to analyze the data from those sensors, saving battery life and allowing our phones to detect motion all the time, even in sleep mode. Apple developed its own M7 motion coprocessor in 2013, debuting it with the iPhone 5s. It has since developed several newer versions, eventually incorporating the M9 motion chip directly into the main A9 SoC in 2015.

Android phones have something similar: dedicated digital signal processors (DSPs) for handling sensor data began appearing in Android phones back in 2013 with the Motorola X8 system. These days, virtually all Android phones ship with SoCs that include built-in circuitry dedicated to sensor data processing.

The benefits of this dedicated hardware are obvious. Our phones can "listen" to many of their sensors all the time with very little battery drain. They can detect when we pick them up and put them back down. They can hear us say "OK Google" or "Hey Siri" even while locked. They know when they're in a bag or a pocket and keep the screen off. They can count our steps all day long.
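
To get a feel for what that coprocessor buys developers, here's a minimal Swift sketch using Apple's Core Motion framework. CMPedometer hands back step counts the motion chip has been tallying in the background; the app never touches the hardware directly:

```swift
import CoreMotion

// Query today's step count. On supported iPhones, this data is collected
// continuously by the motion coprocessor, even while the app isn't running.
let pedometer = CMPedometer()

if CMPedometer.isStepCountingAvailable() {
    let startOfDay = Calendar.current.startOfDay(for: Date())
    pedometer.queryPedometerData(from: startOfDay, to: Date()) { data, error in
        if let steps = data?.numberOfSteps {
            print("Steps so far today: \(steps)")
        }
    }
}
```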

The Next Battlefield: Machine Learning

But the next big battlefield in computing is machine learning. It's the foundation of artificial intelligence, but it's useful for so much more. The ability of computers to quickly analyze large sets of data, find relationships within them, and act on what they find is going to change everything. From the image analysis needed for augmented reality, self-driving cars, and face recognition to detecting speech and translating voices, we've barely scratched the surface of what machine learning techniques can accomplish. It doesn't matter whether you're talking about a computer in your pocket, your car, your laptop, or a big server farm: machine learning is The Next Big Thing.

Just a few of the ways machine learning is used on photos. Image: Apple

Just as the proliferation of phone sensors a few years ago necessitated dedicated hardware for optimal performance and power use, we're now in the early days of really useful machine learning and AI, and the same pattern is about to repeat.

Consider the comments Apple CEO Tim Cook made this February to The Independent. Speaking about AR, he said, "I regard it as a big idea like the smartphone. The smartphone is for everyone, we don't have to think the iPhone is about a certain demographic, or country or vertical market: it's for everyone. I think AR is that big, it's huge." At the core of making augmented reality work well, you'll find the fundamentals of machine learning.

The upcoming Google Lens is just the latest expression of Google's AI and machine learning efforts. Image: Google

Obviously, Google sees things the same way. It spent the most time at its Google I/O developer conference talking about AI and machine learning, touting new projects like Google Lens. Point your camera at a marquee, and it will recognize the band and play its music, or add the show's date and location to your calendar. Google released a set of open-source machine learning tools called TensorFlow two years ago. Last year, it unveiled custom server hardware to make those tasks faster: the TPU (Tensor Processing Unit). This year, Google showed off a second-generation TPU that's much faster. And most importantly, it announced TensorFlow Lite: code and APIs for running machine learning tasks natively on mobile devices, without uploading data to the cloud.
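
TensorFlow Lite's exact shape was still settling when it was announced, but the idea is easy to sketch. Here's roughly what on-device inference looks like with the TensorFlowLiteSwift interpreter API Google later shipped; the model file name and the input preprocessing are placeholders:

```swift
import Foundation
import TensorFlowLite  // the TensorFlowLiteSwift library, which arrived after this article

// A rough sketch of fully on-device inference: load a bundled .tflite model,
// feed it raw input bytes, and read back the output tensor. "model.tflite"
// and the shape of inputData are assumptions.
func runInference(on inputData: Data) throws -> Tensor {
    guard let modelPath = Bundle.main.path(forResource: "model", ofType: "tflite") else {
        throw NSError(domain: "InferenceDemo", code: 1)
    }
    let interpreter = try Interpreter(modelPath: modelPath)
    try interpreter.allocateTensors()              // size the input/output tensors
    try interpreter.copy(inputData, toInputAt: 0)  // bytes must match the input shape
    try interpreter.invoke()                       // runs locally; nothing leaves the phone
    return try interpreter.output(at: 0)
}
```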

Just last week, Apple demonstrated many of the new features coming in iOS 11 this fall, a number of which rely on machine learning (like the improvements to Siri and Photos). For developers, iOS 11 will include Core ML, a set of APIs for using machine learning in their apps. There's also ARKit for building augmented reality apps, which likewise leans on machine learning algorithms.
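
Core ML's developer story is simple: drop a trained model into an Xcode project and call it like any other class. A minimal sketch, assuming a hypothetical image classifier named FlowerClassifier and running it through the new Vision framework:

```swift
import CoreML
import Vision

// Classify an image with a bundled Core ML model. FlowerClassifier is a
// hypothetical model; Xcode generates its Swift class from a .mlmodel file.
func classify(_ image: CGImage) throws {
    let model = try VNCoreMLModel(for: FlowerClassifier().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        print("Saw \(top.identifier) (confidence: \(top.confidence))")
    }
    // All of this inference happens on the device itself.
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```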

On current phones, TensorFlow Lite and Core ML functions are carried out by the CPU and GPU. That works, but it's hardly ideal. To really make machine learning code fly without eating your battery the way whales eat krill, we're going to need specialized processors. Can there be any doubt that mobile chip makers, whether Apple, Qualcomm, Samsung, or even Google itself, are working on hardware dedicated to speeding up machine learning on mobile devices? In a couple of years, promoting the machine learning performance of a new phone or tablet will likely be as commonplace as bragging about CPU and GPU speed is today. That's not a contest Apple or Google is likely to forfeit.
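
For what it's worth, this is exactly the kind of knob Apple later exposed: a Core ML configuration flag (added in iOS 12, after this piece ran) that tells the framework which silicon it may dispatch a model to:

```swift
import CoreML

// A sketch using MLModelConfiguration, an API Apple added after this article.
// The app declares which silicon Core ML may use; the framework then routes
// work to the CPU, the GPU, or dedicated ML hardware as available.
func loadModel(at url: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // CPU, GPU, and (on later chips) the Neural Engine
    return try MLModel(contentsOf: url, configuration: config)
}
```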
