What sort of silicon brain do you need for artificial intelligence?

The Raspberry Pi is one of the most exciting developments in hobbyist computing today. Across the world, people are using it to automate beer making, open up the world of robotics and revolutionise STEM education in a world overrun by film students. These are all laudable pursuits. Meanwhile, what is Microsoft doing with it? Creating squirrel-hunting water robots.

Over at the firm’s Machine Learning and Optimization group, a researcher saw squirrels stealing flower bulbs and seeds from his bird feeder. The research team trained a computer vision model to detect squirrels, and then put it onto a Raspberry Pi 3 board. Whenever an adventurous rodent happened by, it would turn on the sprinkler system.
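
The article doesn’t say how the detector was deployed, so the sketch below is purely illustrative: it assumes a small TensorFlow Lite classifier running on the Pi, frames grabbed by a camera, and a relay on a GPIO pin driving the sprinkler. The model file, class index, threshold and pin number are hypothetical, not Microsoft’s actual setup.

```python
# Hypothetical sketch: classify camera frames on a Raspberry Pi and switch a
# sprinkler relay when a squirrel is detected. Model file, class index and
# GPIO pin are illustrative assumptions.
import time
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter
import RPi.GPIO as GPIO

MODEL_PATH = "squirrel_classifier.tflite"   # hypothetical model file
SQUIRREL_CLASS = 1                          # assumed index of the "squirrel" class
SPRINKLER_PIN = 17                          # assumed BCM pin wired to a relay

GPIO.setmode(GPIO.BCM)
GPIO.setup(SPRINKLER_PIN, GPIO.OUT)

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame_path):
    """Return class probabilities for one camera frame."""
    height, width = inp["shape"][1], inp["shape"][2]
    img = Image.open(frame_path).resize((width, height))
    x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]

while True:
    probs = classify("latest_frame.jpg")        # frame written by the camera
    if probs[SQUIRREL_CLASS] > 0.8:             # assumed confidence threshold
        GPIO.output(SPRINKLER_PIN, GPIO.HIGH)   # sprinkler on
        time.sleep(5)
        GPIO.output(SPRINKLER_PIN, GPIO.LOW)    # sprinkler off
    time.sleep(1)
```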

Microsoft’s sciurine aversions aren’t the point of that story; the point is its shoehorning of a convolutional neural network onto an ARM CPU. It shows how far organizations are pushing hardware to support AI algorithms. As AI continues to make headlines, researchers are working to make it increasingly competent at basic tasks such as recognizing images and speech.

As people expect more of the technology, cramming it into self-flying drones and self-driving cars, the hardware challenges are mounting. Companies are producing custom silicon and computing nodes capable of handling these workloads.

Jeff Orr, research director at analyst firm ABI Research, divides advances in AI hardware into three broad areas: cloud services, on‑device, and hybrid. The first focuses on AI processing done online in hyperscale data centre environments like Microsoft’s, Amazon’s and Google’s.

At the other end of the spectrum, he sees more processing happening on devices in the field, where connectivity or latency prohibit sending data back to the cloud.

“It’s using maybe a voice input to allow for hands-free operation of a smartphone or a wearable product like smart glasses,” he says. “That will continue to grow. There’s just not a large number of real-world examples on-device today.” He views augmented reality as a key driver here.

Finally, hybrid efforts marry both platforms to complete AI computations. This is where your phone recognizes what you’re asking it but asks cloud-based AI to answer it, for example.
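
A rough sketch of that split is below, with everything about the cloud side assumed: the endpoint, payload format and response shape are hypothetical, and the on-device check is a stand-in for a small local model.

```python
# Hypothetical sketch of the hybrid pattern: a cheap on-device check decides
# whether to ship the heavier request to a cloud AI service.
import json
import urllib.request

def heard_wake_word(audio_bytes: bytes) -> bool:
    """Stand-in for a small on-device model such as a keyword spotter."""
    return len(audio_bytes) > 0          # placeholder logic only

def answer_in_cloud(audio_bytes: bytes) -> str:
    """POST the utterance to a hypothetical cloud endpoint for the real answer."""
    req = urllib.request.Request(
        "https://example.com/assistant",                 # assumed endpoint
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["answer"]         # assumed response shape

def handle_utterance(audio_bytes: bytes) -> str:
    if not heard_wake_word(audio_bytes):  # on-device: fast, offline, low latency
        return ""
    return answer_in_cloud(audio_bytes)   # cloud: heavyweight understanding
```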

The cloud’s importance stems from the way that AI learns. AI development is increasingly moving to deep learning, which uses complex neural networks with many layers to create more accurate models.

There are two aspects to using neural networks. The first is training, where the network analyses lots of data to produce a statistical model. This is effectively the “learning” phase. The second is inference, where the neural network interprets new data to generate accurate results. Training these networks chews up vast amounts of computing power, but the training load can be split into many small tasks that run concurrently. This is why GPUs, with their huge core counts and high floating point throughput, are so good at it.
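
In framework terms, the two phases look roughly like the sketch below, a minimal PyTorch example with made-up data and layer sizes (the article doesn’t name a framework). The training loop adjusts the weights against a loss; inference then applies the frozen model to new inputs. Each batch is a bundle of independent calculations, which is what maps so well onto a GPU’s many cores.

```python
# Minimal sketch of training vs. inference (PyTorch, random stand-in data).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: fit a statistical model to lots of (input, label) examples.
for step in range(100):
    x = torch.randn(256, 64)              # a batch of 256 examples
    y = torch.randint(0, 10, (256,))      # their labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)           # the whole batch is computed in parallel
    loss.backward()                       # gradients for every weight
    optimizer.step()                      # update the model

# Inference: apply the trained model to new data, no gradients needed.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 64)).argmax(dim=1)
```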

Nevertheless, neural networks are getting bigger and the challenges are getting greater. Ian Buck, vice president of the Accelerated Computing Group at dominant GPU vendor Nvidia, says that they’re doubling in size each year. The company is creating more powerful GPU architectures to cope, but it is also changing the way it handles its maths.

“It can be done with some reduced precision,” he says. Originally, neural network training all happened in 32-bit floating point, but Nvidia has optimized its newer Volta architecture, announced in May, for 16-bit inputs with 32-bit internal mathematics.

Reducing the precision of the calculation to 16 bits has two benefits, according to Buck.

“One is that you can take advantage of faster compute, because processors tend to have more throughput at lower resolution,” he says. Cutting the precision also makes better use of the available memory bandwidth, because you’re fetching smaller amounts of data for each computation.
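
That FP16-compute, FP32-accumulate recipe is what frameworks now expose as automatic mixed precision. The sketch below shows the usual PyTorch pattern on a CUDA GPU; it illustrates the idea Buck describes rather than anything Nvidia-specific, and the model and data are the same made-up stand-ins as in the earlier sketch.

```python
# Sketch of mixed-precision training: FP16 compute with FP32 accumulation and
# loss scaling, the pattern Volta-class tensor cores are designed for.
import torch
import torch.nn as nn

device = "cuda"                                   # assumes a CUDA-capable GPU
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()              # stops small FP16 gradients vanishing

for step in range(100):
    x = torch.randn(256, 64, device=device)
    y = torch.randint(0, 10, (256,), device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():               # matmuls in FP16, sensitive ops in FP32
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()                 # scale the loss, then backprop
    scaler.step(optimizer)                        # unscale gradients and update weights
    scaler.update()
```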
