Real or virtual? The two faces of machine learning

There's a lot of sci-fi-level buzz lately about smart machines and software bots that will use big data and the Internet of things to become autonomous actors: scheduling your personal tasks, driving your car or a delivery truck, managing your finances, monitoring and adjusting your medical care, building and perhaps even designing cars and smartphones, and of course connecting you to the products and services they decide you should use.

That's Silicon Valley's path for artificial intelligence/machine learning, predictive analytics, big data, and the Internet of things. But there's another path that gets much less attention: the real world. It too uses AI, analytics, big data, and the Internet of things (aka the industrial Internet in this context), though not in the same manner. Whether you're looking to choose a next-frontier career path or simply understand what's going on in technology, it's important to note the differences.


A recent conversation with Colin Parris, the chief scientist at manufacturing giant General Electric, crystallized in my mind the different paths that the combination of machine learning, big data, and IoT is on. It's a difference worth understanding.

In the real world -- that is, the world of physical objects -- computational advances are focused on perfecting models of those objects and the environments in which they operate. Engineers and scientists are trying to build simulacra so that they can model, test, and predict from those virtual versions what will happen in the real world.

As Parris explained, the goal of these simulacra is to predict when (and what) maintenance is needed, so airplanes, turbines, and so forth aren't taken offline for regular inspections and maintenance checks. Another goal is to predict failure before it happens, so airplanes don't lose their engines or catch fire in midflight, turbines don't overheat and collapse, and so forth.
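To make the predictive-maintenance idea concrete, here is a minimal sketch of the underlying logic: flag a machine for service when its recent sensor readings drift too far from a healthy baseline, rather than waiting for a fixed inspection schedule. The function, readings, and threshold are all illustrative assumptions, not GE's actual method.

```python
# Hypothetical predictive-maintenance check: compare recent sensor
# readings against a healthy baseline and flag statistically unusual
# drift. Names, units, and thresholds are invented for illustration.
from statistics import mean, stdev

def needs_maintenance(baseline, recent, z_threshold=3.0):
    """Return True if the mean of recent readings deviates from the
    healthy baseline by more than z_threshold standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    z = abs(mean(recent) - mu) / sigma
    return z > z_threshold

healthy = [70.1, 69.8, 70.3, 70.0, 69.9, 70.2]  # baseline bearing temps (deg C)
drifting = [74.5, 75.1, 75.8]                    # recent readings trending hot
print(needs_maintenance(healthy, drifting))      # drift flagged before failure
```

A real system would use far richer models over many correlated sensors, but the principle is the same: catch the failure signature before the engine catches fire.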


Those are long-held goals of engineering simulation; modern computing has made those simulacra accurate enough to serve as virtual twins of the real thing. Greater computing power, big data storage and processing, and device connectivity via sensors, local processors, and networks (the industrial Internet) have made these virtual twins increasingly practical. That means less guesswork ("extrapolation," in engineering parlance) and more certainty, which in turn means fewer high-cost failures and fewer costly planned service outages for inspections.
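The core of the virtual-twin idea can be sketched in a few lines: run the engineering model with the same inputs the physical machine actually saw, then measure the gap between predicted and measured output. The toy turbine model and all numbers below are invented for illustration, not any vendor's actual physics.

```python
# Hypothetical "virtual twin" divergence check. A growing gap between
# the twin's predictions and real measurements suggests the model (or
# the machine itself) has drifted. Illustrative physics only.

def twin_power_model(wind_speed):
    """Toy turbine model: power rises with the cube of wind speed,
    capped at a rated output of 2000 kW (an invented figure)."""
    RATED_KW = 2000.0
    return min(0.5 * wind_speed ** 3, RATED_KW)

def twin_divergence(wind_speeds, measured_kw):
    """Mean absolute gap (kW) between twin predictions and reality."""
    gaps = [abs(twin_power_model(v) - m)
            for v, m in zip(wind_speeds, measured_kw)]
    return sum(gaps) / len(gaps)

speeds = [8.0, 10.0, 12.0]        # observed wind speeds (m/s)
measured = [250.0, 495.0, 860.0]  # kW actually produced
print(round(twin_divergence(speeds, measured), 1))  # prints 5.0
```

The smaller that divergence stays, the more the twin can be trusted for prediction instead of extrapolation.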

There's another goal, made possible only recently by those industrial Internet technology advances: machine-to-machine learning. Parris' example was a wind farm. Old turbines could share their experience and status with new ones, so the new ones could adjust their models based on that local experience, and validate their own responses against the experiences of other turbines before making adjustments or signaling an alarm.
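The validation step Parris describes can be sketched simply: before escalating an alarm, a new turbine checks whether established peers on the same farm report similar readings under comparable conditions. The function, tolerance, and readings below are illustrative assumptions, not GE's protocol.

```python
# Hypothetical machine-to-machine validation: suppress a local alarm
# if peer turbines report similar readings (the condition is probably
# environmental, not a fault); escalate only on genuine outliers.

def validate_alarm(local_reading, fleet_readings, tolerance=2.0):
    """Return True (escalate) if the local reading falls outside
    `tolerance` of what peer turbines typically report."""
    if not fleet_readings:
        return True  # no peer data available: trust the local model
    fleet_avg = sum(fleet_readings) / len(fleet_readings)
    return abs(local_reading - fleet_avg) > tolerance

peers = [68.2, 69.0, 68.7]          # vibration levels from older turbines
print(validate_alarm(68.5, peers))  # within fleet norms: no alarm (False)
print(validate_alarm(75.0, peers))  # genuine outlier: escalate (True)
```

The design choice worth noting is that peer data is used to reduce false alarms, not to override a turbine that has no peers to consult.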

The same notions and advances anchor the self-driving car efforts, which have long roots in robotics and AI work at Carnegie Mellon University, MIT, IBM Research, and other organizations. (I was editing papers on these topics 30 years ago at IEEE!) But they have become more feasible thanks to those advances in computing, networking, big data analytics, and sensors.


All of these industrial Internet and robotics notions rely on highly accurate models and measurements: the more perfect, the better. That's engineering in a nutshell.

Then there's the other approach to virtual assistants, bots, and recommendation engines.

