Industrial machines generate billions of data points every year. Hidden somewhere in all that noise are the critical signals that point to impending partial or complete machine failure.
Identifying the series of data points that indicates your multi-million-dollar machine is about to break down can help you prevent downtime, additional costs, and long-term damage. Without an automated solution, however, it’s like finding a needle in an unimaginably large haystack.
With effective insights into the health of machines, manufacturers can solve issues before they get serious, potentially saving millions of dollars a year in repair costs and extending machine lifetimes. The Industrial IoT market is worth $11 trillion, and predictive maintenance can help companies save $630 billion over the next 15 years. But traditional approaches to data and data science do little to help companies realize these savings and benefits.
So what are the primary challenges that industry faces when it comes to predictive maintenance? And how will fully-automated predictive maintenance improve services to clients and the bottom line for companies?
Finding the Signal in the Noise
It is well understood that maintenance done at the right time reduces costs. According to an example analysis in a PMMI report, regular preventative maintenance carried out on a 10-year-old air compressor valued at $32,900 can extend the machine’s life by up to four years and save up to $6,359. These savings can, however, be increased with targeted predictive maintenance, based on effective predictive analytics, simply because there’s no more guesswork involved: an engineer can say with certainty which parts will need replacing and when.
How does this work? The fact is, industrial machines don’t just stop working. Failure is almost always the result of a chain of events. As one problem leads to another, a digital signal is created — think of it like symptoms of an illness. For machines connected to the IIoT, these symptoms are scattered over millions of data points, come from various different sensors at different times, and are stored in separate silos. Finding the critical signal amidst these millions of scattered data points is not humanly possible.
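To make the idea concrete, here is a minimal sketch of why correlating several sensors matters. The sensor names, readings, and thresholds are all hypothetical; the point is that a failure signature only emerges when anomalies from independent sensors cluster in time, which no single sensor reveals on its own.

```python
def anomalies(readings, threshold):
    """Return the timestamps where a single sensor exceeds its threshold."""
    return {t for t, value in readings if value > threshold}

def correlated_events(sensor_anomalies, window, min_sensors):
    """Find timestamps where at least `min_sensors` sensors flagged an
    anomaly within `window` time units of each other."""
    events = []
    all_times = sorted(t for times in sensor_anomalies.values() for t in times)
    for t in all_times:
        hits = sum(
            1 for times in sensor_anomalies.values()
            if any(abs(t - u) <= window for u in times)
        )
        if hits >= min_sensors:
            events.append(t)
    return events

# Hypothetical streams: (timestamp, reading) pairs from three sensors.
vibration = [(1, 0.2), (5, 0.9), (9, 0.3)]
temperature = [(2, 60), (6, 95), (10, 62)]
pressure = [(3, 1.0), (7, 2.4), (11, 1.1)]

flagged = {
    "vibration": anomalies(vibration, 0.8),
    "temperature": anomalies(temperature, 90),
    "pressure": anomalies(pressure, 2.0),
}
# Anomalies at t=5, 6, and 7 cluster within a 2-unit window across all
# three sensors — the "chain of events" a single-sensor view would miss.
print(correlated_events(flagged, window=2, min_sensors=3))  # → [5, 6, 7]
```

Each sensor on its own shows only one isolated spike; only when the streams are joined does the cascading pattern appear, which is why siloed data makes these signatures so hard to find by hand.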
While many factories now have teams of data scientists in place to analyze this data and diagnose issues, current manual methodologies are simply not getting consistent results.
As Jon Sobel points out in an article for Techonomy, “Manufacturers generate data across massive, distributed operations, but […] until we can make use of the data we already have, collecting ever more data just buries us deeper.”
Unlike generic data patterns, machine data patterns change constantly, so prediction models quickly become obsolete. And because failure can occur at different points in a process, monitoring a single sensor on a machine doesn’t give a complete picture. A study from Gartner points out that 72 percent of the manufacturing industry’s data goes unused due to the complexities involved with variables such as pressure, temperature, and time.
The reality is, manual monitoring and human-led analytics are not only inefficient but also ineffective — especially when you consider that only around 25 percent of these signals truly denote a major failure event, while the rest are false positives.
Take an industrial washing machine operation, for example. In this scenario, there are a number of challenges for the data science team monitoring operations. With around 75 different sensors per machine, each providing millisecond-range feedback, there is a huge amount of data to sift through.
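A quick back-of-the-envelope calculation shows how fast this adds up. Assuming "millisecond-range feedback" means roughly one reading per millisecond per sensor (an assumption for illustration, not a figure from the source):

```python
# Rough estimate of the data volume described above:
# 75 sensors per machine, each reporting about once per millisecond.
sensors_per_machine = 75
samples_per_second = 1_000          # one sample per millisecond (assumed)
seconds_per_day = 24 * 60 * 60      # 86,400

datapoints_per_day = sensors_per_machine * samples_per_second * seconds_per_day
print(f"{datapoints_per_day:,} data points per machine per day")
# → 6,480,000,000 data points per machine per day
```

Roughly 6.5 billion readings a day from a single machine, before a whole fleet is even considered — far beyond what any human-led analysis can sift through.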