What do you call big data on steroids? The Internet of Things
While big data projects may seem a complicated beast to many IT professionals, working on an Internet of Things (IoT) project will likely make them seem simple by comparison.
The sheer velocity required for IoT projects is immense. Those who have worked on big data projects will be familiar with the term “expanding velocity demands”, usually raised in relation to data storage and a system’s ability to handle an ever-increasing influx of data.
Entire architectures and technologies, Hadoop for example, have been created in response, enabling the storage and processing of very large data volumes.
When working on an IoT project, however, organisations need to keep in mind not only real-time storage requirements, but also the crucial need to enable real-time analysis and decision-making.
The velocity and volume of IoT data will make today’s big data examples pale in comparison. Twitter, for example, is often cited as a source of big data, with the number of tweets posted each day reaching hundreds of millions.
In contrast, for IoT, companies need to be able to ingest hundreds of thousands, or even millions, of events per second from their devices.
These organisations are looking for examples and use cases in the areas of predictive maintenance and servitisation. This means they are aiming to architect for real-time predictive analytics, with the ability to trigger processes within seconds of a critical pattern being detected.
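To make the idea of triggering a process seconds after a critical pattern is detected more concrete, here is a minimal sketch of a streaming detector. Everything in it is an illustrative assumption, not from the article: the device name, the rolling-average window, and the critical threshold are all hypothetical, and in a real deployment the trigger would hand off to a maintenance workflow or message queue rather than record an alert locally.

```python
from collections import deque

class PatternDetector:
    """Hypothetical per-device detector: fires when the rolling average
    of the last `window_size` readings exceeds `critical_avg`."""

    def __init__(self, window_size=5, critical_avg=80.0):
        self.readings = deque(maxlen=window_size)  # fixed-size rolling window
        self.critical_avg = critical_avg
        self.alerts = []

    def ingest(self, device_id, value):
        """Ingest one event and evaluate the pattern immediately,
        rather than letting the data rest for later batch analysis."""
        self.readings.append(value)
        if len(self.readings) == self.readings.maxlen:
            avg = sum(self.readings) / len(self.readings)
            if avg > self.critical_avg:
                self.trigger(device_id, avg)

    def trigger(self, device_id, avg):
        # Placeholder action: in production this might start a
        # predictive-maintenance workflow within seconds of detection.
        self.alerts.append((device_id, round(avg, 1)))

# Usage: sensor readings drifting upward past the critical threshold.
detector = PatternDetector(window_size=3, critical_avg=90.0)
for reading in [85, 88, 92, 95, 97]:
    detector.ingest("turbine-7", reading)
print(detector.alerts)  # → [('turbine-7', 91.7), ('turbine-7', 94.7)]
```

The key design point, in contrast to a batch-oriented big data pipeline, is that the decision is made inside the ingestion path itself: each event is evaluated the moment it arrives.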
One big difference between big data and IoT projects is time. While it is perfectly normal in a big data project for data to sit at rest before it is used in any kind of analysis, in an IoT project time is of the absolute essence.