I’ve had several meetings lately on data management, and especially integration, where the ability to explore alternatives has been critical. And the findings from our internet of things (IoT) early adopters survey confirm that the ecosystem nature of data sources in IoT deployments means we need to expand the traditional toolbox for data integration.
For many customers, facilitating better analytics with data integration means building ever larger data warehouses. But this takes time – and ‘time is money.’ If it takes too long to get the required insights, business users will just give up and go with their gut instinct.
One alternative to larger data warehouses is virtualising the data. This technique simplifies increasingly complex data architectures. It’s an approach that provides in-memory caching and a centralised area to monitor and create queries, without having to move or copy the underlying data.
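To make the idea concrete, here is a toy sketch of that pattern: a single query layer that federates two independent sources and caches results in memory, without moving or copying the underlying data. All the names here (VirtualLayer, the sample CRM and billing sources) are hypothetical illustrations, not any specific product’s API.

```python
class VirtualLayer:
    """A toy virtual data layer: answers queries by delegating to
    registered sources and caching results in memory."""

    def __init__(self):
        self._sources = {}  # source name -> callable returning rows
        self._cache = {}    # in-memory result cache

    def register(self, name, fetch):
        self._sources[name] = fetch

    def query(self, predicate):
        """Return matching rows from every source; cache the result."""
        key = predicate.__name__  # crude cache key, fine for a sketch
        if key not in self._cache:
            rows = []
            for fetch in self._sources.values():
                rows.extend(r for r in fetch() if predicate(r))
            self._cache[key] = rows
        return self._cache[key]


# Two sources that stay with their custodians: a CRM and a billing system.
crm = [{"customer": "Acme", "region": "EU"},
       {"customer": "Globex", "region": "US"}]
billing = [{"customer": "Acme", "region": "EU", "overdue": True}]

layer = VirtualLayer()
layer.register("crm", lambda: crm)
layer.register("billing", lambda: billing)

def eu_rows(row):
    return row.get("region") == "EU"

# One query draws on both sources; the originals are never copied out.
print(len(layer.query(eu_rows)))  # 2
```

The point of the sketch is the shape, not the code: analysts ask one layer one question, and the layer worries about where the data actually lives.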
There are several problems that can be overcome with this virtualisation approach:
Quality: The data is not being moved, so there’s no risk of errors that typically arise during such transfers. Analysts can draw on several different sources of data, which is likely to give them better information and improved insights – all while the original data remains with its custodians.
Currency: Data virtualisation allows real-time use even of frequently updated data, because you can read the ‘current’ state at any given point, even while the source is being updated. In other words, you get real-time access to data without damaging it or putting undue pressure on the source systems.
Legacy: Legacy data must be taken into account at every stage: why do we have it, what value are we getting from it now, and what value could we get in the future?
Security: It’s usually hard to get business users interested in data security, but virtualisation makes the case simpler: the data is never copied out of its source systems, so there are fewer copies to protect, and access runs through a single, centralised query layer that can be monitored.
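The ‘currency’ point above can be sketched too: a read-through cache with a short time-to-live (TTL) gives analysts a near-real-time view while limiting the load on the source. Repeated queries inside the TTL are served from memory; once it expires, the next query picks up the latest data. The names here (TTLCache, fetch_orders) are illustrative assumptions, not a real API.

```python
import time

class TTLCache:
    """Read-through cache: refreshes from the live source only when
    the cached value is older than the TTL."""

    def __init__(self, fetch, ttl_seconds):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._value = None
        self._stamp = float("-inf")  # forces a fetch on first use

    def get(self):
        now = time.monotonic()
        if now - self._stamp >= self._ttl:
            self._value = self._fetch()  # refresh from the live source
            self._stamp = now
        return self._value


calls = {"n": 0}

def fetch_orders():
    """Stand-in for a query against a live operational system."""
    calls["n"] += 1
    return [{"order": 1}, {"order": 2}]

cache = TTLCache(fetch_orders, ttl_seconds=60)
cache.get()
cache.get()        # served from memory: the source is hit only once
print(calls["n"])  # 1
```

A real virtualisation product balances the same trade-off, just with far more machinery: how fresh the answer must be versus how much load the source can absorb.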