Every day we make thousands of decisions, many of them subconsciously or on the basis of minimal information. And we often make them lightning fast, even when it comes to decisions at work. Clearly, we frequently rely on intuition and gut feeling.
Our intuition may be better than its reputation. But in professional environments, important decisions should be based on hard facts and careful analysis. Unfortunately, wishes and reality are often worlds apart: studies show that not even a third of the companies surveyed made their last major decisions on the basis of systematic data analysis.
Data-driven companies are well aware of the significance of their data and see it as the key foundation for decision-making. For such companies, systematic data analysis is the key to staying ahead of the competition. And that brings us to the topic of “Analytics for the masses”: in reality, decisions are usually made locally, i.e. many people across many different departments contribute to the decision-making process.
The good news is that more and more data is becoming available. With data volumes growing at an explosive rate (40% each year for the next decade), data-based decision-making should be in a strong position. However, studies show that even now, in the age of high-performance IT, the great majority of decisions are made without a sufficiently strong data foundation, sometimes with devastating consequences. In other words, there are still entrepreneurial opportunities to be had by making better, data-based decisions.
So what is holding us back from basing better decisions on as much of this available data as possible? Quite a lot! Here are five key areas you should pay attention to:
Companies are highly interconnected organizational structures, and the amount of data they process is increasing exponentially. Despite this, data silos are still found in many companies, so areas of the business that really belong together are still viewed separately. Isolated data pools emerge in different specialized departments, covering aspects that may seem to have little to do with each other. Relationships between data sets are often not established, and the variety of structures and inconsistent master data make it difficult to get a coherent overview of the right data on which to base decisions. Novel, often external, data sources (big data from social media, for example) make this challenge even greater.
Overcoming this challenge requires professional data integration tools that account for big data and master data management, such as those delivered by the Talend Platform. Similarly, an infrastructure for storing data has to be created that implements newer concepts and technologies from the world of big data while complementing the classic data warehouse.
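To make the integration idea concrete, here is a minimal sketch of consolidating customer records from two departmental silos around a shared master key. The field names (`email`, `name`, `phone`) and the two sources are purely hypothetical; a real integration platform would of course add matching rules, survivorship logic, and schema mapping on top of this:

```python
def normalize_key(record):
    """Derive a master data key; here, a normalized email (hypothetical choice)."""
    return record["email"].strip().lower()

def consolidate(*silos):
    """Merge records across silos; later silos fill gaps but never overwrite."""
    master = {}
    for silo in silos:
        for record in silo:
            merged = master.setdefault(normalize_key(record), {})
            for field, value in record.items():
                merged.setdefault(field, value)  # keep the first value seen per field
    return list(master.values())

# Two silos describing the same customer with different attributes:
crm = [{"email": "Anna@Example.com", "name": "Anna Schmidt"}]
billing = [{"email": "anna@example.com", "phone": "+49 30 1234567"}]
customers = consolidate(crm, billing)  # one unified record instead of two fragments
```

Even this toy version shows why a consistent key matters: without normalizing the email, the two silos would yield two separate "customers" instead of one coherent record.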
“Garbage in, garbage out” unfortunately also applies to data-based decisions, especially when decision-makers have no reliable information about the quality of the data available to them. Surveys confirm that data quality is still one of the greatest obstacles to meaningful analysis. And in the field of big data, analyses show that a great deal of effort still goes into cleansing data.
A multilevel approach is needed to deal with the enterprise-wide issue of data quality. On the one hand, it is essential to maintain a continuous overview of data quality. On the other hand, unreliable data must be removed or cleansed. In some cases this can happen automatically with a rules-based approach; in others, a department's expertise must be drawn on, with users getting involved in a manual data cleansing process. It is important, and clearly makes sense, to establish a comprehensive process to achieve sustainable improvements in data quality.
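As an illustration of the rules-based side of such an approach, the sketch below checks records against a small set of quality rules and flags violating rows for manual review by the relevant department. The rules and field names are hypothetical examples, not a prescription:

```python
import re

# Hypothetical quality rules: each is a (description, check) pair.
RULES = [
    ("email must match a basic pattern",
     lambda r: re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", r.get("email") or "") is not None),
    ("country code must be two letters",
     lambda r: isinstance(r.get("country"), str) and len(r["country"]) == 2),
]

def cleanse(records):
    """Split records into clean rows and rows flagged for manual review."""
    clean, flagged = [], []
    for record in records:
        violations = [name for name, check in RULES if not check(record)]
        if violations:
            flagged.append((record, violations))  # route to a data steward
        else:
            clean.append(record)
    return clean, flagged

records = [
    {"email": "anna@example.com", "country": "DE"},
    {"email": "not-an-email", "country": "Germany"},
]
clean, flagged = cleanse(records)
```

Counting the clean and flagged rows over time is one simple way to produce the continuous overview of data quality mentioned above, while the flagged rows feed the manual cleansing process.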