The volume of data within companies is growing day by day – and much of that growth stems not from genuinely new data but from the uncontrolled proliferation of data copies. This avalanche of data is a major challenge for businesses, which have to manage it efficiently and securely. So, where does this copy data come from?
Copy data consists of redundantly generated copies of production data created for purposes such as backup, disaster recovery, test and development, analytics, snapshots or migrations. According to IDC, companies may have up to 120 copies of certain production data in circulation.
In addition, IDC estimated in a 2013 study that companies spend up to $44 billion worldwide managing redundant copies of data. According to IDC, 85% of investments in storage hardware and 65% in storage software are attributable to data copies. Managing these copies now consumes more resources than the actual production data itself. IT departments are therefore faced with the question of how to contain the data growth caused by redundant copies through cost-effective data management. This applies both to companies that hold data on their own premises and to data centre operators.
The virtualisation of data copies has proven to be an effective measure for taking data management to the next level. Combined with global data de-duplication and optimised network utilisation, it enables very efficient data handling: because less bandwidth and storage are required, very short recovery times can be achieved.
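The core idea behind de-duplication mentioned above can be illustrated with a minimal sketch: data is split into fixed-size blocks, each block is identified by a content hash, and identical blocks are stored only once. The block size, function names and data here are illustrative assumptions, not any vendor's actual implementation.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size

def deduplicate(data: bytes) -> tuple[dict[str, bytes], list[str]]:
    """Split data into fixed-size blocks and store each unique block once.

    Returns the block store (hash -> block) and the ordered list of
    hashes (the "recipe") needed to reconstruct the original data.
    """
    store: dict[str, bytes] = {}
    recipe: list[str] = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # identical blocks stored only once
        recipe.append(digest)
    return store, recipe

def reconstruct(store: dict[str, bytes], recipe: list[str]) -> bytes:
    """Rebuild the original byte stream from the store and the recipe."""
    return b"".join(store[h] for h in recipe)

# Ten redundant copies of the same payload collapse to a single stored block.
payload = bytes(range(256)) * 16   # exactly 4096 bytes = one block
copies = payload * 10              # ten identical copies
store, recipe = deduplicate(copies)
assert len(recipe) == 10 and len(store) == 1
assert reconstruct(store, recipe) == copies
```

The same principle scales up: only unique blocks travel over the network or occupy storage, which is why de-duplication reduces both bandwidth and capacity requirements.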
An increasing level of virtualisation in the data centre is clearly noticeable. Data virtualisation is the logical next step after server, compute and network virtualisation. Virtualised infrastructures are easier to manage and more energy- and cost-efficient, because the model is demand-oriented compared with traditional environments. This matches today's reality, in which a growing number of challenges in the data centre must be managed with fewer resources.