There’s no shortage of talk today about newfangled tools, technologies and concepts. The Internet of Things, big data, cloud computing, Hadoop, and countless other new terms, apps and trends have inundated many business folks over the last few years.
Against this often confusing backdrop, it’s easy to forget the importance of basic blocking and tackling. Yes, I’m talking about good old-fashioned data quality, something that still vexes many departments, groups, organizations and industries. Without further ado, here are the five biggest data-quality mistakes that organizations routinely make.
1. Assuming that the IT department is responsible for data quality.
In the decade I spent as an enterprise system consultant, this was one of my biggest pet peeves – and a frequent subject of rants over beers with fellow consultants. Line-of-business employees would carelessly enter errant, duplicate or incomplete records with nary a thought for the implications of their actions. Yet, mysteriously, IT was supposed to find and cleanse this information. It never made sense to me, but I understood the “rationale.” This is an extension of the IT-business divide, a topic I addressed in a three-part series not too long ago.
Late one year, I helped an organization implement a new HR and payroll system. Its data was, to put it mildly, a mess. Given that (as well as a cauldron of other issues), its desired activation date of January 1 was beyond optimistic. It was downright laughable.