Data quality is a major issue for organisations. Poor-quality data can be expensive: research by MIT Sloan indicates that neglecting data quality can cost companies 15% to 25% of their revenue.
These losses stem not only from missed opportunities linked to poor decision-making and from reputational harm, but also from legal penalties (e.g. for non-compliance) and from the time spent finding, cleaning and correcting erroneous data.
In contrast, high-quality data allows businesses to improve their operational performance, boost customer satisfaction and stay competitive by reorienting business strategy swiftly when needed.
What quality criteria attach to data?
According to a report by PwC, Micropole and EBG, data quality refers to the ability of all of a data item's intrinsic characteristics (freshness, availability, functional and/or technical consistency, traceability, security, completeness) to meet an organisation's internal requirements (management, decision-making, etc.) and external requirements (regulations, etc.).
Data has no intrinsic quality. Quality can only be judged once the use to which data is to be put is known: What is the ultimate objective? How will it be processed? Is the information given any semantic meaning? In other words, quality is defined as a function of use, as expected by users.
This presupposes both high-level and detailed knowledge of business processes that span the entire organisation, and the standards in force to enable data interchange both internally and externally.
The GDPR sets well-defined limitations on the processing of personal data throughout its lifecycle. Data stored or used outside the framework set by the regulation cannot be viewed as 'quality data', even if it adds efficiency and value to the organisation.
Considering all these points, data quality can be judged against various yardsticks, including profile, accuracy, completeness, compliance, integrity, consistency, availability, applicability, intelligibility, integration, flexibility and comparability. The list of possible criteria is almost endless!
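Some of these yardsticks can be expressed as executable rules. The sketch below is purely illustrative, assuming a simple record with hypothetical field names: one rule checks validity of format (a plausible email address), the other checks freshness against an assumed maximum age.

```python
# Illustrative sketch: two quality criteria (format validity, freshness)
# expressed as rules. Field names and thresholds are assumptions.
import re
from datetime import date, timedelta

def is_valid_email(value):
    """Validity check: a loose, illustrative email-format rule."""
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value or ""))

def is_fresh(last_updated, max_age_days=365):
    """Freshness check: data updated within an assumed maximum age."""
    return (date.today() - last_updated) <= timedelta(days=max_age_days)

record = {"email": "jane.doe@example.com",
          "last_updated": date.today() - timedelta(days=30)}

print(is_valid_email(record["email"]))   # True
print(is_fresh(record["last_updated"]))  # True
```

In practice such rules would be derived from the business processes and standards mentioned above, since quality is defined as a function of use.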
Reasons for implementing data quality management
A data quality process is not restricted to loading the right data into information systems. It also means eliminating erroneous, corrupted or duplicate data.
While errors can have a technical cause, they usually stem from human or organisational shortcomings at different stages of the data lifecycle and in different parts of the information system:
- When collecting data, through intentional or unintentional data entry errors;
- When sharing data, by creating more than one version of a data item;
- When exporting data, through poorly-defined rules or compatibility problems;
- When maintaining data, through poor encoding.
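Two of the defects listed above, duplicates and incomplete records, can be detected with very simple checks. The following is a minimal sketch assuming a list-of-dicts dataset; record fields and values are hypothetical.

```python
# Minimal sketch of two cleansing checks: duplicate detection and a
# completeness test. Dataset and field names are illustrative only.

def find_duplicates(records, key):
    """Return the values of `key` that appear in more than one record."""
    seen, dupes = set(), set()
    for rec in records:
        value = rec.get(key)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return dupes

def find_incomplete(records, required_fields):
    """Return records where any required field is missing or empty."""
    return [r for r in records
            if any(not r.get(f) for f in required_fields)]

customers = [
    {"id": 1, "email": "a@example.com", "country": "FR"},
    {"id": 2, "email": "b@example.com", "country": ""},    # incomplete
    {"id": 3, "email": "a@example.com", "country": "DE"},  # duplicate email
]

print(find_duplicates(customers, "email"))                        # {'a@example.com'}
print(len(find_incomplete(customers, ["id", "email", "country"])))  # 1
```

Real deduplication is usually fuzzier (matching near-identical names or addresses), but even exact-match checks like these catch a useful share of collection and sharing errors.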
Data quality management refers to the ability to provide reliable data that meets users' functional, business and technical requirements; in other words, to transform high-quality data into useful information.