
Running a data quality process


Data quality is a major issue for organisations. Poor-quality data can be expensive: research by MIT Sloan indicates that neglecting data quality can cost companies 15 to 25% of their revenue.

These losses stem not only from missed opportunities linked to poor decision-making and from reputational harm, but also from legal penalties (e.g. for non-compliance) and from the time spent finding, cleaning and correcting bad data.

In contrast, high quality data allows businesses to improve their operational performance, boost customer satisfaction and be more competitive in swiftly reorienting business strategy if need be.

What quality criteria apply to data?

According to a joint report by PwC, Micropole and EBG, data quality refers to the ability of all of a data item’s intrinsic characteristics (freshness, availability, functional and/or technical consistency, traceability, security, completeness) to meet an organisation’s internal requirements (management, decision-making, etc.) and external requirements (regulations, etc.).

Data has no intrinsic quality. Quality can only be judged once the intended use of the data is known: What is the ultimate objective? How will the data be processed? What semantic meaning is attached to the information? In other words, quality is defined as a function of use, as expected by users.

This presupposes both a high-level and a detailed knowledge of the business processes that span the entire organisation, and of the standards in force that enable data interchange both internally and externally.

The GDPR sets well-defined limitations on the processing of personal data throughout its lifecycle. Data stored or used outside the framework set by the regulation cannot be viewed as ‘quality data’, even if it adds efficiency and value to the organisation.

Considering all these points, data quality can be judged against many yardsticks, including profile, accuracy, completeness, compliance, integrity, consistency, availability, applicability, intelligibility, integration, flexibility and comparability. The list of criteria is almost endless!
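Several of these yardsticks can be measured directly. As a minimal sketch (the sample records, field names and metric definitions below are illustrative assumptions, not part of any product), completeness, uniqueness and freshness might be computed like this:

```python
from datetime import date

# Hypothetical sample of customer records; field names are illustrative.
records = [
    {"id": 1, "email": "ana@example.com", "updated": date(2024, 5, 2)},
    {"id": 2, "email": None,              "updated": date(2021, 1, 15)},
    {"id": 3, "email": "ana@example.com", "updated": date(2024, 5, 2)},
]

def completeness(rows, field):
    """Share of rows where the field is present and non-empty."""
    return sum(1 for r in rows if r.get(field)) / len(rows)

def uniqueness(rows, field):
    """Share of distinct values among the non-null values of the field."""
    values = [r[field] for r in rows if r.get(field)]
    return len(set(values)) / len(values)

def freshness(rows, field, cutoff):
    """Share of rows updated on or after the cutoff date."""
    return sum(1 for r in rows if r[field] >= cutoff) / len(rows)

print(round(completeness(records, "email"), 2))               # 0.67
print(round(uniqueness(records, "email"), 2))                 # 0.5
print(round(freshness(records, "updated", date(2024, 1, 1)), 2))  # 0.67
```

Tracking a handful of such ratios over time is often enough to spot a quality regression before it reaches downstream users.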

Master Data Management: data quality and traceability at the heart of your information system

Reasons for implementing data quality management

A data quality process is not restricted to loading the right data into information systems. It also means eliminating erroneous, corrupted or duplicate data.

While errors can have a technical cause, they usually stem from human or organisational shortcomings at different stages of the data lifecycle and in different places in the information system:

  • When collecting data, through intentional or unintentional data entry errors;
  • When sharing data, by creating more than one version of a data item;
  • When exporting data, through poorly defined rules or compatibility problems;
  • When maintaining data, through poor encoding.
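The first two error sources above, entry errors and duplicate versions of a data item, are the easiest to catch at the point of collection. A minimal sketch, assuming illustrative field names and a deliberately simple email pattern (not any product’s API):

```python
import re

# Hypothetical validation rule: a very loose email shape check.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate(record):
    """Return the list of rule violations for one incoming record."""
    errors = []
    if not record.get("id"):
        errors.append("missing id")
    if not EMAIL_RE.match(record.get("email") or ""):
        errors.append("malformed email")
    return errors

def deduplicate(records, key="email"):
    """Keep the first record per normalised key, dropping later duplicates."""
    seen, kept = set(), []
    for r in records:
        k = (r.get(key) or "").strip().lower()
        if k and k in seen:
            continue  # a second version of an existing data item
        seen.add(k)
        kept.append(r)
    return kept

print(validate({"email": "not-an-email"}))
# prints ['missing id', 'malformed email']
```

Running such checks before data enters the information system is far cheaper than correcting the same errors after they have propagated.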

Data quality management refers to the ability to provide reliable data that meets users’ functional, business and technical requirements; in other words, to transform high-quality data into useful information.


Author
Edouard Cante
Executive Vice President Product
A technical and functional expert, Edouard has specialised in IS urbanisation and data governance for nearly 20 years. A man of the field, he and his teams support customers in their projects, and don’t hesitate to use this feedback to shape the product roadmap and gain agility.
In the category: Master Data Management