Data quality metrics are critical to proving the value of your data governance strategy to business users. However, selecting the right metrics can be challenging. In this article, Jason McClain of Deloitte & Touche, LLP, illustrates a four-point strategy for choosing the right metrics for your company.
Neglecting data governance can lead to several issues, including poor user adoption. If records are disorganized or unexplained, users might reject a new system. Data needs to be easily understood and located in order for users to embrace a new system.
Of course, having an effective data governance strategy only takes you so far. You also have to prove to your business users that the strategy is necessary to ensure the ongoing quality of data. Data quality can degrade over time due to a lack of resources or other behind-the-scenes issues, so it is important to track potential problems using metrics.
“Data governance is something that sounds logical and sensible, yet many organizations find it difficult to value or judge the return on investment,” says Jason McClain, senior manager, Deloitte & Touche LLP. “So what I’ve personally found successful is using metrics to really explain that your data is being controlled and maintained in a good fashion, that it’s not degrading. That right there can demonstrate the value of a good governance process.”
Agreeing on appropriate metrics can be challenging, according to McClain. Business users may want simplicity whereas the data governance function may require more complex metrics. However, over the course of several implementations, McClain has identified four key elements of a good data quality metric strategy (Figure 1):
- Know your audience
- Know what to measure
- Know when and how to report
- Know your goals
By adhering to these four elements, McClain says organizations can create data quality metrics to satisfy their needs and identify areas in need of improvement.

Figure 1
The four key elements of good data quality metrics
Know Your Audience
Tailoring your data metrics to the needs of your specific audience is essential to the success of your program, says McClain. Too many metrics may cause confusion, while metrics that aren’t focused on a user’s job segment provide little value to them. The key, McClain stresses, is to determine whether you need to meet the interests of all parties involved or just a subset.
To provide the most value, metrics need to be limited to the most essential knowledge for the business segment receiving them. Business users, for example, may require reports that detail monetary impact or other broadly targeted items, while IT users may benefit from more specific, technical data in order to find inconsistencies or areas needing further work.
It is possible to create a single metrics report with a business view, an IT view, and even an SAP implementation view, says McClain. Such reports can be of high value to multiple parties.
However, McClain warns against offering too much in metrics reports. He recommends polling stakeholders to determine how often they want reports. Business users, for instance, may not need reports as frequently as IT users do.
“You can easily inundate your users with too much information,” he says.
Know What to Measure
Although customized reports are a good way to please many groups, McClain notes the impracticality of trying to measure everything. Time and money are wasted calculating and monitoring metrics that don’t matter much. McClain recently worked on an implementation in which the company had previously used a particular data quality metric. The implementation, however, made the metric obsolete because of changes to how the data was captured. As such, the rationale behind the data quality metrics must be reviewed periodically in order to keep the metrics relevant and timely, he says.
Similarly, McClain says you should focus your data quality metrics on areas of importance to the organization. It may not be necessary to measure the quality of all of the attributes. Instead, a metrics report should be limited to the high-risk, high-value attributes that the audience will really care about. This is especially important when the number of attributes is so high that the report would become cluttered and confusing to your audience.
Data quality metric characteristics can be classified as either rule-based or judgment-based. Rule-based metrics are simpler and more frequent because the characteristics are measured in a pass/fail style. For example, if a rule states that an item quantity must be greater than zero and a negative number comes back, that entry is easily identified as an exception. Judgment-based characteristics are qualitative, requiring attention to determine exceptions.
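The rule-based style described above can be sketched in a few lines of code. This is a minimal illustration, not McClain’s method; the record fields (`id`, `item_quantity`) and rule name are hypothetical, chosen to match the article’s example of an item quantity that must be greater than zero.

```python
def find_rule_exceptions(records, rule, rule_name):
    """Apply a pass/fail rule to each record; return the records that fail."""
    return [
        {"record": r, "failed_rule": rule_name}
        for r in records
        if not rule(r)
    ]

# Hypothetical sample data for illustration.
records = [
    {"id": "MAT-001", "item_quantity": 12},
    {"id": "MAT-002", "item_quantity": -3},  # fails the rule below
    {"id": "MAT-003", "item_quantity": 0},   # also fails
]

# The article's example rule: item quantity must be greater than zero.
exceptions = find_rule_exceptions(
    records,
    rule=lambda r: r["item_quantity"] > 0,
    rule_name="item_quantity_gt_zero",
)

for e in exceptions:
    print(e["record"]["id"], "failed", e["failed_rule"])
```

Because each rule is a simple pass/fail predicate, checks like this can run automatically and frequently, which is exactly what makes rule-based metrics cheaper to report on than judgment-based ones.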
See the sidebar titled “Types of Data Quality Metrics” for a closer look at several styles of metrics and their uses.
Types of Data Quality Metrics
There are several different types of data quality metrics you can use to report on your data, according to Jason McClain of Deloitte & Touche LLP:
- Rule-level exceptions — These metrics focus on individual attributes within a data set that aren’t meeting a rule. This is a very detailed way to pinpoint specific problem attributes.
- Record-level exceptions — These metrics are helpful when looking for a high-level view of problem areas, because one record may have multiple rule-level exceptions (Table 1).

Table 1
Tracking record-level exceptions can be more helpful than tracking rule-level exceptions
- Dimension-based metrics — In this type of report, metrics are divided up so that an individual quality, such as accuracy, completeness, or timeliness, can be specifically measured. “This level of detail might be too much for a CFO,” McClain says, “but it might be good for an interested business user to be able to say, ‘Well, you know we’re really bad at completeness. If we don’t have some information when we create a record, we just move on.’”
- Monetary metrics — This type of metric measures the monetary value of exceptions. It is well suited to business user reports because those users may be more concerned with the monetary impact to the organization and less with the root cause of exceptions, says McClain.
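The difference between the first two metric types in the sidebar — rule-level versus record-level exceptions — can be made concrete with a short sketch. The rules and record fields here are hypothetical, invented only to show why one record can generate several rule-level exceptions but counts just once at the record level.

```python
# Two hypothetical pass/fail rules applied to the same records.
rules = {
    "quantity_gt_zero": lambda r: r["quantity"] > 0,
    "has_description":  lambda r: bool(r["description"]),
}

records = [
    {"id": "A", "quantity": 5,  "description": "valve"},
    {"id": "B", "quantity": -1, "description": ""},      # fails both rules
    {"id": "C", "quantity": 0,  "description": "pump"},  # fails one rule
]

# Rule-level: one exception per (record, rule) pair that fails.
rule_level = sum(
    1 for r in records for rule in rules.values() if not rule(r)
)

# Record-level: one exception per record with at least one failing rule.
record_level = sum(
    1 for r in records if any(not rule(r) for rule in rules.values())
)

print(rule_level, record_level)  # 3 rule-level exceptions, 2 failing records
```

The record-level count (2) gives the high-level view of how many records need attention, while the rule-level count (3) tells an IT user exactly which attributes to fix.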
Know When and How to Report
Knowing how frequently to report on data quality is key to ensuring users get the most benefit from the metrics. McClain suggests working with your audience to determine a schedule based on business needs and technological constraints.
A multitude of reporting solutions are available today (Figure 2). For example, online dashboards can be posted on a company-wide intranet. Offline reporting, such as Microsoft Excel documents, can be a good option for simple reports — or as an interim measure while you settle on a permanent solution. Excel reports require little cost or user training, yet provide easy-to-view reports.
Whichever option you choose, be wary of trying to “build a Cadillac,” as McClain puts it. “Start simple. You can add functionality over time,” he says.

Figure 2
A matrix of technology options for collecting and distributing data quality metrics
Know Your Goals
One of the most often overlooked elements of a data quality metric strategy is the establishment of target metrics. Only by creating these goals can you adequately assess how you are doing.
“When you’re looking at metrics, you really need to determine what’s good, what’s bad, and, most important, what is an acceptable goal versus a nice-to-have goal,” McClain says.
Though goals can be the most difficult part of the strategy to create, their importance should be kept in mind, he says. Without predetermined goals, five different people viewing the same report might come to five different conclusions about the results.
The goal-setting process is threefold. First, you must ensure your data metrics are tailored specifically for your business rather than using what may work for another organization.
“You really need to understand what’s acceptable and what’s attainable for you and use that to define whether you met your metric or not,” McClain says.
Second, you must set target values so when a report comes back, everyone involved views it with the same success rate in mind. Discuss the inherent error rate with stakeholders first and together create the agreed-upon target values.
Finally, always keep in mind the point at which reducing the number of exceptions outweighs the benefit. It might be nice to have fewer than 5% of records contain exceptions, says McClain, but if it’s significantly more cost-effective to leave it at 10%, that might be a good compromise.
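The trade-off McClain describes — leaving the exception rate at 10% rather than pushing to 5% — comes down to simple arithmetic. The sketch below is illustrative only; the record count, remediation cost, and downstream cost per exception are assumed figures, not numbers from the article.

```python
def net_benefit(current_rate, target_rate, records, cost_per_fix, cost_per_exception):
    """Benefit of reducing exceptions minus the cost of fixing them."""
    fixes_needed = int((current_rate - target_rate) * records)
    benefit = fixes_needed * cost_per_exception  # avoided downstream cost
    cost = fixes_needed * cost_per_fix           # remediation effort
    return benefit - cost

# Assumed figures: 100,000 records, $2 of downstream cost per exception,
# $8 of effort to remediate each one.
result = net_benefit(0.10, 0.05, 100_000, cost_per_fix=8, cost_per_exception=2)
print(result)  # negative: tightening from 10% to 5% costs more than it saves
```

With these assumed numbers, the result is negative, meaning the extra cleanup would cost more than the exceptions it eliminates — exactly the “good compromise” scenario McClain describes.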
Making the Metrics Actionable
Creating metrics, identifying exceptions, and determining issues are only the first steps in the data governance process.
“A lot of organizations are comfortable with an implementation mindset in which you identify a data quality issue and then fix it in a legacy system,” McClain explains. “You’re not concerned with the cause of it, because once SAP is in place, many legacy system and legacy process issues often go away.”
However, McClain says an important aspect of a data quality metrics strategy lies in root cause analysis, because analyzing and addressing the cause of issues can strengthen your entire slate of business processes (Figure 3).

Figure 3
Automate metrics to spend more time working on corrective action
“Get your process automated so you’re not spending time creating your metrics,” he says. “Instead, spend as much of your time as possible on the value-add activities such as figuring out how a problem happened, and more importantly, how you are going to fix it in the future.”
Achieving that balance of less time spent creating metrics and more time spent in corrective actions is key to a lasting and useful data governance strategy.
Laura Casasanto
Laura Casasanto is a technical editor who served as the managing editor of SCM Expert and Project Expert.
You may contact the author at lauracasasanto@gmail.com.
If you have comments about this article or publication, or would like to submit an article idea, please contact the editor.