An Argument for Data Quality
Is the quality of your data costing you millions of dollars in lost revenue or unnecessary exposure?
As organizations seek to take advantage of and monetize their valuable data assets, the need to avoid the common pitfalls of decisions based on poor-quality data becomes more apparent. Too often, an organization will trust that data governance and controls are in place to ensure high-quality data without thoroughly vetting the data for use in a decision-making process.
Consider the example presented by Guy Cuthbert. A manufacturing company invests millions of dollars in R&D and countless months in product rollout and marketing to launch a new product. The product appears as if it will be a financial windfall for the company until a data quality problem throws a wrench in the works. The system that controls the assignment of Global Trade Item Numbers (GTINs) is not clean, and the new product has been given the same GTIN as a formerly recalled product. The company ultimately decides to pull the affected products from the shelves rather than reprint the packaging, incurring millions of dollars in additional costs.
Another example comes from the financial services industry, where a brokerage firm tracks intraday client exposure at the legal entity level using an in-house client master. The client master maps accounts to legal entities in order to process trades coming in from the exchange. Unfortunately for the brokerage, the mapping is missing several accounts, causing exposure for the firm’s largest client to be under-reported and unaccounted for in the risk process. The inability to control and monitor a client’s intraday exposure not only leaves the brokerage open to financial loss and regulatory fines; it can also have devastating consequences for clients, such as door-closing losses.
To prevent data quality problems from occurring in the first place, establish data quality controls as the foundation for business processes. Data quality controls lead to less “data clean-up” down the line, enabling valuable resources to focus on value-add initiatives.
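The brokerage example above can be sketched as a simple completeness control: before trades feed the exposure calculation, verify that every account has a legal-entity mapping in the client master, and fail loudly rather than silently under-report. This is a minimal illustration, not the firm's actual system; all names and data structures here are hypothetical.

```python
def find_unmapped_accounts(trades, client_master):
    """Return account IDs on incoming trades that have no legal-entity
    mapping in the client master -- these would silently drop out of an
    exposure roll-up keyed by legal entity."""
    return sorted({t["account"] for t in trades} - set(client_master))

def exposure_by_legal_entity(trades, client_master):
    """Aggregate notional exposure per legal entity, failing fast if any
    account is unmapped instead of quietly under-reporting exposure."""
    unmapped = find_unmapped_accounts(trades, client_master)
    if unmapped:
        raise ValueError(f"client master is missing accounts: {unmapped}")
    exposure = {}
    for t in trades:
        entity = client_master[t["account"]]
        exposure[entity] = exposure.get(entity, 0) + t["notional"]
    return exposure

# Hypothetical data: two mapped accounts, one trade on an unmapped account.
client_master = {"ACC1": "Entity A", "ACC2": "Entity A"}
trades = [
    {"account": "ACC1", "notional": 100},
    {"account": "ACC2", "notional": 50},
]
print(exposure_by_legal_entity(trades, client_master))  # {'Entity A': 150}
```

The design choice is the point: a control that raises on missing mappings turns a silent risk gap into a visible data quality incident that can be fixed before it reaches the risk process.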
Contributed by Phil Liu.