
    How Understanding Data Volatility and Metadata Management Can Lead to Greater Organizational Insight

    Data is a commodity — one that can be an asset, a liability or simply worth nothing at all. One of the best ways to leverage data as a commodity is to learn exactly how it relates to the overall health — financial and operational — of your organization.

    There are two major roads you can take to gain this insight. One is simpler and easier to implement, and the insights it delivers are qualitative in nature. The other is a more involved process and a longer term commitment — but the quantitative insights it delivers are also far more significant.

    Both approaches focus on gaining insights into what data you have, where it resides, who uses it, and in what ways. But the quantitative approach goes much further, using metadata management to uncover the correlation between specific data elements and the key performance indicators (KPIs) that your organization uses to measure success and make critical decisions. In fact, the processes I describe below give an organization unprecedented insight into the "supply and demand" of data within the organization, a perspective that has powerful ramifications across the enterprise.

    Essential processes and capabilities in the quantitative approach

    In this post, I want to provide the big picture of the processes that are involved in the quantitative approach.

    The first phase involves understanding your data and how your data community creates, uses, modifies and otherwise interacts with it. The initial focus is on gathering and analyzing your metadata — that is, the data about your data. The core processes for achieving this perspective are data governance operations, which provide insights into the conceptual aspects of your metadata, and metadata management, which focuses on all the other aspects of metadata.

    A second broad phase is a data quality process focused on discovering, understanding, capturing and quantifying how each data element functions within your organization’s data supply chain. This process also quantifies the current health and quality of each data element, as seen by the most involved stakeholders in your data community. After this process has been completed, a data intelligence process quantifies the extent to which an organization understands each given data element, as well as the data element’s overall health (i.e., its combined data quality KPIs) and associated risk. It becomes data intelligence when one turns all of those quantified numbers into dynamic, explorable visual analysis.
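To make the quantification step concrete, here is a minimal sketch of combining per-element quality KPIs into a single health score. The specific KPI names (completeness, accuracy, stakeholder understanding), the 0-to-1 scales, and the weights are illustrative assumptions of mine, not the methodology described in this post:

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    completeness: float               # fraction of non-null values, 0-1 (assumed KPI)
    accuracy: float                   # fraction passing validation rules, 0-1 (assumed KPI)
    stakeholder_understanding: float  # survey-based awareness score, 0-1 (assumed KPI)

def health_score(e: DataElement, weights=(0.4, 0.4, 0.2)) -> float:
    """Combine an element's quality KPIs into one weighted health score."""
    w_c, w_a, w_u = weights
    return w_c * e.completeness + w_a * e.accuracy + w_u * e.stakeholder_understanding

elem = DataElement("customer_email",
                   completeness=0.95, accuracy=0.90, stakeholder_understanding=0.60)
print(round(health_score(elem), 2))  # a single number that can feed a visual dashboard
```

In practice each score would be computed from profiling runs and stakeholder surveys rather than hard-coded, and the per-element numbers would feed the explorable visual analysis the post describes.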

    A third phase is to create a data volatility index — a measure of how specific data elements, or cohorts of data elements, change over time. The change is measured at multiple levels, beginning with the value in a single cell in a spreadsheet, then measuring the combined values associated with a given data element, and then measuring the values in larger and larger cohorts of data elements. This process presents some unique mathematical and engineering challenges, but my organization has developed the statistical methods needed to visualize each data element as a distinct three-dimensional shape, and track how it changes over time. By conducting advanced mathematical analysis on these trends, we can correlate changes in specific data elements with changes in the organization’s overall business KPIs, enabling us to forecast impacts to the organization’s Profit & Loss (P&L).
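The multi-level aggregation described above (cell, then data element, then cohort) can be sketched simply. This is my own minimal illustration using standard deviation over a value's history as the volatility measure; the post's actual statistical methods for three-dimensional visualization are not public, so treat this only as a shape of the idea:

```python
from statistics import mean, pstdev

def cell_volatility(history: list[float]) -> float:
    """Volatility of a single cell: spread of its value across snapshots over time."""
    return pstdev(history)

def element_volatility(cells: list[list[float]]) -> float:
    """Volatility of a data element: average volatility across its cells."""
    return mean(cell_volatility(h) for h in cells)

def cohort_volatility(elements: list[list[list[float]]]) -> float:
    """Volatility index for a cohort of data elements."""
    return mean(element_volatility(c) for c in elements)

# Illustrative data: two cells of one element, sampled at three points in time.
price_history = [[100.0, 102.0, 101.0], [50.0, 50.0, 50.0]]
print(element_volatility(price_history))
```

A time series of these index values, at each aggregation level, is what one would then correlate against business KPIs.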

    Being able to make such forecasts also allows us to model various scenarios to answer questions such as: “Would the P&L change if this data element had a different level of quality or supply volatility?” or “What if we increased our organizational understanding and awareness of the data element?” These answers in turn enable us to build future value models that can be discounted to present value, and they are the foundational components of an empirical data valuation.
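The discounting step at the end is standard present-value arithmetic. Here is a minimal sketch; the cash-flow figures and discount rate are invented for illustration, and the real inputs would come from the forecast models described above:

```python
def present_value(future_cash_flows: list[float], discount_rate: float) -> float:
    """Discount a series of projected annual gains/losses back to today's value."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(future_cash_flows, start=1))

# Scenario comparison (illustrative numbers): projected annual gains attributable
# to a data element, baseline vs. an improved-data-quality scenario, at a 10% rate.
baseline = present_value([100_000, 100_000, 100_000], 0.10)
improved = present_value([120_000, 125_000, 130_000], 0.10)
print(round(improved - baseline, 2))  # incremental present value of the improvement
```

The difference between the two scenarios is one way to express, in dollars, what a specific data improvement is worth today.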

    It’s this insight that is the Holy Grail of data valuation. It provides a GAAP-level accounting discipline — that is, the same accounting standards that most organizations use to measure and report on their traditional balance sheet. It is essentially a data valuation forecast model that quantifies the value specific data elements contribute to the organization by projecting unrealized gains and losses. As a result, business leaders can understand exactly where they can realize the greatest return on investment by improving data. They can also make far more accurate and strategic business forecasts of future unrealized gains and losses, using the organization’s own data as input, based on projected changes to the underlying independent variables.

    The bottom line

    There is much to be gained from using qualitative analysis to answer the fundamental questions about your data, and from using quantitative analysis and effective metadata management to gain insight into how that data changes and how those changes affect your overall business KPIs. Most importantly, these approaches allow your business to derive far greater value from all its data. This means getting better value not only out of existing data, but also out of data being evaluated in an acquisition, acquired from a third party, or created as an outcome of an ongoing technology initiative. It’s essentially a way to arrange all of your organization’s disparate data into discrete, well-defined buckets, with a dollar figure indicating the worth associated with each.

    These insights can also point the way to new sources of financial impact, including a better understanding of customers and their needs, and the ability to develop new products that better meet those needs. An even greater potential impact of the insight is that it can help you create a new layer of data, derived from your existing data, that you can sell to other organizations to provide them with insights to help their operations.

