
    Dark, Light, Good, Bad: A More Accurate View of Data as an Asset

    Several years ago, Gartner introduced the concept of dark data, an idea I continue to find valuable. Gartner's website defines dark data this way:

    “… the information assets organizations collect, process and store during regular business activities, but generally fail to use for other purposes (for example, analytics, business relationships and direct monetizing).”

    The similarity to the concept of dark matter in astrophysics is no accident. In fact, Gartner’s description also makes the point that like dark matter in space, dark data often accounts for large amounts of an organization’s overall data holdings — even though it is invisible, and essentially ignored. Once you accept this view of data, it becomes harder and harder to hear business leaders make blanket statements about how valuable their data is — or talk in broad terms about their data as an asset.

    So rather than thinking of data as an asset, it's more accurate to see it as a commodity: it can have value, be a liability, or be worth nothing at all.

    Wake up and smell the data?

    To illustrate my point, let me make an analogy between data and another commodity: coffee beans. Imagine your organization is in the specialty coffee business. Over time, you’ve acquired large reserves of coffee beans with the aim of turning them into profitable products. However, you’ve bought the beans from five different suppliers, each of which has facilities all over the world.

    To make matters even more complicated, you’ve acquired the beans over a period of several months — and some of the beans are where you’d expect to find them, while others are basically lost in forgotten bins. Most importantly, some of the beans will create a great tasting product, some will add little if any flavor, and some could actually be so bad they could lead to lost customers — or even lawsuits.

    If we just replace the words “coffee beans” with “data,” the same could be said of the way your organization acquires and stores legacy data. You may have acquired data over the course of 30 years from a wide variety of sources and processes. But your business has changed a great deal over those years, and so have the sources of your data commodities.

    In such a scenario, it’s more than likely that your data community doesn’t have the “tribal knowledge” to pinpoint precisely how you acquired any particular set of data, or even where it’s all stored. But you still need to appreciate the fact that the overall data has potential value — it contains data that is good, bad, and indifferent.

    The two major approaches to data valuation

    In general, there are two approaches to gaining this insight into data value.

    The first is qualitative in nature; although the insights it provides are somewhat narrow, they are relatively easy and inexpensive to acquire. In essence, it's a process for visualizing how data relates to your organization's underlying business processes. It examines the many data systems, inputs, products, customers, and outputs involved, and how all of those details impact your business performance.

    The quantitative approach is more involved and requires a longer-term commitment. It begins by gathering and analyzing your organization's metadata through data governance operations and metadata acquisition and management. Next, a data quality process quantifies the quality of each data element and how it functions within your organization's data supply chain. A data intelligence process then quantifies how well the organization understands each data element and examines its health and potential risk. The result is a data supply and demand volatility index: a tool that allows leadership to see, at a granular level, which data assets are most useful and have the greatest potential impact on the bottom line.
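    To make the idea concrete, here is a minimal sketch of how such a per-element index might be computed. The field names, weights, and the multiplicative formula are all illustrative assumptions, not the actual methodology; the point is only that quality, understanding, and risk can be combined into a comparable score per data element.

    ```python
    from dataclasses import dataclass

    @dataclass
    class DataElement:
        """One element in the data supply chain (illustrative fields only)."""
        name: str
        quality: float        # data quality score, 0.0-1.0
        understanding: float  # how well the org understands this element, 0.0-1.0
        risk: float           # breach/compliance risk, 0.0-1.0 (higher = riskier)

    def value_index(e: DataElement) -> float:
        """Toy composite index: reward quality and understanding, discount risk."""
        return round(e.quality * e.understanding * (1.0 - e.risk), 3)

    # Hypothetical elements: a risky-but-useful field, a well-governed field,
    # and a forgotten blob of legacy data.
    elements = [
        DataElement("customer_email", quality=0.9, understanding=0.8, risk=0.6),
        DataElement("order_total", quality=0.95, understanding=0.9, risk=0.1),
        DataElement("legacy_log_blob", quality=0.2, understanding=0.1, risk=0.4),
    ]

    ranked = sorted(elements, key=value_index, reverse=True)
    for e in ranked:
        print(e.name, value_index(e))
    ```

    Even this toy ranking surfaces the pattern the article describes: well-understood, low-risk data scores high, while poorly understood legacy data scores near zero despite still consuming storage and management resources.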

    Among the key insights the quantitative approach provides is a new, detailed understanding of the “supply and demand” dynamic of an organization’s data. This approach correlates specific data elements with key performance indicators (KPIs) used by the organization to track success and make critical decisions. The correlation between data and performance is the insight that reveals how specific data helps or hurts your business. (Note: For a more detailed explanation of these approaches, check out our free guide, Using Data Valuation to Unlock Business Insights.)
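    The correlation step above can be sketched in a few lines. The series below (completeness of a hypothetical "shipping_address" field versus an on-time-delivery KPI, month by month) are made-up numbers for illustration; a real analysis would draw both from governance tooling and business reporting.

    ```python
    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical monthly series for the same six months.
    completeness = [0.72, 0.75, 0.81, 0.86, 0.90, 0.93]  # shipping_address field
    on_time_rate = [0.88, 0.89, 0.91, 0.93, 0.94, 0.96]  # delivery KPI

    r = pearson(completeness, on_time_rate)
    print(f"correlation: {r:.2f}")
    ```

    A strong positive correlation like this one suggests (though does not prove) that improving that data element moves the KPI, which is exactly the data-to-performance link the quantitative approach is after.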

    What’s in it for your organization

    The ROI you can realize from valuing your organization's data as an asset — both the data your people use frequently as well as your vast amounts of unused dark data — can be enormous. For example, doing so can allow you to leverage machine learning and automation to optimize your operational model. Alternatively, it can help you identify data that's not really worth anything — yet still requires significant resources to manage. Equally important, it helps you put controls on data that is at high risk of breach, preventing legal liability, potentially disastrous publicity, and associated costs.

