Data has become a hot topic in many industries, and it is often referred to as an organizational asset. We agree that data is most definitely an asset when it is understood and the value of that asset is known. Much as an organization's real assets have the "Office of the CFO" to manage their balance, utilization and performance, data will also have an office to manage its balance, utilization and performance: the Office of Data. In 2014 we will see the introduction and growth of what Data Clairvoyance refers to as the next-generation CDO (Chief Data Officer). The CDO role will become less IT-centric than its predecessors, and will be laser-focused on answering organizational problems that have yet to be answered, whether the subject is growth, cost, new financial strategies or customer experience. This article will lay out the structural components of the Office of Data, as well as a high-level perspective on how this office should function in the context of the greater organization.
Goals of the next-generation CDO
While this may sound oversimplified, there are actually a lot of challenges behind making it a reality. The simple version of the CDO's goal is to manage the data ecosystem well. The more elaborate description is to create an intangible asset class within the firm, derived from the organizational data, the ecosystem surrounding that data, and the insights gleaned from that data. An intangible asset class implies that the present and future values of the class should be understood using an accepted and repeatable valuation method. While we could spend an entire article discussing valuation techniques for data as an asset, we will reserve that for future publications. To support this article, we will point out that a good valuation method should have concrete, quantitative outcomes that enable the organization to isolate strengths and weaknesses of individual business data elements with respect to the many components required to achieve value (e.g. data quality, integrity, correlation to financial or risk indicators, etc.). For this article we will focus primarily on the journey that data takes, and the group of individuals in the firm who will be responsible for that journey.
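To make the idea of a quantitative, repeatable valuation concrete, here is a minimal illustrative sketch. The dimensions, weights and element names below are hypothetical examples chosen for illustration, not an endorsed or GAAP-approved valuation method; the point is only that scoring elements per dimension lets you isolate each element's weakest component.

```python
# Illustrative data-element scorecard. Dimensions, weights and element
# names are hypothetical examples, not a standard valuation method.

# Each business data element is scored 0.0-1.0 on several value dimensions.
ELEMENTS = {
    "customer_email": {"quality": 0.92, "integrity": 0.88, "financial_correlation": 0.40},
    "order_amount":   {"quality": 0.75, "integrity": 0.95, "financial_correlation": 0.90},
    "ship_address":   {"quality": 0.60, "integrity": 0.70, "financial_correlation": 0.55},
}

# Relative importance of each dimension (weights sum to 1.0).
WEIGHTS = {"quality": 0.4, "integrity": 0.3, "financial_correlation": 0.3}

def element_score(scores: dict) -> float:
    """Weighted composite score for a single data element."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def weakest_dimension(scores: dict) -> str:
    """Isolate the dimension dragging an element's value down."""
    return min(WEIGHTS, key=lambda dim: scores[dim])

for name, scores in ELEMENTS.items():
    print(f"{name}: score={element_score(scores):.2f}, "
          f"weakest={weakest_dimension(scores)}")
```

A real method would replace these subjective weights with measured correlations to financial or risk indicators, but even this toy version shows the kind of concrete, element-level output a valuation method should produce.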
Data becomes incredibly valuable to an organization when it can be used for statistical and predictive modeling to find abstractions, trends and patterns that are not easily observed by your competitors or customers. For data to be useful in statistical and predictive modeling it must be transparent, meaning the organization must have the ability to see how it was created, moved, changed, etc. It must also be trusted at specific points of its lifecycle by a large portion of the credible subject matter experts associated with a given type or category of data. Finally, data must have a common structure and purpose, to ensure that it is being used across the organization as intended. It is important to note that statistical modeling, which is the underlying foundation of almost all predictive models, cannot be successful on data that has no structure; the early step in using unstructured data is to structure it into an experimental framework. When these simple aspects are satisfied, the value that can be generated from an organization's data is substantial. A great case study on this is Netflix and its investment in House of Cards, which we covered in a previous blog post.
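That "structure it first" step can be sketched very simply. In this illustrative example (the messages, feature names and word list are all hypothetical), raw free-text customer messages are mapped into fixed, named features that a statistical model can actually consume:

```python
# Illustrative sketch of structuring unstructured data: raw support
# messages become fixed-schema feature rows. Features are hypothetical.
import re

raw_messages = [
    "Order #1123 arrived damaged, very unhappy!",
    "thanks, fast delivery",
    "Order #2210 never arrived. Refund please!!",
]

NEGATIVE_WORDS = {"damaged", "unhappy", "never", "refund"}

def to_features(text: str) -> dict:
    """Map one unstructured message to a structured feature row."""
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "word_count": len(words),
        "has_order_id": bool(re.search(r"#\d+", text)),
        "exclamations": text.count("!"),
        "negative_terms": sum(w in NEGATIVE_WORDS for w in words),
    }

rows = [to_features(m) for m in raw_messages]
for row in rows:
    print(row)
```

Once every message occupies the same experimental frame, standard statistical techniques apply; without that step, the raw text is opaque to modeling.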
The Data Ecosystem
As we have written about and explained in many of our previous publications, data is the lowest common denominator across people, process and technology: all three require data to function in the new work age, which creates an ecosystem around data. To delve into this comparison a little, the definition of an ecosystem is "a community of living organisms (plants, animals and microbes) in conjunction with the nonliving components of their environment (things like air, water and mineral soil), interacting as a system", Tansley (1934). Using that definition, data is a dynamic thing with a lifecycle. As data interacts with people, technology and processes, it changes to create new or augmented data. Although "ecosystem" has become a cliché used to describe almost anything these days, in this specific analogy there are many parallels that make it a good comparison for how data should be treated. Just as in ecosystems in nature, external and internal environmental factors are what cause things within an ecosystem to morph and evolve over time. There is a natural point of leverage in any organization by which focus can drive evolutionary change within the ecosystem, both good and bad! A very simple example using our analogy: in any forest there are water sources, and if the water sources are drastically polluted or removed, the entire ecosystem surrounding them changes for the worse, because almost every aspect of that ecosystem uses H2O or some derivative. A very similar reality exists with data. Over time data becomes polluted naturally if there are not appropriate investments to keep it whole and clean. If we bring this discussion back to the organization, this decaying or chaotic ecosystem carries substantial costs that eat away at the organization's ability to sustain or create competitive advantage.
As organizations grow in headcount, the propensity to create data chaos increases, and the ecosystem becomes lethargic and difficult to maneuver. In the next section we will discuss the economics of data and how improving the ecosystem can serve as a mechanism for improving the organization's economic value.
The economics of data
With deleveraging activities occurring in the US and abroad, the outlook for inflation or deflation is uncertain. Some obvious observations follow from that statement: if the remaining deleveraging activities are not handled well and the US sees periods of deflation, there is a significant opportunity cost for organizations whose available cash could be used to purchase non-cash assets at deep discounts, but cannot because cash is constrained by an unbalanced organizational ecosystem with a hefty cost structure. This scenario would significantly impair an organization's ability to take advantage of a "once every 100 years" type of opportunity, as G.E., General Foods, GMC, and Sears, Roebuck and Company all did in the '30s and '40s. Cost is certainly one aspect that must be considered; however, it is not the only one.
A more important, second-order consideration is the future value of the organization if its data capabilities are not enhanced today. The volume of data being generated and consumed within companies is on an exponential growth curve, and it is a very safe assumption that the cost of data will follow that exponential pattern if intervention does not occur. As with any cost curve, there are opportunities to adjust its shape. If these inflection points on the cost curve are not seized, it becomes extremely difficult to adjust the ecosystem fast enough to keep up with the curve's exponential nature, as doing so often requires investments in transforming people, process, technology and data all at once: the most extreme level of organizational change! All of that is a complicated way to say that data will become the thing that causes an organization to choke if it is not dealt with today. The symptoms of "organizational choking" are observed in regulatory findings related to data, customer experience problems that impact revenue, redundant efforts to maintain and manage data, misstatements to the market and shareholders based on faulty data, and the list goes on.
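The compounding argument above is easy to see with a small worked example. All figures here are hypothetical (a notional $1M annual data cost, 30% unmanaged growth, 10% growth after intervention); the sketch only illustrates why acting early on the curve is so much cheaper than acting late:

```python
# Illustrative compounding sketch; rates and dollar figures are
# hypothetical, chosen only to show the shape of the argument.
def projected_cost(base_cost: float, growth_rate: float, years: int) -> float:
    """Compound an annual data cost forward the given number of years."""
    return base_cost * (1 + growth_rate) ** years

base = 1_000_000   # hypothetical annual data-management cost today ($)
rate = 0.30        # hypothetical 30% annual growth if left unmanaged
reduced = 0.10     # hypothetical growth rate after intervention

# Compare intervening now vs. after five more years of unmanaged growth,
# both evaluated at year 10.
now = projected_cost(base, reduced, 10)
late = projected_cost(projected_cost(base, rate, 5), reduced, 5)
print(f"intervene now,  year-10 cost: ${now:,.0f}")
print(f"intervene in 5y, year-10 cost: ${late:,.0f}")
```

Under these assumed rates the five-year delay more than doubles the year-10 cost base, which is the inflection-point effect described above.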
The final point to make about the economics of data is one of hope for the future. Imagine if we had the ability to quantify data asset valuation from a GAAP-approved perspective. Assigning a financial value to this asset class would give an organization a tool to leverage and use in cost-of-capital and financial-ratio decision processes. It could be used to procure new funds to expand the organization's overall value, much like McDonald's did when it recognized the "Golden Arches" as an intangible asset with real dollars associated with it.
We need stitches, not band-aids
Given all the talk around data, it is not surprising that many firms are trying to take advantage of data to increase their own financial value, which is exactly how the market is supposed to work. However, some of the messages being generated by these firms run completely counter to actually improving the data landscape. Over the past several years we have observed a substantial push toward "governance of data" as the solution, and while we completely agree that governance is a critical component, it is not the solution to all data problems. We have also observed the emergence of architecture as a key component in the technology space, and again, while it is a key component, it is not the only component of importance. These types of solutions become band-aids that over time turn into "just another support function", so far removed from the revenue-generating activities of the organization that it is no longer justifiable: just another cost. While organizations absolutely need to address data, and they need to start now, they also need to be vigilant not to continue the proliferation of "support functions" scattered all over the company. The Office of Data should pull these crucial aspects of data together into a single group that is accountable for one thing: to manage the data ecosystem well. This group should be collectively focused on embedding the capabilities required to improve data into each and every function that touches or consumes data. Embedding these data capabilities into all functions of significance will increase the probability of influencing all of the key components of the financial statements: revenue, cost, tax, interest, etc.
The crucial components that need to be collected and formed are provided below, with a high-level framework by which each operates:
Data Governance
Objective of this group: To manage the delivery of capabilities that enable the organization to guide the purpose, meaning and use of data toward its intended ends. This group should contain business leaders, data experts, industry experts, data/information stewards, operational data custodians/hygienists, and data council members/sponsors.
[box type=”download”] Metadata Management [/box]
[box type=”download”] Reference Data Management [/box]
[box type=”download”] Master Data Management [/box]
Data Science & Analytics
Objective of this group: To leverage the various structured and unstructured data assets to generate predictive models and visual analytics for relationships, observations and abstractions that are not easily identifiable across and between subject areas. This group should contain data scientists, data miners, data analysts, and statisticians.
Business Process Improvement
Objective of this group: Business Process Improvement (BPI) is leveraged in capturing, re-engineering, optimizing or generally improving business processes. This group should contain master facilitators who have a good understanding of the organization's business model and industry trends.
Data Quality
Objective of this group: To develop solutions that sustain the data quality and data integrity deliverables of Data Governance, as well as perform stress testing and audits of the entire governance system to ensure it is functioning as intended. This group should contain data analysts, data quality specialists, data quality developers (ETL) and various subject matter experts.
[box type=”download”] Data Quality Rules & Logic [/box]
[box type=”download”] Security and Controls [/box]
[box type=”download”] Audit Procedures and Tests [/box]
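As a concrete illustration of the "Data Quality Rules & Logic" and "Audit Procedures" deliverables, quality rules can be expressed as named predicates that are run against every record, with failures collected for audit. The field names and rules below are hypothetical examples, not a prescribed rule set:

```python
# Illustrative sketch of declarative data-quality rules: each rule is a
# named predicate applied to every record; failures are collected for
# audit. Field names and rules are hypothetical examples.
import re

RULES = {
    "customer_id_present":  lambda r: bool(r.get("customer_id")),
    "email_well_formed":    lambda r: bool(re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", r.get("email", ""))),
    "balance_non_negative": lambda r: r.get("balance", 0) >= 0,
}

def audit(records):
    """Return {rule_name: [indexes of failing records]} for reporting."""
    failures = {name: [] for name in RULES}
    for i, record in enumerate(records):
        for name, rule in RULES.items():
            if not rule(record):
                failures[name].append(i)
    return failures

records = [
    {"customer_id": "C1", "email": "a@example.com", "balance": 10.0},
    {"customer_id": "",   "email": "bad-email",     "balance": -5.0},
]
print(audit(records))
```

Keeping rules declarative like this is what makes the audit repeatable: the same rule set can be stress-tested, versioned and re-run as the governance system evolves.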
Data Architecture
Objective of this group: To create a technical architecture and data ecosystem that enable the other data functions to optimize their processes. This group should contain data architects, data modelers and business analysts.
[box type=”download”] Data Models [/box]
[box type=”download”] Data Access Controls [/box]
[box type=”download”] Data Delivery Mechanisms [/box]
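To ground the "Data Models" deliverable, a fragment of a logical model can even be captured directly in code as typed entities with explicit keys. The entities and fields here are hypothetical examples, not a reference model:

```python
# Illustrative sketch: a fragment of a logical data model expressed as
# typed, immutable entities. Entity and field names are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class Customer:
    customer_id: str             # business key, owned by master data management
    legal_name: str
    onboarded_on: date
    email: Optional[str] = None  # optional contact attribute

@dataclass(frozen=True)
class Account:
    account_id: str
    customer_id: str             # foreign key -> Customer.customer_id
    currency: str = "USD"
    balance: float = 0.0

c = Customer("C-001", "Acme Corp", date(2014, 1, 15))
a = Account("A-100", c.customer_id)
print(c.legal_name, a.currency)
```

Expressing the model this explicitly, whatever the notation, is what lets architecture, governance and quality all point at the same definition of an entity rather than three diverging ones.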
Solution Delivery
Objective of this group: To build and execute plans that bring the Data Governance, Data Quality and Architecture deliverables to reality, in a way that is sustainable and manageable in the future. This group should contain developers, software engineers, system/database administrators, program/project managers, testing analysts and various subject matter experts.
[box type=”download”] Project and Execution Plans [/box]
While this article is certainly not all-encompassing of everything required to improve the value of your data, it should provide relevant content to begin your journey. This field will be one of the most significant drivers of innovation over the next ten years, and we believe that the organizations that focus on establishing the Office of Data now will gain a substantial competitive advantage in the future. If you need help understanding where your organization is on the journey to data as an asset, please contact our team to discuss the many services we offer in this new business landscape that is data.
[button link=”http://dataclairvoyance.com/contact-us/” color=”silver” newwindow=”yes”] Contact our data experts[/button]
References: Tansley (1934); Molles (1999), p. 482; Chapin et al. (2002), p. 380; Schulze et al. (2005), p. 400; Gurevitch et al. (2006), p. 522; Smith & Smith (2012), p. G-5.