Digital Networks and the End of Single Source of Truth

One of my "burning questions" for PLM in 2018 is about the value of data. Check my article (here). Which PLM company will prove that data is the new oil? I have said it many times: data is the new oil, and industries will be transformed by it. However, something is missing. While every company talks about a data-driven approach, data at the center, big data, machine learning, and many other things, I cannot see clear-cut winners in the PLM industry that have used their data to transform the way they do business and outpace rivals. Every company wants to be data-first, but it is unclear which firm actually delivers.

In the pre-digital age, data management platforms served a single function: to control data access and ensure the right data was used by the right people. Although these systems did not always succeed, and some of the deliveries were questionable, the goal was clear: bring the right data to people at the right time. In simple words, give me the right version of a Part and an Assembly and the job is almost done. These systems were used as electronic vaults of data with some workflow capabilities around them.

As companies move toward their new digital horizons, many things are changing. First, we are swimming in an ocean of data, and it is coming from everywhere: customers, sales, services, engineering, manufacturing, contractors, suppliers. Companies are moving from the linear traditional supply chain to a new type of so-called "Digital Supply Networks".

Networks are very different from the traditional siloed approach to data management. In the past, each step in the supply chain was identified by its own system of record: engineering, manufacturing, sales, etc. PLM strategy in those days was set around the concept of a single source of truth (SSoT). Some people called it a single version of truth. If you think there is no difference between them, you might be wrong. One of my industry colleagues, Lionel Grealou, draws the following distinction between them in his LinkedIn article, Single Source of Truth vs Single Version of Truth:

Single Source of Truth: a data storage principle to always source a particular piece of information from one place.

Single Version of Truth: one view of data that everyone in a company agrees is the real, trusted number for some operating data.
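The two definitions above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the part numbers, databases, and fields are invented for the example, not taken from any specific PLM system):

```python
# Single Source of Truth: every consumer reads the part record from one
# authoritative place and nowhere else.
engineering_db = {"PN-1001": {"rev": "B", "description": "Bracket"}}

def get_part(part_number):
    # The one and only place this piece of information is sourced from.
    return engineering_db[part_number]

# Single Version of Truth: data lives in several systems, but everyone
# agrees on one rule for producing the trusted view.
erp_db = {"PN-1001": {"cost": 12.40}}   # ERP owns cost
mes_db = {"PN-1001": {"yield": 0.98}}   # MES owns yield

def trusted_view(part_number):
    # Agreed-upon merge: engineering owns rev/description, ERP owns cost,
    # MES owns yield. The "truth" is the agreement, not a single database.
    view = dict(get_part(part_number))
    view.update(erp_db[part_number])
    view.update(mes_db[part_number])
    return view

print(trusted_view("PN-1001"))
# → {'rev': 'B', 'description': 'Bracket', 'cost': 12.4, 'yield': 0.98}
```

The point of the sketch is that the second pattern has no single storage location at all; what is "single" is the agreed rule for assembling the view.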

The difference is interesting and can help us move from old-fashioned siloed data management systems to a new, modern approach. From what I can see, companies that specialize in data are already catching on to such concepts.

The Forbes article Single Version Of Truth: Why Your Company Must Speak The Same Data Language by Brent Dykes explains how important it is to establish a single version (or view) of data that is shared between people in a company. Here is my favorite passage:

The retailer’s leadership team decided they needed one version of truth to align their various strategic initiatives on the same metrics. Having one consistent view of the right metrics would also reduce the potential for each initiative leader to massage their team’s results. Consequently, the CEO mandated that all the information for the monthly meetings must be based on data that resided within its business intelligence platform (Domo). He outlawed data from random analytics tools as well as the creation of any supporting documents (slides, spreadsheets, reports). Instead, each initiative had its own real-time dashboard with a simple collection of charts and an agreed-upon set of leading and lagging indicators. With this new approach, they were able to get everyone speaking a common language as they evaluated the performance of their strategic initiatives, which fostered more data-driven conversations, greater collaboration and faster decision making. In addition, the leadership team found they were better aligned and more closely focused on achieving their strategic objectives.

When you introduce a single version of truth, it is not about data accuracy first, but about agreement between people. I especially liked that point; it reminded me of many PLM implementations in which people from different departments try to establish trust and communication, but fail to agree about the data they want to share in PLM systems. This is where statements like "don't touch my BOM" or "don't touch my documents" come from.

Data networks bring a new reality to data management and communications. It is no longer possible to isolate systems into silos and send Excel files back and forth. Systems must operate in a mode where data is linked between them, and a single version of truth is established based on links and information gleaned from these networks.
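The link-based idea can be sketched as well. In this hypothetical example (all system names and attributes are invented for illustration), each system keeps mastering its own records, and the shared view is computed on demand by dereferencing links rather than by copying data into one vault:

```python
# Each system masters its own records; nothing is duplicated.
systems = {
    "plm":      {"PN-1001": {"rev": "B"}},
    "erp":      {"PN-1001": {"cost": 12.40}},
    "supplier": {"PN-1001": {"lead_time_days": 14}},
}

# Links record where each attribute of a part is mastered.
links = {
    "PN-1001": [("plm", "rev"),
                ("erp", "cost"),
                ("supplier", "lead_time_days")],
}

def resolve(part_number):
    # Build the single version of truth by following links at read time,
    # never by storing a merged copy anywhere.
    return {attr: systems[sys][part_number][attr]
            for sys, attr in links[part_number]}

print(resolve("PN-1001"))
# → {'rev': 'B', 'cost': 12.4, 'lead_time_days': 14}
```

Adding a new participant to the network only means adding a system entry and a link, which is the scaling property a single shared database does not have.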

What is my conclusion? The era of the single source of truth in PLM is coming to an end. It is not possible to establish a single data source and store all data in a single database; it is too limited and it doesn't scale from any standpoint. Data networks are coming to change this. Systems are becoming intertwined. The value of data in these networks is increasing, and it will lead to fundamental changes in business models that impact all players in the manufacturing supply chain. Just my thoughts…

Best, Oleg

Want to learn more about PLM? Check out my new PLM Book website.

Disclaimer: I'm co-founder and CEO of OpenBOM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups, and supply chains. My opinion can be unintentionally biased.




  • Lars Taxén

    There is a lot of talk nowadays about the importance of data and IoT. But aren't we missing something? In the midst of the digital transformation, we tend to forget that one thing remains unchanged: our human ability to make information and knowledge out of data. How do we design our information systems in such a way that we can make sense of all the data generated? Is the traditional hierarchy (data as facts, leading to information and then to knowledge of how to act) still valid? Or is it that data emerges only after we have information, and that information emerges only after we already have knowledge? Whichever way we adopt will have strong impacts on how our PLM systems should be designed. Maybe we should take a moment to reflect on how our human abilities are impacted by the data deluge before we dive into all the technical stuff? Just my thoughts…

  • vlna

    Lars, very relevant remark, I fully agree with you

  • beyondplm


    Thank you for your comments! Good point! It is always good to reflect on the human ability to understand data.

    In the past, a single company was able to reflect on its data and design a single database to keep everything. It didn't work for all of them, and sometimes people disagreed. But 10-15 years ago, I saw how companies deliberately defined a single database to keep all their data going. ERP vendors did a tremendous job optimizing execution plans.

    Here is what has changed, and two points to think about: 1/ The amount of data in a value chain is going beyond human ability to understand it. A typical manufacturing organization gets data from multiple sources, which can hardly be organized into a single database (source) of truth. 2/ The impact of the network is huge. Contractors, suppliers: how do we include their impact? Companies work in networks of data distributed between multiple companies and online data sources.

    So, getting back to your proposal: our human ability to understand complex data can be improved by computers' ability to process data and "connect the dots".

    I can give you one example. In my company, we are capable of analyzing the global graph of multiple products. When the number of products and companies goes up, the human ability to understand this data without the ability to connect the "dots" using data analysis is questionable.

    What do you think?


  • Lars Taxén

    I believe we have to take a step back and think about the basic human conditions for doing product development and maintenance, "Beyond PLM" so to speak (good term!). I see this basis as the situation in which the work is carried out: marketing, development, production, after sales, or whatever. Let's call these situations "work contexts". Every such work context is unique in terms of motivation, what the work is about, the tools used, the rules adhered to, the processes, and so on. In particular, there are skilled and knowledgeable participants, honed to carry out the work in the most efficient way. This is of course commonplace knowledge, which, to our detriment, tends to be ignored.

    What does this mean? Well, first of all, the data needs to be relevant for the work context. A lot of data floating around in repositories is virtually useless in a particular work context, and should never be made visible above its 'relevance horizon'. Only relevant data can be transformed into well-informed actions by participants.

    Second, each work context will nourish its own data; a ‘data silo’ so to say. Such silos are enablers of the work and we must abolish the thought once and for all that silos are bad and should be levelled out. What should be in focus is the balance between locally and globally relevant data in such silos.

    Third, we must do away with the term ‘truth’ when talking about data. There is no such thing, simply because data will never be ‘enough’ or ‘complete’. What matters is whether the data is relevant and useful for the work, and that is a matter of constant negotiations among participants.

    This does not mean that each work context will necessarily house all relevant data in its own repositories. Relevant data may be found in all sorts of places. One important role of PLM systems and other systems is then to indicate, for each work context, where such data may be found: connecting the "dots", as you say (if I understand you correctly). The converse is also true: each work context needs to make visible such data as is relevant for other work contexts. And the whole set of work contexts in an organization must adhere to some common data, for example part numbers, revision rules, and so on. Finally, work contexts are always open-ended, never settled, and in constant change. Thus, the PLM system needs to be extremely easy to modify to keep up with changes.

    Indeed, there is a lot more to say about the human aspect of doing PLM work, but I guess this will do for now. In any case, I truly believe that the data deluge will quickly become impossible to manage unless we adhere to the fundamental conditions for human work as outlined above. Just my thoughts …
