A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

PLM Single Source of Truth: Why It Was Never the Final Answer (And What Comes Next)

Oleg
28 March, 2026 | 18 min read

I have been writing about Single Source of Truth for a long time. If you have been following Beyond PLM, you know this is a thread I keep returning to, because the concept keeps evolving and the industry keeps misreading where it stands.

A couple of years ago I wrote about how SSOT evolved from files and folders all the way to Digital Twins, Digital Threads, and Product Knowledge Graphs. The argument there was that the idea of a single database controlling all product truth was already obsolete, and that the industry was moving toward more federated, connected models. Then in December I came back to the topic in the context of change management, arguing that what we actually need is not a single source of truth but a single source of change — a controlled point where decisions get made and propagated, even if the data itself lives in multiple places. And most recently I explored how eventual consistency is replacing the old monolithic SSOT model as distributed PLM architectures become the new reality.

Each of those articles was describing the same underlying shift from a different angle. SSOT was never the final destination. It was a station. An important one. One the industry needed to pass through. But not where the journey ends.

So the question I want to ask today is: what is the next station?

[Image: Vintage train at a station labeled “Single Source of Truth,” symbolizing the evolution of PLM and the digital thread in product data management]

I think we have been so focused on where SSOT fell short — the fragmented implementations, the system ownership battles, the architectural limitations — that we missed the deeper reason it was never going to be enough. It was not just that the goal was technically hard to achieve. It was that the goal was aimed at the wrong thing. The industry spent decades trying to consolidate and connect product data, and by most measures it succeeded, at least partially. We have structured BOMs in PLM. We have digital threads linking systems. We have distributed architectures managing consistency across domains.

And yet, when someone asks why a product is the way it is, the system is still silent. That is what this article is about. Not the failure of SSOT. But the gap it left behind, and what needs to fill it.

Why PLM’s Single Source of Truth Was Built on the Wrong Assumption

The vision made sense on a whiteboard. You have a product. That product has a structure. If everyone works from the same structure, you eliminate errors, reduce rework, and move faster. The logic was clean.

But here is the thing that the whiteboard missed: a product does not have one structure. It has many. And those structures have always existed, whether the software vendors acknowledged it or not.

Engineering maintains an engineering BOM. Manufacturing maintains a manufacturing BOM. Procurement works from a sourcing BOM with approved vendors and alternatives. Service works from as-built records that reflect what actually shipped, not what was originally designed. Each of these is a real structure, maintained by real people, serving a real purpose. In smaller companies, they live in Excel files. In larger ones, they get split across PLM, ERP, MES, and whatever else the company has accumulated over the years.

This is not a failure of discipline. It is the natural shape of how products actually exist across an organization.

The problem started when the software industry looked at this landscape and decided the right response was consolidation. Pick one system. Make it the authority. Force everything else to defer to it. The pitch was irresistible: one version of the truth, no conflicts, no confusion.

What happened in practice was something nobody put in the sales deck. Systems started fighting for data ownership.

PLM vendors argued that engineering was the origin of all product knowledge, so PLM should be the master. ERP vendors argued that nothing matters until it affects procurement and production, so ERP should be the master. Each camp had a legitimate point, which is precisely why the fight never ended. Companies spent years, sometimes decades, negotiating which system owned which data fields, building fragile integrations to keep them synchronized, and managing the political fallout when the boundary lines shifted after a reorg or a new executive with a different vendor preference.

The goal of winning that fight became a distraction from something more important: actually serving the people who needed the data to do their work.

Because the real issue was never which system owned the structure. The real issue was that a product looks completely different depending on who is looking at it, and those different perspectives are not errors to be corrected. They are legitimate.

Engineering sees a design intent. A set of parts chosen to meet functional requirements within constraints. Their version of the product is a set of decisions made under uncertainty, with tradeoffs embedded in every choice. Manufacturing sees the same product filtered through what is actually buildable. Some parts get substituted. Assembly sequences change. Process constraints reshape what the design team assumed was fixed. Procurement sees a supplier network, vendor relationships, lead times, pricing negotiations, alternative sources. Service sees a population of physical units in the field, each one slightly different depending on when it was built and what modifications were made along the way.

These are not duplicate records that need to be collapsed into one. They are different perspectives on the same product, each one valid within its domain, each one necessary for the work that domain has to do.
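
To make this concrete, here is a minimal sketch in Python. All the names and fields are mine, invented for illustration; they are not taken from any PLM or ERP system.

```python
from dataclasses import dataclass

@dataclass
class Part:
    number: str
    description: str

# One part. Three legitimate views of it, each serving a different domain.
housing = Part("P-1042", "Machined aluminum housing")

ebom_view = {       # Engineering: design intent
    "part": housing,
    "design_intent": "meets thermal and weight requirements",
    "revision": "B",
}

mbom_view = {       # Manufacturing: what is actually buildable
    "part": housing,
    "substitute": "P-1042-ALT (cast variant, cheaper at volume)",
    "assembly_step": 12,
}

sourcing_view = {   # Procurement: the supplier network
    "part": housing,
    "approved_vendors": ["Vendor A", "Vendor B"],
    "lead_time_weeks": 14,
}

# Collapsing these into one "master" record means one domain's view
# wins and the other two get flattened into fields nobody owns.
```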

The single source of truth was not a realistic goal. It was a simplification that made software easier to sell and harder to actually use. And the years spent fighting over data ownership were years not spent building something better.

The ownership question has a practical dimension too, one I explored directly on the OpenBOM blog: Who Owns Product Knowledge? The Hidden Problem with PLM and Data.

Where PLM Data Ownership Breaks Down: Engineering vs. Manufacturing vs. ERP

Let me make this concrete, because I think the abstract version lets people off the hook too easily.

In smaller companies, the fragmentation is obvious and visible. Engineering has a BOM in CAD or Excel. Manufacturing has their own spreadsheet with production adjustments. Procurement has another file tracking approved vendors and substitutions. Nobody planned it this way. It just evolved because each team needed to work with the data in a way that fit their actual job.

The connection between these versions lives in email threads, Slack conversations, and weekly meetings where someone says “by the way, we changed that part last month.” When something goes wrong, the investigation usually reveals that a change was communicated, just not in any system. It lived in a message that someone read and then forgot to act on, or acted on without updating the record.

I have talked to hundreds of companies in this situation. The universal experience is that people do not go to the system when they have a question. They ask each other. The system tells them what something is. Other people tell them why.

In larger enterprises, the story is the same problem wearing a different costume. They invested in PLM. They built controlled revision processes. They implemented change orders and approval workflows. On paper, everything is connected. In practice, what these systems capture is status.

Version A became version B. Change order 4721 was approved. The part moved from “in review” to “released.” The workflow completed. But go back and ask why version B is different from version A, and the system gives you nothing. Look at change order 4721 and ask what alternatives were considered before this decision was made, and the system is silent. Ask what constraint drove the supplier switch that happened two years ago, and the only people who might remember are the ones who were in the room at the time, assuming they still work there.

Status is not memory. A record of what happened is not the same as an understanding of why.
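
A small sketch makes the difference visible. These are hypothetical structures, not any vendor's schema; both records describe the same change order, but only one of them can answer a why question.

```python
from dataclasses import dataclass

# What a typical PLM change order captures: a state transition.
@dataclass
class ChangeOrderStatus:
    number: str
    from_revision: str
    to_revision: str
    state: str          # "in review" -> "released"
    approved_by: str

# What it would have to capture to answer "why": the reasoning.
@dataclass
class ChangeOrderDecision:
    number: str
    problem: str
    alternatives_considered: list[str]
    rejected_because: str
    constraint_at_the_time: str

status = ChangeOrderStatus("4721", "A", "B", "released", "J. Smith")

decision = ChangeOrderDecision(
    number="4721",
    problem="connector failed vibration test in the third production run",
    alternatives_considered=["redesign the bracket", "switch connector supplier"],
    rejected_because="bracket redesign meant a 14-week tooling lead time",
    constraint_at_the_time="program under schedule pressure, ship date fixed",
)

# The first record says WHAT happened. Only the second can explain
# WHY revision B differs from revision A.
```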

If you want to see how this plays out at the BOM level specifically, I wrote about it in more detail on the OpenBOM blog: Why Your BOM Doesn’t Tell You What You Need to Know.

The Digital Thread Problem: Why Connecting Systems Isn’t Enough

Over the last decade, the industry moved from the single source of truth narrative to a new one: the Digital Thread. The promise evolved. Instead of one system, we would connect all the systems. CAD to PLM. PLM to ERP. ERP to MES. Data would flow across the product lifecycle, and teams would have visibility into information that used to be trapped in silos.

I think the Digital Thread was a genuine step forward. I want to be clear about that. Connecting systems matters. Reducing manual data transfer reduces errors. Making data accessible across organizational boundaries is legitimately valuable.

But connection is not the same as understanding, and I think the industry moved too quickly to celebrate connectivity as if it solved the underlying problem.

When you trace a Digital Thread from design to manufacturing to service, you are following data. You can see what part is installed where. You can see which revision is current. You can see which supplier is certified. This is useful. It is better than not having it.

What you cannot see is the reasoning underneath the data. You cannot see that this particular part was chosen because a better option had a 14-week lead time during a period when the program was under schedule pressure. You cannot see that the supplier was changed not because of quality issues but because of a relationship that someone above the project wanted to preserve. You cannot see that the alternative design was rejected in a conversation that happened in a hallway after a formal design review, and that rejection shaped everything that came after.

The Digital Thread connects the dots. It does not tell you why the dots are where they are.


The Missing Layer in PLM: Context Graphs and Decision History

This is where I want to introduce something I have been thinking about for a while, and something that I think is becoming increasingly hard to ignore.

When the industry first started working on PLM, we framed the problem as a data problem. Get the data in one place. Keep it current. Make it accessible. Structure it correctly. Manage revisions. Organize “where used” references. And when that felt insufficient, we reframed it as a connection problem. Connect the data across systems. Build integrations. Create visibility. That was the Digital Thread idea.

Both the data problem and the connection problem are real and need to be solved. But the problems engineering and manufacturing organizations face in 2026 cannot be solved by organizing and connecting data alone. The real problem is a context problem.

The relationships that actually matter in product development are not part-to-part relationships or system-to-system integrations. They are decision-to-decision relationships. The reason this fastener is this size is connected to a decision made about the housing tolerance two years ago. The reason this assembly sequence exists is connected to a quality issue that occurred in the third production run. The reason the software version and the hardware version are coupled the way they are is connected to a field failure that happened during early deployment.

These are context relationships. And they exist nowhere in any system I have ever seen. They live in the heads of the engineers and program managers and manufacturing leads who were there when the decisions were made. And when those people leave, which they do, the context goes with them.
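
If you tried to write these relationships down, they would form a graph of decisions, not a tree of parts. Here is a toy sketch; the node names and the "because_of" edge are my invention, purely illustrative:

```python
# Nodes are decisions, parts, and events; edges record "because of".
context_graph = {
    "decision:fastener-size-M4": {
        "affects": ["part:FST-204"],
        "because_of": ["decision:housing-tolerance-2024"],
    },
    "decision:housing-tolerance-2024": {
        "affects": ["part:HSG-101"],
        "because_of": ["event:quality-issue-production-run-3"],
    },
    "event:quality-issue-production-run-3": {
        "affects": ["process:assembly-sequence-v2"],
        "because_of": [],
    },
}

def why(node: str, depth: int = 0) -> None:
    """Walk the because_of edges to reconstruct a chain of reasoning."""
    print("  " * depth + node)
    for cause in context_graph.get(node, {}).get("because_of", []):
        why(cause, depth + 1)

why("decision:fastener-size-M4")
# decision:fastener-size-M4
#   decision:housing-tolerance-2024
#     event:quality-issue-production-run-3
```

No PLM system I know of stores these edges. The parts exist in the BOM; the "because of" links exist only in people's heads.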

Every senior engineer in manufacturing knows this experience. A new team member asks why something is done a certain way, and the honest answer is “because of something that happened years ago that I no longer fully remember.” Or worse, nobody in the current team knows at all, and the constraint that exists for a very good reason gets ignored until the problem it was preventing comes back.

I wrote about this directly in my article on Context Graphs: PLM Beyond Systems of Record. The argument there was that PLM systems became very good at remembering what changed and when it changed, but they never learned how to remember why decisions were made. The reasoning, the debate, the alternatives considered and rejected, the tradeoffs that felt acceptable under the constraints of the moment — all of that lives outside the system. It lives in Excel, in email threads, in meetings, in chat. When the decision is finally made, the outcome gets imported back into PLM, recorded, versioned, approved. The thinking that led there is gone.

This framing also connects to something broader that caught my attention late last year. The team at Foundation Capital published a piece arguing that context graphs represent AI’s next trillion-dollar opportunity — not in PLM specifically, but across enterprise software. Their core distinction is one I think is exactly right: there is a difference between rules, which tell a system what should happen in general, and decision traces, which capture what happened in this specific case, under these specific constraints, with this specific exception approved by this specific person for this specific reason. Rules live in systems. Decision traces live nowhere, or they live in Slack threads and people’s heads, which is functionally the same thing.

What Foundation Capital describes for enterprise software broadly is the precise problem that engineering and manufacturing organizations have been living with for decades. PLM is a system of record for product states. It is not a system of record for product decisions. Every ECO in the system is a clean state transition. What it does not show is the messy human process that preceded it — the Excel markups, the email debates, the hallway conversations, the alternatives that almost made it. PLM captures outcomes. Context graphs would capture the reasoning that produced them.
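
In code terms, the distinction looks roughly like this. This is a sketch of my own, following the rules-versus-traces framing rather than any existing product:

```python
# A rule: what should happen in general. Rules live in systems.
rule = "All supplier changes require quality re-certification."

# A decision trace: what happened in THIS case, under THESE
# constraints, with THIS exception. Today this lives nowhere.
decision_trace = {
    "case": "supplier switch on part P-1042",
    "rule_applied": rule,
    "exception": "re-certification waived",
    "approved_by": "VP of Engineering",
    "reason": "existing certification covered the same production line",
    "date": "2024-03-11",
}
```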

This is not a people problem. You cannot solve it by writing better documentation or holding better meetings. It is a structural problem. We built systems that capture data and ignore context, and then we acted surprised when the context kept getting lost. The gap between what PLM knows and what the organization actually knows is not a data quality issue. It is an architectural one. And until we treat decision traces as first-class artifacts, the same knowledge will keep walking out the door.

How PLM Loses Engineering Knowledge Over Time

There is another dimension that I think we consistently underestimate, and that is time.

A product is not a snapshot. It is a sequence of decisions made over months or years, each one shaped by constraints and knowledge that existed at a specific moment. The current state of a product is the result of that entire sequence. But most PLM systems treat the current revision as the product, with history treated as an archive that nobody looks at.

And even that archive is incomplete, because a significant portion of the decision-making that shaped a product never made it into any system at all.

Here is a pattern I have seen countless times. An engineer exports a BOM from PLM into Excel. She shares it with the team over email. People add comments, suggest changes, debate alternatives. Someone marks up the spreadsheet. Replies go back and forth. Eventually the group converges on a decision. The engineer takes the agreed result and updates PLM. The new revision gets released. The change order is closed.

What PLM captured is the outcome. What it did not capture is everything that happened between the export and the import. The alternatives that were considered and rejected. The constraint that ruled out the first choice. The compromise that the team agreed to because of a schedule pressure that no longer exists. The concern that someone raised and everyone agreed to revisit later and then forgot about entirely.

The Excel file probably still exists somewhere, in someone’s downloads folder or a shared drive that nobody has organized in three years. The email thread is buried in inboxes. The reasoning is distributed across people’s memories, degrading a little more every month.

This is not an edge case. This is the normal workflow in most engineering organizations. PLM is the system of record for the result. The process that produced the result happened outside it, in the tools where people actually collaborate: spreadsheets, email, chat, meetings. And none of that process gets captured.

When a revision is released, the decision-making that led to it is effectively erased from the active record. You can go back and look at old change orders, if they were carefully documented, and try to reconstruct the reasoning. But in practice nobody does this, because it is slow and the information is incomplete and the people involved may not be available.

The result is that institutional knowledge about a product decays continuously, even in companies with sophisticated PLM implementations. Each major personnel transition, each platform migration, each reorg creates another opportunity for context to be permanently lost. And the loss is invisible until the moment someone asks a question that should have an answer and discovers that the answer walked out the door with someone who left two years ago.

We keep thinking of this as a knowledge management problem, something to solve with wikis or documentation requirements or better onboarding processes. But those approaches all require humans to voluntarily capture and maintain context in addition to their actual work. That is asking for something that does not scale and does not happen consistently, for the same reason that people do not update their expense reports the day they incur the expense. The intention is there. The follow-through is not.

The context needs to be captured structurally, as part of the work itself, not as a separate documentation activity layered on top of it after the fact. I went deeper on the specific mechanisms of this knowledge loss on the OpenBOM blog: Where Product Lifecycle Knowledge Gets Lost and Why It Matters.
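
What could "captured structurally" look like? One possible shape, sketched under my own assumptions (none of this is an existing API): the decision trace becomes a required input to the release step itself, not a documentation task layered on afterward.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    what_changed: str
    why: str
    alternatives_rejected: list[str]
    decided_by: str

def release_revision(part_number: str, new_rev: str, trace: DecisionTrace) -> None:
    """Release a revision only together with its decision trace.

    The trace is a required argument of the workflow itself, not an
    optional step someone is supposed to remember after the fact.
    """
    if not trace.why.strip():
        raise ValueError("a revision cannot be released without its reasoning")
    timestamp = datetime.now(timezone.utc).isoformat()
    print(f"[{timestamp}] released {part_number} rev {new_rev}: {trace.why}")

release_revision(
    "P-1042",
    "C",
    DecisionTrace(
        what_changed="switched housing from machined to cast variant",
        why="machined variant had a 14-week lead time under schedule pressure",
        alternatives_rejected=["keep machined part and slip the schedule"],
        decided_by="program manager",
    ),
)
```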

Product Memory: The Next Evolution Beyond PLM and Digital Thread

Let me try to bring this together.

For thirty years, the manufacturing and engineering software industry worked on what it defined as a data problem. Create better structures. Implement better workflows. Build better integrations. And these investments produced real value. I am not dismissing them.

SSOT was a real achievement. Moving from scattered files and folders to controlled databases and revision processes represented genuine progress. The Digital Thread that followed was another step forward. Connecting systems, reducing manual transfer, creating visibility across the product lifecycle — these things matter.

But every time I talk to engineering and manufacturing teams today, even the ones with mature PLM implementations, they hit the same wall. Someone asks a question that should have an answer. Why is this part here? Why did we stop using that supplier? What changed between these two versions and why? And the system is silent. The answer is not in the data. The answer is not in the connections between the data. The answer lived in the reasoning that produced the data, and that reasoning was never captured anywhere.

We passed through the SSOT station. We passed through the Digital Thread station. Both moved us forward. But neither was the final destination.

The gap they left is not a data gap. We solved most of the data problem, imperfectly but substantially. It is not a connection gap either. The Digital Thread addressed most of that. The gap is a memory gap. The reasoning, the context, the history of decisions that explains why the product is what it is — that layer does not exist in any system I have seen. It lives in people’s heads, in buried email threads, in Excel files that capture results without capturing the conversations that produced them.

I have been calling this Product Memory. Not a product category yet. Not a specific system you can buy. A concept that names something real, something that has been missing from our industry’s mental model for a long time.

SSOT assumed that if you centralized the data, understanding would follow. It did not. The Digital Thread assumed that if you connected the data, context would emerge. It did not. Product Memory is the proposition that context and reasoning need to be first-class citizens in how we think about product lifecycle management. Not a feature of PLM. Not an add-on to the Digital Thread. The next station on a journey the industry has been on for forty years.

I do not think this is a small idea. I think it is the thing we missed. And I think we are just starting to figure that out.

What do you think? Is Product Memory the right frame for what is missing? I would be curious to hear how this gap shows up in your organization.

Best, Oleg 

Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights. 
