
PLM Evolution: Single Source of Truth, and Eventual Consistency

Oleg
18 March, 2025 | 5 min read

In some of my recent articles, I discussed the transformation of one of the main principles of PLM development, a principle that has been around since the beginning of the PLM vision and system architecture: the Single Source of Truth (SSOT). Check some of my earlier articles:

Navigating the Evolution of Single Source of Truth

Rethinking Change Management: Collaborative Workspace Technical Architecture

One of my readers recently posed an insightful question: if you have a single point of change, doesn’t it inherently become the only reliable source of truth? This question is particularly relevant today as PLM moves toward a distributed systems future. Understanding this shift requires a closer look at traditional PLM architectures, the challenges of modern distributed data management, and the implications of concepts like eventual consistency and the CAP theorem.

Before moving forward, I want to briefly revisit the CAP theorem trade-offs.

CAP Theorem Trade-offs

Let me start by recalling the main principles of the CAP theorem. In a distributed system, it is impossible to simultaneously guarantee consistency, availability, and partition tolerance. Different system models prioritize different trade-offs:

  • CP (Consistency + Partition Tolerance): Ensures data consistency across nodes, even if some nodes are unreachable. However, it sacrifices availability, meaning some requests may be rejected during network failures.
  • AP (Availability + Partition Tolerance): Ensures the system remains operational despite network failures. This comes at the cost of consistency, as some nodes may return stale or divergent data.
  • CA (Consistency + Availability) – Theoretical Only: Guarantees strong consistency and high availability but sacrifices partition tolerance. This model is not practical for distributed systems because a network partition would cause the system to fail.
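
To make these trade-offs concrete, here is a minimal Python sketch (purely illustrative, with hypothetical class names, not code from any real PLM system) contrasting how a CP-style replica and an AP-style replica answer a read request during a network partition:

```python
# Illustrative sketch: how CP and AP replicas behave during a network partition.
# All names here are hypothetical; this is not code from any real PLM system.

class Replica:
    def __init__(self, data):
        self.data = dict(data)        # local copy of the data
        self.partitioned = False      # True when cut off from the primary

class CPReplica(Replica):
    """CP: prefers consistency; refuses to answer when it may be stale."""
    def read(self, key):
        if self.partitioned:
            raise RuntimeError("Unavailable: cannot guarantee consistency during partition")
        return self.data[key]

class APReplica(Replica):
    """AP: prefers availability; answers even if the value may be stale."""
    def read(self, key):
        return self.data[key]  # may return stale data during a partition

cp = CPReplica({"bom_rev": "A"})
ap = APReplica({"bom_rev": "A"})
cp.partitioned = ap.partitioned = True  # simulate a network partition

print(ap.read("bom_rev"))   # "A" -- available, but possibly stale
try:
    cp.read("bom_rev")
except RuntimeError as e:
    print(e)                # rejected -- consistent, but not available
```

The point is not the code itself but the behavior: the AP replica always answers, possibly with stale data, while the CP replica refuses to answer rather than risk inconsistency.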

The Historical Perspective: SQL-Centric PLM

When PLM systems first emerged, consolidating all data into a single SQL database was a logical approach. It provided a centralized, structured way to manage product information, ensuring data integrity and consistency within an organization. Most traditional PLM platforms still operate on this SQL-based architecture, treating a single database as the “Single Source of Truth” (SSOT). The fundamental assumption was that all changes would be made within this monolithic structure, ensuring absolute consistency and traceability.

However, this model was designed for a different era, when organizations were smaller, data volumes were more manageable, and global collaboration was limited. As businesses scale and operate in increasingly complex environments, the limitations of this approach have become evident. This doesn’t mean that SQL databases won’t be used at all, but the architecture of PLM systems is shifting and will continue to evolve.

The Shift Toward Distributed PLM Systems

Today, we live in a world of large-scale, distributed organizations that generate and consume vast amounts of data across multiple platforms. The nature of product development and manufacturing now requires real-time collaboration across various locations, systems, and stakeholders. This reality necessitates a fundamental change in how PLM architectures are designed.

Distributed systems prioritize availability, scalability, and resilience—qualities that are often at odds with the traditional PLM model of a single SQL database. As a result, modern PLM platforms are embracing new architectural principles, including:

  • Polyglot Persistence: Using multiple database technologies (SQL, NoSQL, GraphDB) to optimize different types of data storage and retrieval.
  • Microservices and APIs: Enabling modular, loosely coupled services to manage different aspects of product data.
  • Eventual Consistency: Allowing data to propagate across systems asynchronously, ensuring high availability while tolerating temporary inconsistencies.

The Role of Eventual Consistency in Modern PLM

Eventual consistency is a widely adopted approach in cloud computing, NoSQL databases, and large-scale web applications. It ensures that while data may not be immediately synchronized across all systems, it will eventually converge to a consistent state. This principle allows distributed PLM platforms to prioritize system availability and performance while still ensuring reliable data management.
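
As a rough illustration of the mechanism, here is a hypothetical Python sketch in which an update is accepted by one node and propagated to the others through an asynchronous event queue; all names and data are made up for the example:

```python
from collections import deque

# Hypothetical sketch of eventual consistency: an update is accepted by one
# node and propagated asynchronously; replicas are temporarily inconsistent
# but converge once the event queue drains.

replicas = [{"part-123": "rev A"} for _ in range(3)]
event_queue = deque()

def write(replica_id, key, value):
    """Apply the change locally, then enqueue it for the other replicas."""
    replicas[replica_id][key] = value
    for other in range(len(replicas)):
        if other != replica_id:
            event_queue.append((other, key, value))

def sync_step():
    """Deliver one pending event (simulates asynchronous propagation)."""
    if event_queue:
        target, key, value = event_queue.popleft()
        replicas[target][key] = value

write(0, "part-123", "rev B")
print([r["part-123"] for r in replicas])  # ['rev B', 'rev A', 'rev A'] -- temporarily inconsistent

while event_queue:
    sync_step()
print([r["part-123"] for r in replicas])  # ['rev B', 'rev B', 'rev B'] -- converged
```

Between the write and the last sync step, readers of different replicas can see different values, which is exactly the temporary inconsistency that eventually consistent systems accept in exchange for availability.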

However, this raises a fundamental conflict in traditional PLM thinking: if PLM is expected to be the Single Source of Truth, how do we reconcile this with a distributed system that permits temporary inconsistencies? The answer lies in distinguishing between Single Source of Truth (SSOT) and Single Source of Change (SSOC).

SSOT vs. SSOC: The Key Distinction

Traditional PLM systems were built on the assumption that SSOT means having a single database where all product data is stored and modified. However, in a distributed environment, this assumption no longer holds. Instead, modern PLM architectures should focus on Single Source of Change (SSOC)—ensuring that changes originate from a controlled and authoritative source, even if the data itself is distributed.

For example, a cloud-native PLM system may allow different services to store and retrieve product data independently, but changes to critical product information (e.g., CAD models, BOMs, or compliance data) should be managed through well-defined workflows, version control, and event-driven synchronization mechanisms. This approach ensures traceability and control while embracing the realities of distributed systems.
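
To show what SSOC might look like in practice, here is a small hypothetical sketch (the service, event, and store names are assumptions, not any vendor’s API): the data lives in several independent stores, but every change passes through one authoritative service that versions it and publishes it as an event:

```python
# Hypothetical SSOC sketch: data is distributed, but every change passes
# through a single authoritative service that versions it and emits an event.

class ChangeService:
    """Single Source of Change: validates, versions, and broadcasts updates."""
    def __init__(self):
        self.revisions = {}     # item -> current revision number
        self.subscribers = []   # downstream services holding copies of the data

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def apply_change(self, item, payload, author):
        if not payload:
            raise ValueError("Change rejected: empty payload")  # stand-in for a real approval workflow
        rev = self.revisions.get(item, 0) + 1
        self.revisions[item] = rev
        event = {"item": item, "rev": rev, "payload": payload, "author": author}
        for notify in self.subscribers:   # event-driven synchronization
            notify(event)
        return rev

# Two independent stores (e.g., a search index and an ERP cache) stay in sync
# by consuming events rather than being written to directly.
search_index, erp_cache = {}, {}
svc = ChangeService()
svc.subscribe(lambda e: search_index.update({e["item"]: e["rev"]}))
svc.subscribe(lambda e: erp_cache.update({e["item"]: e["payload"]}))

svc.apply_change("BOM-42", {"qty": 4}, author="oleg")
print(search_index, erp_cache)  # both copies updated via the single change path
```

The data is distributed, yet traceability is preserved, because there is exactly one path through which changes enter the system.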

The Future of PLM: Rethinking Core Principles

Given the challenges and opportunities presented by distributed architectures, PLM vendors and practitioners must rethink fundamental aspects of PLM technology, including:

  • Collaboration Models: Moving beyond file-based sharing to data-driven, real-time collaboration across multiple systems.
  • Change Management: Implementing robust mechanisms for managing updates, conflicts, and approvals in a distributed environment.
  • Revision Control: Ensuring that different versions of product data are managed effectively, even when stored across various platforms (see the sketch after this list).
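
For the revision-control point above, here is a minimal hypothetical sketch of optimistic revision control: each update declares the revision it was based on, so conflicting edits made against stale copies are detected instead of silently overwriting each other. It is an assumption-level illustration, not a prescription:

```python
# Hypothetical sketch of optimistic revision control for distributed product data:
# each update declares the revision it was based on; a mismatch signals a conflict
# that must go through change management instead of silently overwriting data.

class RevisionedItem:
    def __init__(self, value):
        self.value = value
        self.rev = 1

    def update(self, new_value, based_on_rev):
        if based_on_rev != self.rev:
            # Someone else changed the item since this editor last read it.
            raise ValueError(f"Conflict: item is at rev {self.rev}, change was based on rev {based_on_rev}")
        self.value = new_value
        self.rev += 1

bom = RevisionedItem({"screws": 10})
bom.update({"screws": 12}, based_on_rev=1)     # ok -> now at rev 2

try:
    bom.update({"screws": 8}, based_on_rev=1)  # stale edit -> conflict detected
except ValueError as e:
    print(e)
```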

What is my conclusion?

It is becoming increasingly clear that managing a digital thread in a single SQL database using a 1990s-era PLM architecture is impractical. Instead, Single Source of Change is emerging as the dominant model for modern PLM applications, allowing for flexibility, scalability, and resilience. However, achieving this requires a shift in mindset—from monolithic, tightly controlled databases to distributed, event-driven, and API-first architectures.

Just my thoughts… What is your take on this transformation? Let’s discuss!

Best, Oleg

Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative services including PDM, PLM, and ERP capabilities. With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinion can be unintentionally biased.
