Manufacturing companies are under pressure to adopt new technologies in order to remain competitive. As the world of product development and manufacturing becomes more complex, organizations are searching for new ways to manage product information. Product lifecycle management (PLM) enables companies to manage their products through every stage of the life cycle, and it remains one of the key technological elements supporting product development. Modern PLM systems are actively expanding with digital twin, digital thread, systems engineering, and other product concepts and technologies. While these innovative concepts are critical and important, data management is the real Achilles heel of all established PLM systems, one that leads to their inefficiency and even failure to scale.
PLM and Managing Data
Data and data management are the most critical elements of any PLM system. However, if you look at all existing PLM platforms, you will see traditional data modeling and relational databases. The data management architecture of these platforms goes back 25+ years and is basically nothing more than relational database technology with a fancy object modeler on top – a PLM Modeler Software Wasteland.
I highly recommend this article, as it covers very important topics in evaluating data management for PLM. For decades, PLM vendors and companies providing implementation services have developed a huge number of data stores and logical data storage technologies. Managing data is quickly becoming a huge pain when a manufacturing company looks at how to organize information and apply it to efficient decision-making. At the same time, PLM vendors still live with the paradigm of a single source of truth, which in practice means a single database in all existing mature (legacy) PLM platforms.
Single databases empowered existing PLM platforms, but as we move into a post-monolithic PLM world, the question of how to create a better and more efficient data management architecture is becoming louder. What was good 20 years ago hardly scales to the needs of modern manufacturing companies, from the standpoints of both product complexity and distribution of information. It is not feasible to put all data in a single SQL database – product information lives in multiple independent data stores. At the same time, product development and the product lifecycle span multiple companies and geographies. What data management architecture can solve such problems?
Modern Data Management Stack
In my earlier article, I shared my perspective on the modern data management stack. Check it out. One of the key elements of modern data management architecture is the use of multiple data stores (databases) tuned for specific tasks or domains (aka polyglot persistence). The efficiency of these databases (e.g., NoSQL, graph, etc.) is much higher than that of a “universal” SQL database, and the combination of specific data management tools with a microservices architecture provides the foundation for a modern data management approach that can be used in the next generation of PLM services and platforms.
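To make polyglot persistence concrete, here is a minimal sketch. All class names are hypothetical stand-ins (simple in-memory structures), not any vendor's API: each kind of product data goes to a store tuned for it – revision-controlled documents to a document store, part-to-part relationships to a graph store, and searchable text to an inverted index.

```python
# Hypothetical polyglot-persistence sketch: each class is an in-memory
# stand-in for a specialized database (document, graph, search).
from collections import defaultdict


class DocumentStore:
    """Stand-in for a document (NoSQL) database."""
    def __init__(self):
        self.docs = {}

    def put(self, doc_id, doc):
        self.docs[doc_id] = doc

    def get(self, doc_id):
        return self.docs[doc_id]


class GraphStore:
    """Stand-in for a graph database holding part-to-part links."""
    def __init__(self):
        self.edges = defaultdict(set)

    def link(self, parent, child):
        self.edges[parent].add(child)

    def children(self, parent):
        return self.edges[parent]


class SearchIndex:
    """Stand-in for a full-text search engine (inverted index)."""
    def __init__(self):
        self.index = defaultdict(set)

    def add(self, doc_id, text):
        for word in text.lower().split():
            self.index[word].add(doc_id)

    def search(self, word):
        return self.index[word.lower()]


# Each data domain is routed to the store tuned for it.
docs, graph, search = DocumentStore(), GraphStore(), SearchIndex()
docs.put("P-100", {"name": "Gearbox", "rev": "B"})
graph.link("P-100", "P-200")            # gearbox uses shaft P-200
search.add("P-100", "gearbox steel housing")

print(search.search("Gearbox"))         # {'P-100'}
print(graph.children("P-100"))          # {'P-200'}
```

In a real system, each stand-in would be an independent database behind its own microservice; the point of the sketch is that no single "universal" schema has to absorb all three access patterns.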
Best practices of modern data management use granular data modeling to define data services that can scale and have a specific application focus (e.g., managing documents, managing relationships, change processes, and others). Each of these data services can be highly flexible and configurable and, at the same time, can be used together with the others to build a semantic data model for the entire product lifecycle.
Building a semantic federation layer is an approach that I see being adopted by multiple enterprise manufacturing organizations to answer the question of how to manage complexity and, at the same time, scale their data management initiatives. Modern database technologies and architectures play a key role in this development.
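One way to picture a semantic federation layer is a thin facade that answers a lifecycle question by querying several independent, granular data services and merging their answers into a single view. The sketch below is hypothetical (service names and data are invented for illustration), but it shows the shape of the idea: each service owns its own data, and the federation layer composes them.

```python
# Hypothetical federation-layer sketch: independent granular services
# own their own data; a facade merges them into one semantic answer.

class ItemService:
    """Owns item master data (its own data store)."""
    def __init__(self):
        self._items = {"P-100": {"name": "Gearbox", "state": "Released"}}

    def get(self, item_id):
        return self._items[item_id]


class ChangeService:
    """Owns engineering change records (a separate data store)."""
    def __init__(self):
        self._changes = {"P-100": ["ECO-17"]}

    def open_changes(self, item_id):
        return self._changes.get(item_id, [])


class FederationLayer:
    """Composes answers from independent services into one view."""
    def __init__(self, items, changes):
        self.items, self.changes = items, changes

    def item_view(self, item_id):
        view = dict(self.items.get(item_id))     # copy item master data
        view["open_changes"] = self.changes.open_changes(item_id)
        return view


federation = FederationLayer(ItemService(), ChangeService())
print(federation.item_view("P-100"))
# {'name': 'Gearbox', 'state': 'Released', 'open_changes': ['ECO-17']}
```

Neither service knows about the other; the semantic model lives in the federation layer, which is what lets each service stay simple, focused, and independently scalable.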
PLM Data Management Web Services
In order to take full advantage of PLM, though, companies also need to adopt polyglot persistence and microservices architectures. In the beginning, this can be a difficult challenge because, for the last few decades, companies have been thinking about single PLM platforms rather than a granular set of data management services. However, thinking in terms of PLM data management web services is an approach that can scale and can help IT organizations build a resilient data management strategy to solve the digital transformation problems of both large and small industrial companies.
Each data management web service can be purpose-built to perform specific functionality (e.g., impact analysis, BOM management, change approval, collaborative review, etc.), be open, and support an easy way to recombine the data it manages with other systems and similar services. Together, these multiple data storage technologies create the distributed foundation of a modern PLM architecture, replacing the single-RDBMS paradigm used by legacy PLM platforms.
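As a rough sketch of such a purpose-built, open data service (all names, part numbers, and the payload format here are hypothetical, not a real product's API), a BOM management service might expose its data as plain JSON so that other services – impact analysis, change approval – can consume and recombine it without sharing a database:

```python
# Hypothetical BOM data service: purpose-built for one function
# (BOM management) and open - it exports plain JSON that other
# services can recombine without touching its internal storage.
import json


class BomService:
    def __init__(self):
        self._bom = {}  # parent item -> list of BOM lines

    def add_line(self, parent, child, qty):
        self._bom.setdefault(parent, []).append({"child": child, "qty": qty})

    def lines(self, parent):
        return self._bom.get(parent, [])

    def export_json(self, parent):
        """Open interface: any other service can consume this payload."""
        return json.dumps({"parent": parent, "lines": self.lines(parent)})


bom = BomService()
bom.add_line("P-100", "P-200", 2)
bom.add_line("P-100", "P-300", 4)

payload = bom.export_json("P-100")

# A different service (say, impact analysis) consumes the open payload
# without any knowledge of how BomService stores its data internally:
data = json.loads(payload)
total_parts = sum(line["qty"] for line in data["lines"])
print(total_parts)  # 6
```

In practice the `export_json` call would be an HTTP endpoint, but the design point is the same: the contract between services is open data, not a shared database schema.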
What is my conclusion?
There is a growing need for a better data management architecture to scale up the PLM efforts of large industrial enterprises. Instead of a central single RDBMS, agile, flexible, and scalable PLM data management services (web services) can provide a new foundation for the post-monolithic PLM world. It is not about a single data store (RDBMS); rather, a combination of modern databases (e.g., NoSQL, graph, search, etc.) together with traditional RDBMSs, decoupled to the level of web services with managed data consistency and data integrity, will play a key role. These web services and their underlying data stores will form a future digital thread. These data services will partition data in a logical way, giving direct access to specific sets of information and enabling business solutions built with low-code tools. These services will be the foundation of the new granular and open data architecture of PLM systems. Just my thoughts…
Disclaimer: I’m co-founder and CEO of OpenBOM, developing a digital cloud-native PDM & PLM platform that manages product data and connects manufacturers, construction companies, and their supply chain networks. My opinion can be unintentionally biased.