When I started my work in PLM, everything was about the software: what software to develop and which to use. Learning the right database and programming language for building the software seemed essential, while much less attention went to how an application stores information and manages it over time. A growing number of enterprise applications created data monsters. While these monsters still rule the enterprise software world, more and more companies are turning their attention from applications to data.
Future Role of Data
The role of data in PLM systems is growing. Data in the abstract has little value; it becomes valuable when it is involved in processes, sharing, and aggregation, which turn it into a resource that serves a specific process and delivers a specific outcome. For example, a component risk assessment is valuable information that can be obtained from one system (or several) by extracting data, aggregating it, putting it in context, and applying it to a specific use case. Similarly, a bill of materials (BOM) is a piece of information that manufacturing departments, suppliers, and contractors can use contextually to support more efficient business processes.
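The idea of one BOM serving several contexts can be sketched in a few lines of code. This is a minimal illustration, not any vendor's actual data model; the part numbers, suppliers, and costs are invented, and the view functions are hypothetical names:

```python
# Minimal sketch: one BOM dataset, multiple contextual views.
# All part numbers, suppliers, and costs are invented for illustration.

BOM = [
    {"part": "PN-100", "description": "Bracket", "qty": 4,
     "supplier": "Acme", "unit_cost": 2.50},
    {"part": "PN-200", "description": "Motor", "qty": 1,
     "supplier": "VoltCo", "unit_cost": 89.00},
]

def procurement_view(bom):
    """Context for purchasing: who supplies each part and at what total cost."""
    return [
        {"part": i["part"], "supplier": i["supplier"],
         "total_cost": i["qty"] * i["unit_cost"]}
        for i in bom
    ]

def assembly_view(bom):
    """Context for the shop floor: just parts and quantities to pick."""
    return [{"part": i["part"], "qty": i["qty"]} for i in bom]

print(procurement_view(BOM)[0])
# {'part': 'PN-100', 'supplier': 'Acme', 'total_cost': 10.0}
```

The point is that the data is captured once, and each stakeholder gets a projection shaped for their process, rather than a separate copy of the BOM.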
For many years, Single Source of Truth (SSOT) was the main paradigm of PLM data modeling. The digital world is transforming the way we work with information and how we collect and use it. In the past we bought music on a physical vinyl record or CD; later we did the same virtually; now we simply want reliable access to it. Driving worked the same way: we used to memorize the road or carry a paper map. Over the last 30 years we moved from a system that centralized everything in the car (a paper map, a GPS navigator) to a system that gives us the right information at the right time, in context. The system itself doesn't hold all the information but collects it on demand, in a smart way, and makes it available when you need it. Coming back to PLM and product development: while SSOT remains a powerful paradigm, the way it is realized is changing. The attempt to have a SSOT as a file or a database is shifting toward infrastructure and services that organize information and deliver it as needed to all stakeholders in the product development process, including customers. It is a big transformation, and it includes technological adjustments, architecture shifts, changes in user experience, and new business models. Fundamentally, we are moving to a model in which the SSOT is more distributed. Read more here – Is there a single source of truth between multiple applications?
Post-Monolithic Future and Semantic Layers
The future role of data is fascinating and keeps attracting more attention. One of the main aspects of the data management transformation in PLM systems is the move toward a post-monolithic future. While this movement is exciting, it still lacks many details about how it can actually happen. The paradigm of a semantic modeling layer that forms a connected PLM system is drawing growing attention and interest.
The transformation toward open semantic data modeling can take time and will involve multiple steps. As in my driving example, the paradigm shift won't happen overnight. The elements of the new semantic layer will be created by vendors and companies, consolidated, and turned into mature, reusable data elements and services.
The idea of a semantic layer and data mesh found in the strategy developed by Mercedes-Benz Cars is interesting because it demonstrates the shift from a focus on a single PLM platform toward an open platform. In this model, PLM services act as plug-and-play components delivering specific functions, while a common semantic data layer ensures interoperability between data services.
How to build ontologies for the future PLM semantic layer?
Ontologies are essential for managing data. They provide a means of organizing information so that it can be effectively accessed and processed. In the world of product lifecycle management (PLM), ontologies play a critical role in linking different data sets together. Without them, PLM systems would be unable to effectively manage large and complex data sets. This is why it is so important for PLM vendors to build ontologies into their products. By doing so, they can ensure that their customers get the most out of their data management capabilities.
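To make the linking role of an ontology concrete, here is a minimal sketch that represents both the ontology and the data as plain subject-predicate-object triples. The concepts (`Part`, `Assembly`, `hasComponent`) and instances are invented examples, and no RDF library is assumed:

```python
# Minimal sketch of an ontology as subject-predicate-object triples,
# with invented PLM concepts; no RDF library required.

ONTOLOGY = {
    ("Part", "subClassOf", "Item"),
    ("Assembly", "subClassOf", "Item"),
    ("hasComponent", "domain", "Assembly"),
    ("hasComponent", "range", "Part"),
}

DATA = {
    ("gearbox-1", "type", "Assembly"),
    ("shaft-7", "type", "Part"),
    ("gearbox-1", "hasComponent", "shaft-7"),
}

def types_of(entity, data, ontology):
    """Return the entity's direct type plus superclasses (one-level subClassOf walk)."""
    direct = {o for s, p, o in data if s == entity and p == "type"}
    supers = {o for s, p, o in ontology
              for t in direct if s == t and p == "subClassOf"}
    return direct | supers

print(sorted(types_of("gearbox-1", DATA, ONTOLOGY)))
# ['Assembly', 'Item']
```

Even this toy version shows the mechanism: because both records are described against the same shared concepts, a query can treat a `Part` from one dataset and an `Assembly` from another as related `Item`s.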
One of my favorite authors in semantic data management, Dean Allemang, the author of the book Semantic Web for the Working Ontologist, published an interesting article – How to get started on an ontology without really trying. Check it out – I found it very interesting.
The PLM industry and the broader engineering and manufacturing community have made multiple efforts to develop standards. Some of these efforts produced good outcomes and are in use in the industry. However, none of these initiatives, in my view, created a process for building open, reusable data-modeling building blocks that can be shared across multiple manufacturing companies.
Here is an interesting passage from Dean's article that could serve as a pivotal approach for building ontologies in the PLM industry and for future applications of knowledge graphs.
If we step back a moment and think about the role that an ontology has in an enterprise knowledge graph, it is to describe the common concepts that are shared among the myriad data resources in a large enterprise. For example: do many of our data resources refer to something that we call a “Patient” or maybe a “Company”? Then the ontology is the place to describe that concept. Does this enterprise need to understand fine distinctions between different kinds of patients? Or different forms of legal entities? Then the ontology is the place to record these differences. The ontology plays the role of a common reference point so that one part of the enterprise can know what another is talking about.
When we think of it this way, building an ontology by talking to people about the domain isn’t addressing the issues that a semantic layer needs to address. We must know what kinds of things the various data resources — and applications! — in the enterprise need to talk about, and what they need to say about them.
But I have a secret to share with you; in my practice, I have rarely followed a formal use-case approach, identifying roles and telling their user stories. Why not? Because ontology engineering didn’t invent use cases and user stories; a lot of other folks use them. Which folks? Enterprise application developers, data architects, and, probably most relevant to this story, data warehousers.
So here’s the shortcut, which I have used on many occasions; find one of these ‘legacy’ warehouses, and study its schema. Depending on the foundational technology of the warehouse, that schema may be easy to query or maybe not. But regardless of how much effort you need to get that schema, it is bound to be worth it. Yes, your first ontology can simply be an expression of the schema that you take from a successful — if a bit outdated — data warehouse.
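Dean's shortcut can be sketched in code. The example below creates a toy in-memory "warehouse" with SQLite and then reads its schema to propose candidate ontology terms; the tables, columns, and the class/property naming convention are all my invented assumptions, not a prescribed method:

```python
import sqlite3

# Sketch of the shortcut: bootstrap candidate ontology terms from an
# existing database schema. The tables and columns below are invented.

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE part (part_number TEXT, description TEXT, mass REAL);
CREATE TABLE supplier (name TEXT, country TEXT);
""")

def schema_to_ontology(conn):
    """Each table becomes a candidate class; each column a candidate property."""
    triples = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        cls = table.capitalize()
        triples.append((cls, "a", "Class"))
        for col in conn.execute(f"PRAGMA table_info({table})"):
            triples.append((col[1], "domain", cls))  # col[1] is the column name
    return triples

for triple in schema_to_ontology(conn):
    print(triple)
```

The output is only a first draft of an ontology; a modeler would still merge duplicate concepts and add distinctions, but the starting vocabulary comes for free from a schema the enterprise already trusts.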
I found Dean's recommendation quite fascinating because it suggests a way to create the fundamental building blocks of ontologies automatically, by extracting them from existing data, instead of starting from scratch. Once created, they can be improved, and this brings me to my second point – PLM platforms with flexible data schemas. Flexibility is a key differentiator, and I'd recommend checking this aspect of a PLM platform before making any decision. It is especially important for SaaS PLM tools: many of them, particularly the early ones, were developed as simplified services without much thought about the flexibility and extensibility of their data schemas.
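What a flexible data schema means in practice can be shown with a small sketch. Instead of fixed table columns, item properties live in an open property bag that can be extended at runtime. The class and field names here are illustrative and not any vendor's actual API:

```python
# Sketch of a flexible (schema-less) item model: properties live in a
# dict, so new attributes can be added without migrating a fixed table.
# Class and field names are illustrative, not any vendor's actual API.

class Item:
    def __init__(self, item_id, **props):
        self.item_id = item_id
        self.props = dict(props)        # open property bag

    def set(self, key, value):
        self.props[key] = value         # extend the schema at runtime

part = Item("PN-100", description="Bracket", mass_kg=0.4)
part.set("surface_finish", "anodized")  # added later, no migration needed

print(sorted(part.props))
# ['description', 'mass_kg', 'surface_finish']
```

A rigid schema would require a database migration for the new attribute; a flexible one absorbs it on the fly, which is exactly the extensibility worth checking before committing to a platform.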
What is my conclusion?
Product lifecycle management tools are moving away from monolithic PLM architectures toward open systems capable of supporting a product semantic representation built on semantic web technology. Such an approach supports semantic interoperability and the extensibility of information systems. To answer the demands of industrial companies, modern PLM systems will turn away from old-fashioned SQL database architectures toward flexible data modeling approaches that include product data semantics covering the entire product lifecycle, and they will be ontology-based, creating knowledge graphs as a foundation for digital threads and flexible process orchestration.
One of the most critical capabilities of these systems will be the ability to capture an ontology from existing data systems and turn it into a foundation for modern PLM implementations. Such an ontological approach will change the main principles of PLM data management and open a new era of product lifecycle management built on a semantic technology foundation. Just my thoughts…
Disclaimer: I’m co-founder and CEO of OpenBOM developing a digital cloud-native PDM & PLM platform that manages product data and connects manufacturers, construction companies, and their supply chain networks. My opinion can be unintentionally biased.