Earlier today, I attended the PI DX online meeting – Is Digital Thread Doomed Without Open Architecture? The virtual event was announced as follows:
The digital thread is designed to track every process through a product’s life – from initial development through manufacturing to servitization – and is the ultimate communicative framework for collaboration. Digital threads need a continuous information flow and are only as powerful as their data connections. Without a successful digital thread, MBSE and real-time analysis are not possible.
Across organisations and their supply chains, IT architectures must integrate data from systems usually customised by role, product, vendor or organisation. This drastically hampers the ability to integrate and share data in real time – massively obstructing the digital thread.
The presentations covered product development and the use of open-source tools, semantic technologies, and systems engineering to connect applications across the product lifecycle. The meeting concluded with a discussion panel – How Can We Make Open Architecture a Reality? – focused on which modelling methodologies can enable collaboration in the digital thread. The panel was led by David Sherburne, a former IT executive and now a consultant, and included representatives from Airbus, Ford, RIT, openCAESAR and America Credit Union.
The topic resonated because it touched on some of my favorite subjects – data, data sharing, data traceability, and intelligence. The discussion also covered multiple approaches to solving the problem.
Open Hunt for Data
Data is a challenge and a solution at the same time. The agreement across the board in the meeting was that there is an open hunt for data. The reality in every organization is hundreds of applications storing and managing data. How to get data out of these applications and start sharing it in real time is something every company would like to figure out. As much as it sounds like a clear goal, it raises many questions about the target of this data journey. Making data available is a cliche; deciding how to store that data and update it in real time is not a simple task. The pressure on vendors is to keep their repositories open, to provide APIs, and to combine data to produce intelligence.
Single Source of Truth
The truth is distributed. I said it first in my article a few years ago – What is PLM Circa 2020s. The need to extract data from all systems and store it independently raises the question of how to keep data consistent and how to manage a single source of truth. APIs, protocols, data models, tools – all these elements must work together.
Many Tools and No Method?
There is no agreement about tools and methods. The last 20 years of data management technology development created many tools capable of handling data – for management, processing, and intelligence. Semantic Web, graphs, ontologies, systems engineering, data lakes. You name it… The tools can differ, but the biggest challenge is to provide a strategy for how to use these tools to handle the data and how to decouple the data from source applications.
Digital Thread vs Data Piping
Data from multiple tools and repositories must be connected and traceable. This is the Digital Thread for dummies. While the definition is simple, achieving the Digital Thread is harder than it seems. The main reason is the need to harmonize multiple repositories and to stop pumping data between tools while charging everyone for translation. Without doing so, we won’t have a thread – we will have data pipes, with data continuously pumped from one application to another.
From Tools and Pipes to Data Services
The discussion made me think about an opportunity to simplify the process of digital thread creation. The fundamental shift from the current status quo to the desired outcome is to require every application to provide a set of data services that can be consumed in real time. Think about the old paradigm of dynamic link libraries and, later, component technologies. Both allow you to reuse modules and functions rather than implement them again. The same approach can be applied by turning all applications into online data services. Once that is done, these data services become the foundation of the thread connecting them together. But the data continues to live inside these services, which ensures continuity and consistency.
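As a minimal sketch of this idea (the class names, fields, and part numbers below are hypothetical, not taken from any specific PLM product), a legacy application can be wrapped behind a small data-service contract so that consumers query its data on request instead of exporting copies of it:

```python
from dataclasses import dataclass
from typing import Protocol


class DataService(Protocol):
    """A minimal contract every application could expose to the thread."""

    def get(self, item_id: str) -> dict: ...


@dataclass
class LegacyPDMAdapter:
    """Wraps a legacy application's internal store as an online data service.

    Instead of exporting and importing files, consumers query the service,
    so the data keeps living inside the source application.
    """

    store: dict  # stands in for the legacy application's database

    def get(self, item_id: str) -> dict:
        record = self.store[item_id]
        # Translate the internal record into a neutral, shareable shape once,
        # at the service boundary - not inside every consuming tool.
        return {"id": item_id, "description": record["desc"], "revision": record["rev"]}


# Usage: the consumer depends only on the DataService contract, not the tool.
pdm = LegacyPDMAdapter({"PN-100": {"desc": "Bracket", "rev": "B"}})
part = pdm.get("PN-100")
print(part)  # {'id': 'PN-100', 'description': 'Bracket', 'revision': 'B'}
```

The design point is that translation happens once at the service boundary, which is exactly what distinguishes a thread of services from pipes that keep re-pumping and re-translating data.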
What is my conclusion?
Modern online data management technologies provide a way to turn old applications into modern online data services. The data service approach is how the internet works these days. If you need data about the weather, a commodity price, or many other things, online services can give you that data on request. REST APIs make the data available through standard application calls, which in turn can recombine it and present it in the way customers demand. The most interesting opportunity is a data aggregator that collects data from multiple companies, departments, and customers and exposes yet another data service to produce the desired data intelligence. You can start building a digital thread by building data services and connecting pieces of data together. Just my thoughts…
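To illustrate the aggregator idea, here is a sketch in which two independent sources answer requests and an aggregator recombines their answers into a new service, without ever owning the underlying data. All service names, part numbers, and values are invented for illustration; in a real system the callables would be REST endpoints.

```python
# Two independent "data services" - in reality these would be REST endpoints;
# here they are plain callables returning JSON-like dicts.
def cad_service(part_id: str) -> dict:
    catalog = {"PN-100": {"part_id": "PN-100", "mass_kg": 0.4}}
    return catalog[part_id]


def erp_service(part_id: str) -> dict:
    prices = {"PN-100": {"part_id": "PN-100", "unit_cost": 12.5}}
    return prices[part_id]


def aggregated_part_service(part_id: str) -> dict:
    """An aggregator is itself just another data service: it pulls from the
    sources on request and recombines the results into new intelligence."""
    cad = cad_service(part_id)
    erp = erp_service(part_id)
    # Merge the two answers into one combined view of the part.
    return {**cad, **erp}


view = aggregated_part_service("PN-100")
print(view)  # {'part_id': 'PN-100', 'mass_kg': 0.4, 'unit_cost': 12.5}
```

Because the aggregator queries the sources at request time, the combined view is always as fresh as the services behind it – there is no second copy of the data to keep in sync.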
Disclaimer: I’m the co-founder and CEO of OpenBOM, developing a digital-network-based platform that manages product data and connects manufacturers, construction companies, and their supply chain networks. My opinion can be unintentionally biased.