PLM modeler software wasteland

My article earlier this week about the complexity and challenges of MBSE and PLM models raised a good number of comments and online conversations. One of my favorites was a discussion with Pawel Chadzynski, Senior Director, Product Management at Aras Corporation, who was also an author of the MBSE presentation. You can catch up on most of this discussion in the LinkedIn copy of my blog – Achilles Heel of MBSE and PLM modeling.

One of the topics in our debate was "central PLM" vs. "data links" paradigms. While we are still debating the difference and the purpose online, here is an interesting passage Pawel used to explain the nuances of data links vs. content-aware CM services.

Oleg Shilovitsky Don’t worry about English nuances, it is my 2nd language after all 🙂 so ESL is already part of PLM! In any case English is a living thing, is it not – so I’m having a bit of fun here with terminology. In that spirit a link is more like a URL which is not aware of the CM aspects of the connection or the items that it connects. Relationship on the other hand is aware of all of that. The importance of a central PLM platform (has to have the right architecture as opposed to a BOM managing PDM) is that it already has all the underlying CM services for everything that can be modeled using that platform (not only BOM structures). So yes, the central PLM model will have to have “relationships” to other databases. What else can be done? Create yet another database for links only? Or create direct links between various databases that don’t themselves understand CM? So now one has to create new CM/lifecycle services around those links in the link database or in the linked databases? How is that simplifying implementation of CM and lifecycle management (traceability) across all abstractions and states of a complex system/product? That is a question for you :). Sounds like a great subject to explore with a panel discussion!
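To make the link vs. relationship distinction more concrete, here is a minimal sketch in plain Python (all field names are my own illustration, not any vendor's schema): a "link" carries only an address, while a "relationship" carries the CM context – lifecycle state, effectivity, change order – that a CM service needs to reason about the connection.

```python
from dataclasses import dataclass
from typing import Optional

# A "link" in Pawel's sense: just an address, no CM awareness.
@dataclass
class Link:
    href: str  # e.g. a URL pointing into another database

# A "relationship" carries the CM context of the connection itself.
# Field names are illustrative only.
@dataclass
class Relationship:
    from_item: str                         # id of the owning item (e.g. a BOM line)
    to_item: str                           # id of the connected item (e.g. a requirement)
    lifecycle_state: str                   # state of the relationship ("in work", "released", ...)
    effective_from: Optional[str] = None   # effectivity window
    effective_to: Optional[str] = None
    change_order: Optional[str] = None     # the ECO that created or modified it

# A bare link cannot answer CM questions; a relationship can.
rel = Relationship("bom-line-42", "req-7", "released", change_order="ECO-1001")
print(rel.lifecycle_state)  # "released" – information a plain URL never carries
```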

Here is the thing… I can see Pawel's point about links and software that "don't understand" the model. To solve this problem, PLM vendors are developing complex modeling services that help define data and maintain operations on that data. A very logical thing to do. And, of course, if you have one modeler, why do you need another one? "Mine is better," says every PLM vendor. Meantime, the complexity keeps growing, and PLM modeling engines expose a fundamental flaw of the existing PLM paradigm – control over the data and complex software engines.

Let me introduce you to the topic of software wasteland. I can see examples of software wasteland everywhere these days. The article by David McComb – Are You Spending Way Too Much on Software? – brings up the topic of software efficiency and alternatives for developing data-aware software. I have known David for a long time from his work in semantic technologies and data models.

The following passage brilliantly describes the current PLM software problem in many organizations.

Companies are allowing their data to get too complex by independently acquiring or building applications. Each of these applications has thousands to hundreds of thousands of distinctions built into it. For example, every table, column, and other element is another distinction that somebody writing code or somebody looking at screens or reading reports has to know. In a big company, this can add up to millions of distinctions.

But in every company I’ve ever studied, there are only a few hundred key concepts and relationships that the entire business runs on. Once you understand that, you realize all of these millions of distinctions are just slight variations of those few hundred important things.

In fact, you discover that many of the slight variations aren’t variations at all. They’re really the same things with different names, different structures, or different labels. So it’s desirable to describe those few hundred concepts and relationships in the form of a declarative model that small amounts of code refer to again and again.
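As a minimal sketch of what such a declarative model could look like (plain Python, with concept names I made up for illustration): the few key concepts and relationships live as data that small amounts of generic code interpret, instead of being hard-coded into thousands of tables and screens.

```python
# Hypothetical declarative model: key concepts and relationships described as data.
# Generic code reads this model instead of hard-coding each "distinction".
MODEL = {
    "concepts": {
        "Part":        {"attributes": ["number", "name", "revision"]},
        "Document":    {"attributes": ["number", "title", "revision"]},
        "ChangeOrder": {"attributes": ["number", "status"]},
    },
    "relationships": {
        "uses":        {"from": "Part", "to": "Part"},       # BOM structure
        "describedBy": {"from": "Part", "to": "Document"},
        "affectedBy":  {"from": "Part", "to": "ChangeOrder"},
    },
}

def validate(instance: dict) -> bool:
    """Generic check that an instance only uses declared concepts and attributes."""
    concept = MODEL["concepts"].get(instance.get("type"))
    if concept is None:
        return False
    return all(attr in concept["attributes"] for attr in instance.get("data", {}))

print(validate({"type": "Part", "data": {"number": "P-100", "revision": "A"}}))  # True
print(validate({"type": "Widget", "data": {"color": "red"}}))                    # False
```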

CM-aware services are a great example of software that contains too much specific logic that is hard to discover, maintain, and integrate with. Each PLM "modeling software" comes with its own modeling semantics that you need to learn, adopt, and, most importantly, map onto all the other paradigms used by different pieces of enterprise, engineering, and manufacturing software. Each one is owned by a vendor (or vendors) and has sponsors and supporters in different parts of the organization. The situation is even worse across multiple organizations. The semantic difference between "link" and "relationship" requires specific language skills. Complex logic embedded into software requires understanding and maintenance. It makes many engineers and IT specialists run away from PLM software because of its complexity.

For the last 20 years, the PLM industry has navigated from simpler to more sophisticated and complex models. Current platforms cost too much and their sustainability is questionable. Even though some vendors, like Aras, are developing highly configurable modeling engines, the internal core and logic is owned by Aras and not open-sourced. So, outside of Aras, or in case of an Aras acquisition, companies will face the same problems of legacy data, outdated software code, and growing maintenance complexity. The cost of PLM modeling engines is too high. Conceptual models are complex and create a barrier between "PLeMish-speaking people" and the rest of the organization. A successfully implemented PLM modeler reminds me of an industrial building with a central power shaft powering everything in the building. This is the paradigm of PLM connecting every single piece of information into a single version of truth in the PLM model.

What could be a new concept to replace the PLM modeler? Data is the new oil, and focusing on data and data representation can give us a better solution. Here is a concept proposed by Dave McComb in the same article:

Software is just a means to an end. A business runs on data, and you make decisions based on data. You should be employing software to make better use of that data and create new data. You’ll need to unearth and inventory the rules in your enterprise, then determine which rules are still valid. The rules you keep — the few hundred key concepts and relationships — need to be declared at the data layer so they can be updated, reused, and managed. If you leave them buried in the application code, they won’t be visible or replaceable. In older systems, huge percentages of all of these buried rules are obsolete. They specified something that was true years ago. You don’t do things this way anymore, but you’re still supporting all that code and trying to manage the data associated with it. That’s just waste.
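As an illustration of "rules declared at the data layer" (a sketch under my own assumptions, not McComb's code): a rule lives as a data record that a small generic engine evaluates, so it can be inspected, updated, or retired without touching application code.

```python
# Business rules stored as data, not buried in application code (names are hypothetical).
RULES = [
    {"id": "R1", "applies_to": "Part",
     "condition": {"field": "state", "equals": "released"},
     "forbid": "edit_bom", "active": True},
    {"id": "R2", "applies_to": "Part",
     "condition": {"field": "state", "equals": "obsolete"},
     "forbid": "add_to_bom", "active": False},  # retired rule: flip a flag, no code change
]

def is_allowed(item: dict, action: str) -> bool:
    """Generic evaluator: deny the action if any active rule forbids it."""
    for rule in RULES:
        if not rule["active"] or rule["applies_to"] != item["type"]:
            continue
        cond = rule["condition"]
        if item.get(cond["field"]) == cond["equals"] and rule["forbid"] == action:
            return False
    return True

part = {"type": "Part", "state": "released"}
print(is_allowed(part, "edit_bom"))    # False – blocked by R1
print(is_allowed(part, "add_to_bom"))  # True  – R2 is inactive
```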

How do we translate this concept into a practical approach? The PLM industry needs to create a modeling paradigm, and the simpler the modeling paradigm, the better. RDF/OWL looked promising a few years ago, but complexity practically killed it in implementation. Modern graph paradigms can be a possible solution – a modeling paradigm with a persistent and granular storage system that can scale beyond the monsters of single relational database schemas. It will become a data foundation.
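Here is one way such a granular graph foundation could look, sketched as a toy triple store in plain Python (any resemblance to a real product is accidental): every fact is a small, independently stored statement, and structures like a BOM emerge from queries rather than from a monolithic schema.

```python
# A toy triple store: each fact is a (subject, predicate, object) statement.
# Granular storage like this can be partitioned and scaled, unlike one big schema.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s) and (p is None or tp == p) and (o is None or to == o)]

# A tiny BOM expressed as granular facts instead of rows in a fixed schema.
add("part:P-100", "type", "Part")
add("part:P-200", "type", "Part")
add("part:P-100", "uses", "part:P-200")
add("part:P-100", "describedBy", "doc:D-55")

# The BOM structure is just a query over the graph.
print(query(s="part:P-100", p="uses"))  # [('part:P-100', 'uses', 'part:P-200')]
```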

But what will be the role of software? How will applications actually be developed and perform? This is the time to bring micro-service architecture into play. But learn from the mistakes of the last few years of RESTful API publishing by PLM vendors. Those services are mapped to existing PLM modeling engines and are highly dependent on the same complexity layer. Micro-services must encapsulate data storage and work with the data modeling foundation. These services will provide a set of building blocks capable of handling data rules embedded into the semantic layer of data. Granularity is key. By making data models transparent and self-contained, we can simplify application delivery and maintenance.
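A minimal sketch of what such a self-contained micro-service could look like, using Flask for brevity (endpoint and field names are hypothetical): the service owns its own storage and exposes granular, data-shaped operations instead of delegating to a central PLM modeling engine.

```python
# A self-contained micro-service: it owns its storage and exposes granular data operations.
from flask import Flask, jsonify, request

app = Flask(__name__)
parts = {}  # the service's own storage; in production this would be its own database

@app.route("/parts/<part_id>", methods=["GET"])
def get_part(part_id):
    part = parts.get(part_id)
    return (jsonify(part), 200) if part else (jsonify({"error": "not found"}), 404)

@app.route("/parts/<part_id>", methods=["PUT"])
def put_part(part_id):
    parts[part_id] = request.get_json()  # the data model itself is the contract
    return jsonify(parts[part_id]), 200

if __name__ == "__main__":
    app.run(port=5000)
```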

An important question to ask – what will happen with existing PLM modeling and PLM software? Some of it will continue its lifecycle trajectory and will be eliminated as waste over time. However, re-architecting PLM modelers by applying granular data modeling principles, allowing them to scale and be reused as micro-services, can give them a longer life. This is where PLM architects should be focusing more these days.

What is my conclusion? We are coming to the logical end of single PLM modelers with flexible data models. The complexity and cost are too high. The lifecycle of PLM software is too long to make it sustainable, and manufacturing businesses are struggling with the concepts of overlays, bi-modal approaches, and all the other ways to create a new application layer on top of existing applications. Data development can help bring a sustainable data modeling layer supported by application micro-services. It will allow manufacturing companies to develop apps based on data, each focused on specific functions. These micro-services can scale and be developed beyond "single vendor PLM software." It is a kind of app store, but much more robust and sophisticated. Each of these applications will update transparent and open data models (databases), so each of them interacts only with data. It will simplify integration of application logic because data will be the integrating layer, as opposed to many "integration services." The focus will be on data integration, not application integration. So, getting back to Pawel's comment, data will be semantically rich and will provide a way to create a data-driven connected layer instead of developing complex PLM modelers. PLM modelers will be replaced with data layers integrating data across multiple domains and organizations. Just my thoughts…

Best, Oleg

Want to learn more about PLM? Check out my new PLM Book website.

Disclaimer: I’m co-founder and CEO of OpenBOM developing cloud based bill of materials and inventory management tool for manufacturing companies, hardware startups and supply chain. My opinion can be unintentionally biased.

 



  • That “relationship” has to include one system being able to ensure the other system persists the data. It is one thing to know the other data exists and have a relationship with it. It is something else entirely to ensure that data exists for as long as you care about the configurations of which it is a part.

  • beyondplm

    Stan, thanks for the clarification! Does it mean two connected things aren’t entirely independent? Does it mean the first one (e.g. a configuration) doesn’t exist without the second one (e.g. a Part)?

  • You want the elements of configurations you care about to live on for as long as you need them.

    If there is a single, central source of truth you can manage it that way. But what if some of the components that are part of a configuration are managed by a system to which you only link? How will that system know that that component needs to persist in the same link location for the duration necessary?

    To me that is why linking is not enough. The systems need to have some way to communicate the purpose of the link, the persistence required, etc.

  • beyondplm

    Stan, it is possible to make a link hold information about everything. A link can change or even die if the objects it connects are dead or changing. However, the question is how to manage consistency of information across two independent systems. According to Pawel, lifecycle service awareness automatically brings the need for a “single PLM service”.

  • Don Rice

    What was not clarified was whether the micro-services apps would be from the same vendor. From the tone of the discussion, I assumed they did not have to be. On the other hand, if they are not, compatibility issues arise as software updates are implemented.

  • beyondplm

    Don, this is a great question! In my view, micro-services can come from multiple vendors. It is possible to hit the same compatibility problem, but that is a question of planning and business modeling. The best way to think about it is to compare these micro-services with other examples of development using open source and cloud infrastructure – billing services, logging, account management, chat and collaboration. You can find many services you can integrate into your application these days. Why doesn’t the compatibility problem exist there? Because of business models and the need to use them. Could the same happen in PLM? I think so, but the industry should reach a certain level of maturity to make it happen.