How to break the limits of existing PLM architectures

The conventional wisdom among many people in the PLM domain is that technology is not the main problem in the PLM industry. At the same time, PLM vendors have significant trouble convincing customers to adopt new versions of their products. Manufacturing companies replace PLM platforms roughly every 10 years (some would even put the benchmark at 20 years). Measured against the pace of technological improvement, even ten years is enough to leave a manufacturing company in the dinosaur era in terms of the technology it is using.

My attention was caught by the article The Past and Future of Systems Management written by Ben Horowitz. Take some time during the weekend and read it. If you have more time, I can also recommend Ben's book – The Hard Thing About Hard Things. I found Ben's insight about new cloud-based architecture very important for understanding the future development of PLM products. According to Horowitz, traditional systems management would not work for modern, massive, cloud-based architectures. In fact, it would not work properly for cloud-based architectures of any scale. One of the most interesting points I captured is the move of system architecture "from servers to services" and the fact that applications are now collections of micro-services.

Traditional systems are server centric — Even relatively modern systems management products like New Relic treat servers as sacred resources which must be kept alive, but Facebook loses servers every day and it doesn’t matter. Facebook doesn’t care about servers; they care about services. Knowing when a cluster of services that provides, for example, an identity service is out of capacity is critical, but getting paged in the middle of the night because you lost one server in a cluster of 20 is asinine.

Applications are now a collection of micro-services — These micro services are often managed by separate teams with all sorts of upstream and downstream dependencies. Having a solution that tracks all the relevant metrics across all the services fosters a much more collaborative environment where teams can communicate with one another (versus logs, where only the developer who wrote the app can really understand what’s going on).
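To make the "from servers to services" point concrete, here is a minimal sketch of service-centric monitoring. Everything in it (class names, the identity cluster, the capacity threshold) is hypothetical, not any vendor's API; the point is simply that the alert fires on service capacity, not on the loss of an individual server.

```python
# Hypothetical sketch: service-centric monitoring pages on cluster
# capacity, not on the death of any single server.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    healthy: bool

@dataclass
class ServiceCluster:
    service: str           # e.g. "identity"
    servers: list[Server]
    min_capacity: float    # fraction of healthy servers needed to serve load

    def capacity(self) -> float:
        healthy = sum(1 for s in self.servers if s.healthy)
        return healthy / len(self.servers)

    def needs_page(self) -> bool:
        # Losing 1 server out of 20 wakes nobody up; only page when
        # the service itself is running short on capacity.
        return self.capacity() < self.min_capacity

identity = ServiceCluster(
    service="identity",
    servers=[Server(f"id-{i:02d}", healthy=(i != 7)) for i in range(20)],
    min_capacity=0.6,
)
print(identity.capacity())    # 0.95 -- one dead server, and it doesn't matter
print(identity.needs_page())  # False
```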

It made me think again about existing PLM technologies and architectures. Most of them are 15-20 years old and completely server and database centric. A few years ago, I explained that in my Future of PLM Databases article. In my view, the end of the single PLM database architecture is coming. New PLM system architectures can change the way customers adopt and manage their PLM environments. Here is an idea to think about.

All existing PLM products are developed on top of existing database technology stacks. There is nothing wrong with that, but here is the problem – scale. The amount of data PLM systems have to handle is growing in both scale and reach. Manufacturing companies depend on a significant amount of information originated and maintained outside the organization – product catalogs, supplier data and other reference information. In addition, in many situations the data is owned by multiple companies, not by a single OEM. How will traditional PLM platforms handle that?
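A hedged sketch of what stepping outside the single RDBMS could look like: a part record assembled as a federation of data services owned by different companies. All the service names, fields and values below are invented for illustration; no real PLM system or supplier API is implied.

```python
# Hypothetical sketch: a part record assembled from data services owned
# by different companies, instead of a row in one PLM database.
from abc import ABC, abstractmethod

class PartDataSource(ABC):
    name: str  # namespace for this owner's answer

    @abstractmethod
    def lookup(self, part_number: str) -> dict:
        ...

class OemEngineeringService(PartDataSource):
    name = "oem_engineering"

    def lookup(self, part_number: str) -> dict:
        # Stand-in for a call to the OEM's own engineering data service.
        return {"revision": "C", "status": "released"}

class SupplierCatalogService(PartDataSource):
    name = "supplier_catalog"

    def lookup(self, part_number: str) -> dict:
        # Stand-in for a supplier-hosted catalog outside the OEM firewall.
        return {"price": 12.40, "lead_time_days": 30}

def resolve_part(part_number: str, sources: list[PartDataSource]) -> dict:
    # Each owner keeps its own data; the "record" is a federation of
    # answers, not a single-database row.
    return {"part_number": part_number,
            **{s.name: s.lookup(part_number) for s in sources}}

print(resolve_part("PN-1042", [OemEngineeringService(), SupplierCatalogService()]))
```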

What is my conclusion? The conventional wisdom of PLM architecture and implementation is to put all information in a single database. That must change. Modern engineering and manufacturing environments are different. They look more like a network of services than a single sacred PLM database. New product architectures and technologies should come to handle that. Just my thoughts…

Best, Oleg

Image courtesy of ddpavumba at FreeDigitalPhotos.net
