In my earlier blog I demystified the notion of “monolithic” PLM marketing and shared some technological aspects related to PLM system architectures. The topic is not simple from any standpoint: technological, conceptual and even emotional. It is not a surprise that PLM vendors have their own definitions of monolithic applications. Manufacturing companies demand that PLM technologies and products grow into innovation platforms, and that will add many topics to the discussion table of PLM IT architects, analysts and vendors. It also made me think more about PLM and future scalability. Below I will outline three dimensions along which PLM systems can scale. These dimensions are not specific to PLM systems, but I added some examples and context so manufacturing companies and PLM vendors can apply them to their work.
Horizontal scaling
The problem of horizontal scaling is probably the oldest one, and PLM vendors are very familiar with the need to add more computing resources (CPUs), memory and storage to PLM servers. Adding resources is not simple and cannot be done without limits (technical and financial). In monolithic applications, horizontal scale can be a big problem. To solve it, you can consider running multiple instances of the application on different servers behind a load balancer. Not all applications are ready to scale in such a way. Another potential drawback is the need to run all instances against the same instance of data. The data model is usually the most complex element of a PLM system, and regardless of the product, these data models are rarely designed to work with multiple application instances.
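To make the idea concrete, here is a minimal sketch of horizontal scaling: several identical application instances behind a round-robin load balancer, all pointing at the same shared database. The names (PLMAppInstance, RoundRobinBalancer, the database URL) are hypothetical and only illustrate why the shared data backend becomes the limiting factor.

```python
# Hypothetical sketch: identical app instances behind a round-robin balancer.
import itertools

class PLMAppInstance:
    def __init__(self, name, shared_db_url):
        self.name = name
        # Every instance points at the SAME database instance -- the
        # constraint mentioned above for monolithic data models.
        self.shared_db_url = shared_db_url

    def handle_request(self, request):
        return f"{self.name} served '{request}' using {self.shared_db_url}"

class RoundRobinBalancer:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        # Pick the next instance in turn; the app tier scales out,
        # but the data tier stays shared.
        return next(self._cycle).handle_request(request)

if __name__ == "__main__":
    db = "postgres://plm-db:5432/plm"  # hypothetical shared backend
    balancer = RoundRobinBalancer(
        [PLMAppInstance(f"app-{i}", db) for i in range(3)]
    )
    for r in ["GET /items/BOM-100", "GET /items/CAD-7", "POST /change-orders"]:
        print(balancer.route(r))
```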
Vertical scaling
Typically, vertical scaling means adding resources (CPUs, memory) to servers. But this approach has limits. If a monolithic system is too big to scale, you can consider splitting it into components. In this way, the application is split into a set of services. Each service (mini-application) runs independently and is responsible for a specific function. There are multiple ways to decompose an application, and for new development it can be done without significant problems. However, doing it for existing monolithic software can be a big deal. PLM systems developed 15-25 years ago can be very difficult to separate into services.
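A minimal sketch of such functional decomposition is below, assuming hypothetical service boundaries (ItemService, ChangeOrderService). Each "mini-application" owns one function and could, in principle, be deployed and scaled independently; the hard part in an existing monolith is drawing these boundaries in code and data that were never separated.

```python
# Hypothetical sketch: a monolith decomposed into small, single-purpose services.
class ItemService:
    """Owns item master data only."""
    def get_item(self, item_id):
        return {"id": item_id, "type": "item"}

class ChangeOrderService:
    """Owns engineering change orders only; talks to ItemService over an API."""
    def __init__(self, item_service):
        self.item_service = item_service

    def create_change_order(self, item_id, description):
        item = self.item_service.get_item(item_id)
        return {"item": item, "description": description, "status": "open"}

if __name__ == "__main__":
    items = ItemService()
    changes = ChangeOrderService(items)
    print(changes.create_change_order("BOM-100", "Replace fastener"))
```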
Data scaling
One of the most interesting aspects of scale is data scale, related to the development of granular data sets. In such an approach, each service is responsible for only a subset of data, and specific components of the system orchestrate requests to a specific server or data element. Most PLM systems are designed with a "Database = Organization" state of mind, and in such an architecture splitting data into functional segments can be a hard task. Cloud-based systems usually have more options to scale the data backend compared to on-premise systems managed by company IT.
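Here is a minimal sketch of that idea, assuming hypothetical functional segments (items, documents, change orders). A small router sends each request to the backend that owns that slice of data, instead of one "Database = Organization" store.

```python
# Hypothetical sketch: route each data request to the backend owning that segment.
class DataRouter:
    def __init__(self, backends):
        # Maps a functional segment name to its own datastore URL.
        self.backends = backends

    def backend_for(self, segment):
        try:
            return self.backends[segment]
        except KeyError:
            raise ValueError(f"No backend owns segment '{segment}'")

if __name__ == "__main__":
    router = DataRouter({
        "items": "postgres://items-db:5432/items",
        "documents": "s3://plm-documents",
        "change_orders": "postgres://eco-db:5432/eco",
    })
    print(router.backend_for("items"))
    print(router.backend_for("change_orders"))
```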
What is my conclusion? The demand to scale PLM systems is huge. A modern manufacturing company is a global organization with a high level of complexity in data, deployment, relationships and processes. The latest development of IoT and related technologies adds a special level of scale problem because of the significant amount of data processed by systems. Scaling existing PLM systems will be a high-priority task for PLM vendors. Manufacturing companies have to check system architectures before planning to deploy and scale existing PLM systems. Existing monolithic PLM systems have limits that cannot always be resolved without significant architectural changes. It will be an interesting and busy time for PLM architects and technologists. Just my thoughts…
Best, Oleg
Want to learn more about PLM? Check out my new PLM Book website.
Disclaimer: I’m co-founder and CEO of openBoM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups and supply chain. My opinion can be unintentionally biased.