The end of single PLM database architecture is coming

The complexity of PLM implementations is growing. We have more data to manage and we need to process information faster. In addition, cloud solutions are changing the underlying technological landscape. PLM vendors are no longer building software to be distributed on CD-ROMs and installed by IT on corporate servers. Vendors are moving towards different types of clouds (private and public) and selling subscriptions instead of perpetual licenses. For vendors it means operating data centers and optimizing data flow, cost and maintenance.

How do we implement future cloud architecture? This question is coming into focus and, obviously, raising lots of debates. The InfoWorld cloud computing article The right cloud for the job: multi-cloud database processing speaks about how cloud computing is influencing what is at the core of every PDM and PLM system – database technology. The main message is to move towards distributed database architecture. What does it mean? I’m sure you are familiar with the MapReduce approach. Simply put, the opportunity to use cloud infrastructure to bring up multiple servers and run parallel queries is real these days. The following passage speaks about the idea of optimizing data-processing workloads by leveraging cloud infrastructure:
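To make the MapReduce idea concrete, here is a minimal sketch in Python. The data and part names are hypothetical, and threads stand in for the separate cloud servers that would each own a shard of the data – the point is only the map-in-parallel, then reduce pattern:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

# Hypothetical part-usage records, split across three "shards" the way a
# distributed cloud database might partition them across servers.
shards = [
    [{"part": "bolt", "qty": 4}, {"part": "nut", "qty": 4}],
    [{"part": "bolt", "qty": 2}, {"part": "panel", "qty": 1}],
    [{"part": "nut", "qty": 6}],
]

def map_shard(shard):
    """Map step: each server aggregates quantities per part for its own shard."""
    counts = {}
    for row in shard:
        counts[row["part"]] = counts.get(row["part"], 0) + row["qty"]
    return counts

def merge_counts(a, b):
    """Reduce step: merge two partial aggregations into one result."""
    merged = dict(a)
    for part, qty in b.items():
        merged[part] = merged.get(part, 0) + qty
    return merged

# Run the map step in parallel (one worker per shard), then reduce the partials.
with ThreadPoolExecutor(max_workers=len(shards)) as pool:
    partials = list(pool.map(map_shard, shards))
totals = reduce(merge_counts, partials, {})
print(totals)  # {'bolt': 6, 'nut': 10, 'panel': 1}
```

In a real distributed database the map step runs on many machines and the reduce step may itself be distributed, but the shape of the computation is the same.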

In the emerging multicloud approach, the data-processing workloads run on the cloud services that best match the needs of the workload. That current push toward multicloud architectures provides the ability to place workloads on the public or private cloud services that best fit the needs of the workloads. This also provides the ability to run the workload on the cloud service that is most cost-efficient.

For example, when processing a query, the client that launches the database query may reside on a managed service provider. However, it may make the request to many server instances on the Amazon Web Services public cloud service. It could also manage a transactional database on the Microsoft Azure cloud. Moreover, it could store the results of the database request on a local OpenStack private cloud. You get the idea.
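The placement logic the quote describes can be sketched in a few lines. The provider names and "fit" scores below are purely illustrative assumptions, not real capabilities or pricing – the sketch only shows the idea of routing each workload to whichever cloud scores best for it:

```python
# Hypothetical fit scores (0..1) of each cloud service for each workload type.
# Providers and numbers are illustrative assumptions only.
WORKLOAD_FIT = {
    "parallel_query":   {"aws_public": 0.9, "azure": 0.7, "openstack_private": 0.4},
    "transactional_db": {"aws_public": 0.6, "azure": 0.9, "openstack_private": 0.5},
    "result_storage":   {"aws_public": 0.5, "azure": 0.5, "openstack_private": 0.9},
}

def place(workload: str) -> str:
    """Pick the cloud with the best fit score for a given workload."""
    fits = WORKLOAD_FIT[workload]
    return max(fits, key=fits.get)

plan = {w: place(w) for w in WORKLOAD_FIT}
print(plan)
# {'parallel_query': 'aws_public', 'transactional_db': 'azure',
#  'result_storage': 'openstack_private'}
```

A production multi-cloud scheduler would weigh cost, latency, data gravity and compliance, but the decision structure is the same table lookup plus selection.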

However, not so fast and not so simple. What works for web giants might not work for enterprise data management solutions. The absolute majority of PLM systems rely on a single-RDBMS architecture. This is the fundamental underlying architectural approach. Most of these solutions use a “scale up” architecture to achieve data capacity and performance. Horizontal scaling of PLM solutions today is mostly limited to leveraging database replication technology. PLM implementations are mission critical for many companies. To change that would not be simple.
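The replication-based horizontal scaling mentioned above typically means one writable primary plus read-only replicas. Here is a toy sketch of that routing pattern (server names are hypothetical; a real system would sit behind a driver or proxy, not a class like this):

```python
import itertools

class ReplicatedDatabase:
    """Toy model of 'scale up + replication': one writable primary,
    read queries spread round-robin over read-only replicas."""

    def __init__(self, primary: str, replicas: list[str]):
        self.primary = primary
        self._replica_cycle = itertools.cycle(replicas)

    def route(self, query: str) -> str:
        # Writes must go to the single primary; reads can fan out to replicas.
        if query.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return self.primary
        return next(self._replica_cycle)

db = ReplicatedDatabase("primary-db", ["replica-1", "replica-2"])
print(db.route("SELECT * FROM parts"))       # replica-1
print(db.route("UPDATE parts SET qty = 5"))  # primary-db
print(db.route("SELECT * FROM boms"))        # replica-2
```

The limitation is visible in the sketch itself: reads scale with the number of replicas, but every write still funnels through one primary – which is exactly why replication alone does not give PLM systems true horizontal scale.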

So, why might PLM vendors consider making a change and thinking about new database architectures? I can see a few reasons – the amount of data is growing; companies are becoming even more distributed; the design anywhere, build anywhere philosophy is coming into real life. The cost of infrastructure and data services is becoming very important. At the same time, for all companies performance is an absolute imperative – slow enterprise data management solutions are a thing of the past. Optimizing workload and data processing is an opportunity for large PLM vendors as well as small startups.

What is my conclusion? Today, large PLM implementations are signaling that they are reaching technological and product limits. It means existing platforms are approaching a peak of complexity, scale and cost. To make the next leap, PLM vendors will have to rethink the underlying architecture, manage data differently and optimize the cost of infrastructure. Data management architecture is the first thing to be reconsidered – which means the end of existing “single database” architectures. Just my thoughts…

Best, Oleg



  • I’m just a little curious. Has any company truly realized a cradle-to-grave single database system – even if it’s multiple databases under one software product? I ask this question because I truly don’t know; I’ve never seen one. I’m sure there’s a mid-sized company out there somewhere, but all the big ones have always been divided. If anyone has examples, I’d love to know. I’m just wondering if I’m better off looking for a sasquatch or not.

  • beyondplm

    Ed, I feel we are looking in opposite directions. You look “outside of PLM” and I look “inside of PLM”. You are questioning an “overall company database management system”, which I doubt is possible – ERP, PDM, PLM… I’m not sure you can remove pieces from that puzzle. On my side, I’m mostly talking about “PLM architecture” as a system relying on “single database storage/tech”.

  • Colin Bull

    I have worked on a system for a company where they employed the CAD vendor’s PDM. Once an assembly or CAD model was released, it would trigger an upload to a master PLM system to incorporate into an overall BOM.

  • beyondplm

    Colin, thanks for this example! Transferring data from a CAD/PDM environment into a single central PLM is a very common scenario. If a company is working with multiple CAD solutions, the chance of seeing a multi-PDM + PLM architecture is very high.
