Traditional PLM RDBMS architecture is too expensive and won’t scale for the cloud

For many years, the database has been the most critical element of PLM infrastructure. After all, PLM systems are data management systems capable of holding information about products, engineering models and related data. In fact, every PLM system available on the market today uses a relational database (RDBMS) to store and manage information. The short list of RDBMS options includes Oracle, Microsoft SQL Server and IBM DB2. These are expensive choices, but PLM is expensive software after all.
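To make this concrete, here is a minimal sketch of how product structure data typically lands in a relational database. The schema below (an item master table plus BOM lines) is my own simplified, hypothetical example, not any vendor's actual data model; SQLite is used only because it runs anywhere.

```python
import sqlite3

# Hypothetical, simplified PLM schema: item master + BOM lines.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item (
    item_id   INTEGER PRIMARY KEY,
    number    TEXT NOT NULL UNIQUE,          -- part number
    name      TEXT NOT NULL,
    revision  TEXT NOT NULL DEFAULT 'A'
);
CREATE TABLE bom_line (
    parent_id INTEGER NOT NULL REFERENCES item(item_id),
    child_id  INTEGER NOT NULL REFERENCES item(item_id),
    quantity  REAL NOT NULL DEFAULT 1,
    PRIMARY KEY (parent_id, child_id)
);
""")

# A tiny assembly: one parent item with two components.
conn.executemany("INSERT INTO item (number, name) VALUES (?, ?)",
                 [("ASM-100", "Pump assembly"),
                  ("PRT-200", "Housing"),
                  ("PRT-300", "Impeller")])
conn.executemany("INSERT INTO bom_line (parent_id, child_id, quantity) VALUES (?, ?, ?)",
                 [(1, 2, 1), (1, 3, 1)])

# A typical PLM query: expand the single-level BOM of the assembly.
for row in conn.execute("""
    SELECT p.number, c.number, b.quantity
    FROM bom_line b
    JOIN item p ON p.item_id = b.parent_id
    JOIN item c ON c.item_id = b.child_id
    WHERE p.number = 'ASM-100'"""):
    print(row)
```

This is exactly the kind of transactional, join-heavy workload RDBMS platforms were built for, which is why they became the backbone of PLM.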

Cloud is trending. The adoption of cloud systems is increasing, and I can see growing awareness of and interest in cloud software among manufacturing companies. I caught an interesting number in a recent webinar poll presented by Chad Jackson of Lifecycle Insights: 78% of respondents are either implementing cloud solutions or assessing and gathering information about the cloud.

All PLM vendors are thinking about how to move their existing systems to the cloud. This is an interesting exercise, because these systems were not designed for cloud infrastructure. According to the same webinar, all existing PLM systems were built back in the 1990s, using pre-internet technology, developed as toolkits and specialized for on-premise deployment. I might disagree with the dates, but even the newest PLM systems built 10 years ago use the same RDBMS-driven technology.

Because the database is such a fundamental element of PLM software, the RDBMS will be the key factor defining the options any PLM vendor has for migrating its software to the cloud.

The DATAVERSITY article Four Database Options for Migrating Applications to the Cloud by Jeff Boehm, Chief Marketing Officer of software company NuoDB, gives you some idea of what a typical enterprise software company faces when deciding to move an existing PLM architecture to the cloud. Read the article and draw your own conclusions.

Here is the important passage, which addresses the challenges of relational databases when migrating to the cloud environment.

Dominated by megavendors (e.g. Oracle, Microsoft SQL Server, IBM) and open source options (e.g. MySQL, Postgres), these databases have an advantage in that traditionally on-premises applications are already architected to support two or more of these databases. As a result, you can minimize application layer changes and reduce time to market.

Unfortunately, such monolithic, single-server systems often struggle to capitalize on cloud advantages such as on-demand capacity, commodity hardware structures, and distributed computing. The result is that this strategy can easily result in the highest total cost of ownership and an architecture that does not scale elastically with the cloud applications.

The following picture (credit to a CouchDB slide deck) provides a good visualization of the problem.


I like the following conclusion from the DATAVERSITY article. It makes sense to me.

Ultimately, migrating an application to the cloud means identifying the right cost and architecture structure that satisfies customer requirements while keeping the total cost of ownership low.

So, what is the right architecture for the cloud? The article took me back to one of my blogs and presentations from a few years ago – PLM and data management in the 21st century. Database and data management technology is going through a Cambrian explosion of options and flavors, the result of a massive amount of development coming from open source, the web and other places. The database is moving from "solution" to "toolbox" status. A single database (mostly an RDBMS) is no longer the obvious choice for all of your development tasks.
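To illustrate the "toolbox" idea, here is a rough sketch of polyglot persistence applied to PLM data: each shape of data is routed to the store that suits it best. The classes below are hypothetical in-memory stand-ins for a relational database, a document store and a graph database; a real system would wire in actual services behind the same facade.

```python
# Hypothetical polyglot-persistence sketch for PLM data.
# Each stand-in class represents a different kind of store.

class RelationalStore:
    """Item master and revisions: structured, transactional data."""
    def __init__(self):
        self.rows = {}
    def save_item(self, number, attrs):
        self.rows[number] = attrs

class DocumentStore:
    """CAD/file metadata: flexible, schema-less documents."""
    def __init__(self):
        self.docs = {}
    def save_doc(self, key, doc):
        self.docs[key] = doc

class GraphStore:
    """Product structure: relationship-heavy queries such as where-used."""
    def __init__(self):
        self.edges = []
    def link(self, parent, child):
        self.edges.append((parent, child))
    def where_used(self, child):
        return [p for p, c in self.edges if c == child]

class PlmRepository:
    """Facade that routes each kind of data to the store that fits it."""
    def __init__(self):
        self.items = RelationalStore()
        self.files = DocumentStore()
        self.structure = GraphStore()

    def add_part(self, number, attrs, cad_meta=None, parent=None):
        self.items.save_item(number, attrs)
        if cad_meta:
            self.files.save_doc(number, cad_meta)
        if parent:
            self.structure.link(parent, number)

repo = PlmRepository()
repo.add_part("ASM-100", {"name": "Pump assembly"})
repo.add_part("PRT-300", {"name": "Impeller"},
              cad_meta={"format": "STEP"}, parent="ASM-100")
print(repo.structure.where_used("PRT-300"))   # -> ['ASM-100']
```

The point of the sketch is the facade, not the specific stores: the application talks to one repository, while the persistence layer can scale each store independently in the cloud.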

What is my conclusion? Existing PLM architectures were created at a time when it was absolutely reasonable to build on a single database platform such as an RDBMS. These databases are the mainstream solution in every enterprise IT organization, so a PLM system on top of such infrastructure was an easy sell to IT compared to advocating for open source databases and hybrid solutions. Taking the same architecture and migrating it to the cloud can be a reasonable step for the "cloud servers" stage of cloud IT transformation. But these architectures will carry the highest total cost of ownership and will limit the elasticity of the system. You might consider applying polyglot persistence principles, as sketched above, in the database architecture of future PLM cloud solutions. This is a wake-up call to all PLM architects deciding how to migrate existing PLM architectures to the cloud. Just my thoughts…

Best, Oleg

Want to learn more about PLM? Check out my new PLM Book website.

Disclaimer: I’m co-founder and CEO of openBoM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups and supply chains. My opinion can be unintentionally biased.



  • Hi Oleg, so would you refute your comment from 4 years ago? 😉

  • beyondplm

    Hi Yoann, thanks for asking – this is a great question!

    My old comment was about “data modeling concepts” and the way these concepts are realized in the database. As we discussed earlier, you can implement them in an RDBMS, a graph database or an XML database. From that standpoint, the database won’t be an issue. The topic I’m addressing in this article is the compatibility between traditional RDBMS-driven architecture and the cloud.

    So, while the first comment is more about the PLM paradigm, the second is a purely technical / architectural point. The cost of RDBMS-based architecture is too high for the cloud – that is only my opinion, of course. Also, “price” is a relative term. What is costly for a mid-size manufacturing company can be very affordable for a large OEM. What I learned recently is that large IT organizations are buying “bulks” of virtual server resources on Azure or Amazon and using them much like IT leased Dell racks before.

    Best, Oleg

  • Good point, sorry, I should have posted that on the other article you were referring to, where you were comparing technologies. In various blog posts I discussed the opportunity for a sort of MDM layer to provide data search and modeling capability, which would be useful in any circumstances, by relying on various technologies. It would be nice to have more insight into how openBoM was built – not necessarily a fully transparent presentation, but maybe just a way to understand why companies may need different DB technologies for different parts or capabilities of their app.

  • beyondplm

    Yes, the post you mentioned is relevant. The problem I outlined in this blog is also outlined in the presentation. Here is the slide.

    openBoM relies on a microservice architecture and database-as-a-service layers that can scale globally.

    I recommend the following book to understand the potential differences between the database technologies available today.