Ambi-mongo, neo-retro and future PLM paradigms

ZDNet article – In database category race, candidates turn non-partisan – brings a fascinating perspective on the modern development of data management technologies. It raises a few interesting trends with very funny buzzwords – Ambi-Mongo, Neo-retro and some others. Read the article and draw your own conclusions. One of the main points in the article is the reconciliation of different database management architectures originally introduced as different classes of the NoSQL database stack.

Here is an interesting passage:

Almost eight years ago, the term NoSQL was coined, describing a new class of database. Products and open source projects in this category freed users from systems that could not accommodate variability in table schemas, that made clustered, geo-distributed infrastructure a major hassle, that employed different data representation paradigms than do most programming languages and which blocked operations while new data was written to the database.

But with these new freedoms, many RDBMS features that are so fundamental to most line-of-business systems were lost, going well beyond de-emphasis of the Structured Query Language (SQL) itself. Fundamental features that were sacrificed included indexes on columns other than the primary key, explicit joins between tables and so-called “ACID” guarantees — where updates to one part of the database and compensating changes to other parts (like a credit, and a matching debit) happened indivisibly.

Rapprochement. The reconciliation of these database management architectures is finally in full swing. This week’s announcement from MongoDB (the canonical document store NoSQL database) of its eponymous database’s upcoming 3.4 release, along with the 3.1 beta release of graph database-in-chief Neo4j two weeks ago, are a big part of this. Other changes too, have been piling up, each one a peacenik’s daisy in bringing an end to the database wars.

A database is one of the most critical elements of PLM architecture. Since the early days of PDM and PLM development, RDBMS (Relational Database Management Systems) has been the technology used by practically all vendors to manage data. One of the outcomes of the last decade of web and internet development is broad innovation in new technologies and systems to store and manage data. The innovation was driven by the explosive growth of data and the need to manage this data in a scalable way. The name behind the trend was NoSQL, but the name alone hardly explained the details.
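
To make the schema-flexibility point concrete, here is a minimal sketch (not any vendor's actual data model – the part numbers and fields are made up for illustration) of how a document store lets records in the same collection carry different attributes, something a fixed relational schema cannot do without an ALTER TABLE:

```python
# Hypothetical product items in a document-style collection: each record
# is a free-form document, so new attributes appear without schema changes.
items = [
    {"part_number": "PN-100", "name": "Bracket", "material": "aluminum"},
    {"part_number": "PN-200", "name": "Controller", "firmware": "v2.1",
     "certifications": ["CE", "FCC"]},  # extra fields, no migration needed
]

# Discover the effective schema by collecting every distinct field name.
fields = sorted({key for item in items for key in item})
print(fields)
# → ['certifications', 'firmware', 'material', 'name', 'part_number']
```

The flip side, as the quoted passage notes, is that such flexibility historically came at the cost of secondary indexes, joins and ACID guarantees.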

My three-year-old article PLM and Data Management in the 21st Century is a good starting point to explore the variety of new data management technologies. The following table captures my take back in 2013 on what each NoSQL technology can bring to PLM development.

[Image: PLM and database options]

However, database technology and the way we manage data is only one way to look at data management. The other side of the coin is existing PLM paradigms, which rely heavily on a “database in the middle” that can centralize and manage the data. That worked well for large and centralized factories. At the same time, the manufacturing world around us is changing – the paradigm of a network is actually the one that better reflects modern manufacturing trends. Check out my article – Innovation, Networks and PLM database paradigms.

For such manufacturing networks, a centralized, company-oriented database architecture can be sub-optimal. A good example is a company such as Local Motors, which points to its distributed global community as the main secret of its success and to microfactories as a core element of it. But I can see a growing number of large manufacturing companies asking how to go beyond the existing paradigm of a central database to store and manage disparate sets of data and distributed processes.

[Image: Local Motors]

PLM vendors are actively transforming their existing product suites to the cloud. As the process of cloud transformation speeds up, the question of how traditional RDBMS-based architectures will scale for the cloud becomes very important.

Migrating an RDBMS-based architecture to the cloud can be a reasonable step for the cloud servers stage of cloud IT transformation. But these architectures will carry the highest total cost of ownership and will limit the elasticity of the system. You might consider applying polyglot persistence principles in the database architecture of future PLM cloud solutions. This is a wake-up call to all PLM architects thinking about how to migrate existing PLM architectures to the cloud.
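
The polyglot persistence idea can be sketched in a few lines. Everything below is a hypothetical illustration, not any vendor's architecture: the store kinds, category names and the in-memory `Store` class are stand-ins for real database clients (e.g. a document store for item masters, a graph store for where-used links, a relational store for change orders).

```python
class Store:
    """Stand-in for a database client. A real system would wrap a
    document, graph, or relational driver behind the same interface."""
    def __init__(self, kind):
        self.kind = kind
        self.records = {}  # in-memory substitute for actual persistence

    def put(self, key, value):
        self.records[key] = value

# Polyglot persistence: one store per data shape, each chosen
# for the access pattern it serves best.
stores = {
    "item_master": Store("document"),      # flexible, evolving attributes
    "where_used": Store("graph"),          # relationship traversal
    "change_orders": Store("relational"),  # ACID transactions, joins
}

def save(category, key, value):
    """Route a record to the store responsible for its category."""
    stores[category].put(key, value)

save("item_master", "PN-100", {"name": "Bracket", "material": "aluminum"})
save("where_used", "PN-100", ["ASM-9"])
print(stores["item_master"].kind)  # → document
```

The design choice worth noting is the routing layer: application code talks to `save`, so individual stores can later be swapped for managed cloud services without touching callers.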

What is my conclusion? Reconciliation of different database technologies will introduce a lot of new opportunities for PLM developers to rethink the way data is stored across networks of manufacturing companies. New data modeling combined with the ability to scale using cloud technologies can introduce a new paradigm for PLM systems. Just my thoughts…

Best, Oleg

Want to learn more about PLM? Check out my new PLM Book website.

Disclaimer: I’m co-founder and CEO of openBoM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups and supply chains. My opinion can be unintentionally biased.
