PLM Platforms: Retirement or a noSQL Knock-Out?

I find it interesting that nobody talks much about PLM platforms these days. It seems to me that PLM vendors and service providers are focused on seemingly more important issues, such as industry orientation, out-of-the-box functionality, SaaS and on-demand delivery, or even open-source business models. However, what is happening in the PLM platform department? Is everything fine and well adjusted to the weather outside? Do we have enough power to handle all the data we keep on PLM platforms these days? Can we scale up in capacity? Can we support agile system development by customers? These and many other questions came to mind. However, I want to focus on two specific trends: the need to manage data for the long term, and noSQL trends in data management.

Long Term Product Data
This is not a big secret: we produce more and more data on a daily basis, and product development and manufacturing companies are no exception. Bigger companies such as aerospace OEMs recognized this problem a long time ago. Their working procedures require keeping data for 50+ years, as well as tracking information about each aircraft by serial number. Smaller manufacturers are just getting there, and the additional weight of regulation is pushing them even faster toward the point where the amount of data grows beyond control. There are two aspects of long-term data retention in PLM: (1) 3D and geometrical data; (2) non-geometrical and process-related information. The most interesting project I have found in this area is ProSTEP's LOTAR, so I am watching the progress of this activity. The LOTAR timeline is seven years, which is probably okay when we are talking about 50-year data retention.

noSQL Trends
This is not a top secret either. The really big guys are not running SQL these days – Google, Amazon, Facebook… All of these companies developed their own data management facilities. However, despite the coolness effect, the reason behind these initiatives is simple. The ugly truth is that our good friend, uncle SQL, is reaching middle age. And even if you do not hear voices calling for SQL's retirement, the question of what life could look like "after SQL" is a fair one. If you are not familiar with the noSQL term, I'd recommend taking a look at this Wikipedia article. I also found the following article – The noSQL movement, written by Mark Kellog on his blog – to be a very interesting piece of research in this area.

PLM Platforms Data Foundation
All PDM/PLM platforms available on the market today rely on SQL database technology. No surprise there – SQL is the mainstream technology in the enterprise. I can see two potential problems related to that: change management and data capacity. The first one, change management, seems to be the critical one. Customers struggle with the level of flexibility PDM/PLM systems can provide. Solutions built on top of SQL data are sensitive to upgrades and data model changes. PLM vendors have developed sophisticated mechanisms to manage this, but the problem is still in place. The second one is data capacity. This problem has not yet surfaced in its full scope. I believe that with future PLM implementations there is a real chance of discovering scale-related problems.
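To make the change-management problem concrete, here is a minimal sketch. It is a hypothetical illustration using SQLite and JSON from the Python standard library (not any actual PLM vendor's schema), contrasting a fixed relational part table, where a new attribute forces a schema migration, with a schema-less document style, where each record is self-describing and new attributes need no migration step:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# Relational style: the schema fixes the data model up front.
conn.execute("CREATE TABLE part (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO part (id, name) VALUES (1, 'bracket')")

# Adding an attribute later means a schema migration plus a data update.
conn.execute("ALTER TABLE part ADD COLUMN material TEXT")
conn.execute("UPDATE part SET material = 'steel' WHERE id = 1")

# Document style: each part is a self-describing JSON blob, so new
# attributes appear without touching the schema at all.
conn.execute("CREATE TABLE part_doc (id INTEGER PRIMARY KEY, body TEXT)")
new_part = {"name": "bolt", "material": "steel", "finish": "zinc"}
conn.execute("INSERT INTO part_doc VALUES (2, ?)", (json.dumps(new_part),))

doc = json.loads(
    conn.execute("SELECT body FROM part_doc WHERE id = 2").fetchone()[0]
)
print(doc["finish"])  # prints: zinc
```

The trade-off, of course, is that the relational schema gives the database engine knowledge it can use for integrity checks and query optimization – exactly what a document store gives up. This is one reason the "after SQL" question is not a simple one.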

What is my conclusion today? I think technology matters. The big boys have developed alternative non-SQL data storage options. At a time when SQL-based relational databases power our PLM platforms, vendors need to think about what comes next. The initial signs of a need to manage all of a company's product lifecycle data for 50+ years are already in place. There are interesting alternatives visible; however, they require further investigation by vendors.

Just my thoughts…
Best, Oleg




  • Oleg:

    Are you speaking of some particular brand of SQL (e.g., Microsoft, Pervasive, MySQL, Oracle), or of ANSI Structured Query Language relational database management systems (RDBMS) in general?

    I ask because scalability is related as much to the architecture of the database itself as it is to the RDBMS upon which it resides. Furthermore, the scalability of the storage engine (that part of the RDBMS that actually stores and retrieves the data on request) is, to a great degree, dependent upon the hardware upon which it resides.

    It seems to me that, in the absence of specifics, you paint with too broad a brush in your argument. Can you be more specific, please?

    /s/ Richard D. Cushing

  • Richard, thanks for your comment! The point I want to discuss is the option of finding an alternative solution for managing product data. Especially when we are talking about a solution that needs to maintain data for 50+ years, lots of questions arise that go beyond the performance of a specific brand (i.e., MySQL, Oracle, etc.): aspects related to change management and the ability to manage the lifecycle of data schemas and models. There are no such solutions in place at the moment (at least none I am familiar with). However, the need is clearly coming. This is the next-size solution compared to what we have in production today. What will be the architecture to manage that? Just thoughts… Best, Oleg

  • Oleg,
    I completely agree with your perspective in this blog post. I dealt with the same issue when implementing a PLM solution for a large customer. I ended up building a noSQL PLM solution and can outline the advantages we achieved with this approach.

    Early in the project we reviewed many PLM solutions. Every PLM system I evaluated was built using only a relational database as a backend. As you know, designing an application on an RDBMS requires knowing a lot about your data model and data relations up front, since the application's storage is based on the data model. However, PLM requirements change, and applications need to evolve. This often requires that your data model change, and these schema changes often require complicated data migration steps. This "brittleness" is one of the major challenges of using only the RDBMS model for PLM.

    I decided to approach the storage layer of the Zdesign PLM system in a different way. I knew the benefits of using an object database, and used an OODBMS for storing all the configuration management objects within Zdesign PLM. Zdesign's object storage is a very straightforward extension of the software, since everything is an object within Zdesign PLM. In the object-oriented database world it is easy to change an application and its data model over time, because you can easily attach or change object attributes. The object database persists whole objects, so you have no obligation to tell the storage backend about object changes: the object database persists them automatically. Zdesign provides a built-in object database in addition to connectivity to all the leading relational databases when needed.

    The object database in Zdesign PLM is suitable for both small and large amounts of data. Enterprise-level implementations of Zdesign PLM work with terabytes stored in the Zdesign PLM object database. Scalability is achieved on several levels, either by storage replication or by distributing object database storage over multiple servers. Because of these advantages, the effort and cost of building and maintaining a large Zdesign PLM installation are lower than for comparable storage systems.

    I hope this use case helps support the arguments you have made in this blog.
    Best, David

  • David, thanks for sharing your experience. Object database technology was a dream of PDM 10-15 years ago. As far as I remember, the first version of MatrixOne back in the early days used an object database. I still think the idea behind it is very powerful. However, my experience (some time ago) was negative because of performance, memory management, and support. Which specific database are you using? Did you build your own, or tune an existing object data store? These days I see many more options…
    Best, Oleg

  • Oleg,
    Yes, the first versions of Matrix were built on Objectivity. That provided an effective DB for an OO PDM, i.e., the mapping of the configuration of types, attributes, and relationships from the business administrator application to the database was very straightforward. My experience (as a customer) from that time, in addition to performance, memory management, and support, was the problematic concurrency: the implementation back then locked large parts of the database for simple object updates, so it was not possible to have more than a few users working with the system in parallel.
    With the move to Oracle, MatrixOne kept the OO model and implemented a layer that "translated" the objects to the relational model. That probably came at the expense of some performance, but it was made up for by the highly optimized Oracle implementation and the possibilities of Oracle performance tuning. And the other issues, such as locking, support, and memory management, were also gone.
    Best regards, Jens

  • Jens, thanks for your comment! I had heard about that OO experience in old Matrix versions. My take is that back then it was a very early technological experiment. However, we all know that 10-20-year timeframes can change a lot. When I look at the non-SQL databases available today, I (still) see the potential for a more efficient data management platform compared to SQL-based ones. Best, Oleg

  • Pingback: SAP Goes for Database, What Is PLM Path? « Daily PLM Think Tank Blog
