When CAD, PDM, PLM Platforms Shift, What Happens to the Product Data?

Oleg
24 August, 2025 | 7 min read

In two recent articles, I reflected on the lifecycle of engineering software. The first, "30 Years of PDM Evolution: What Changed and What Didn’t?", explored how PDM moved from a CAD add-on feature to a product data backbone and then, in many ways, back again. The second, "How Immortal Are Today’s PLM Leaders?", asked about the longevity of existing PDM and PLM providers. As much as we love the leaders, the answer, based on history, is no: even dominant platforms eventually fade.

In today’s article I want to help manufacturing companies and PDM/PLM practitioners think about their software and data strategies. We live in a transformative time. What will happen to my IP, data, projects, and other information? How should companies think about the long-term survival of their data? How can they operate today while preserving their data for the future?

The Data Longevity Problem

Engineering and manufacturing teams depend on software that is, in many cases, decades old. CAD systems developed in the 1990s (SOLIDWORKS, AutoCAD, CATIA, NX, Creo, Autodesk Inventor, and many others) still run mission-critical operations. Many companies rely on heavily customized PDM and PLM platforms built 20–30 years ago, or sometimes even on homegrown legacy tools.

Vendors, meanwhile, promote “new platforms” that often amount to hosted versions of the same old architectures, rebranded as cloud. At the same time, truly cloud-native platforms are emerging, but adoption is partial and cautious. And now, AI has entered the scene, changing the conversation yet again by promising new insights—if only the data is ready.

The core challenge is this: CAD design and product lifecycle data must live for decades, long after individual software platforms have been retired, rebranded, or sunset. This isn’t just a technical problem; it’s a business continuity issue. From legal compliance to customer support to the preservation of intellectual property, companies must ensure their product data survives the lifecycle of any particular system.

And there is another dimension: knowledge preservation. As experienced engineers retire, much of the tacit knowledge about how products were designed and managed risks being lost. Unless the data is structured, connected, and accessible, future teams may struggle to understand decisions, trace designs, or reuse valuable know-how.

Data Longevity in Today’s Landscape

The current CAD/PDM/PLM landscape is a patchwork of old and new:

  • CAD files remain dominant. Despite advances in cloud CAD, most design data is still locked in file formats. These files are durable, but they are also siloed and proprietary.
  • PDM platforms linger. Some, like SOLIDWORKS PDM or Autodesk Vault, are maintained by vendors. Others, like Agile PLM, SmarTeam, MatrixOne, and Intralink, are being retired but continue to hold critical historical data.
  • PLM backbones are over-customized. Many enterprises run massive PLM implementations that depend on custom workflows and schemas created 10–20 years ago, making migration risky and expensive.
  • Cloud systems are emerging but incomplete. They offer scalability and collaboration but raise new concerns about vendor lock-in and data accessibility.
  • AI depends on data quality. AI cannot magically interpret messy, inconsistent, or inaccessible product data. Without structure and traceability, AI results will be unreliable.

The lesson is simple: applications come and go, but data must endure.

Five Data Strategies to Reduce Risk

If data longevity is the goal, how do we get there? Here are five strategies to consider.

Strategy #1: Separate Data from Applications

The first principle is independence. Data should not be locked inside the proprietary schema of one vendor’s application.

Practical steps:

  • Favor systems that support open formats (neutral geometry files, STEP, XML, JSON, CSV).
  • Prioritize vendors who expose APIs for full export, not just selective reporting.
  • Create processes to periodically export data into neutral, reusable structures.
  • Use APIs to access data in a logical structure that is ready for backups (a minimal sketch follows below).

Think of it as “owning your data” rather than “renting it through an application.”
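
To make this concrete, here is a minimal sketch of a periodic export job in Python. The endpoint, token, and payload shape are all assumptions for illustration; every vendor’s actual API will differ:

```python
import json
import datetime
import urllib.request

# Hypothetical endpoint and token -- replace with your vendor's actual API.
API_URL = "https://plm.example.com/api/items"
API_TOKEN = "replace-me"

def export_items(url: str, token: str) -> list:
    """Fetch all items from the (hypothetical) PLM API as plain dictionaries."""
    request = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)

def write_neutral_snapshot(items: list) -> str:
    """Write a dated, application-independent JSON snapshot."""
    path = f"plm-export-{datetime.date.today().isoformat()}.json"
    with open(path, "w", encoding="utf-8") as f:
        json.dump(items, f, indent=2)
    return path

print("Snapshot written:", write_neutral_snapshot(export_items(API_URL, API_TOKEN)))
```

Run on a schedule (cron or a CI job), this produces dated, application-independent snapshots that you own regardless of what happens to the application.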

Strategy #2: Think in Threads and Connections, Not Silos

In the past, companies looked for one system to rule them all. But monolithic solutions rarely age well. Instead, resilience comes from connections, not silos.

This means investing in:

  • Federated architectures where data lives in multiple systems but is connected.
  • Digital threads that trace relationships between requirements, designs, parts, manufacturing, and service.
  • Linking strategies that allow information to move across systems rather than being trapped within one.

The more semantically connected your data is, the easier it becomes to replace or upgrade applications without breaking the chain of knowledge.
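
As a minimal sketch of what typed connections look like in practice, here is a plain Python representation of a digital thread; all record identifiers and link types are invented for illustration:

```python
# A digital thread as explicit, typed links between records that may live
# in different systems (requirements tool, CAD/PDM, ERP, service).
links = [
    ("REQ-101", "satisfied_by", "CAD-DOC-55"),  # requirement -> design document
    ("CAD-DOC-55", "defines", "PART-778"),      # design -> part
    ("PART-778", "used_in", "BOM-12"),          # part -> assembly BOM
    ("BOM-12", "built_as", "SN-0042"),          # BOM -> serialized unit in the field
]

def trace(start: str, links: list) -> str:
    """Follow outgoing links from a record, e.g. from requirement to serial number."""
    chain, current = [start], start
    while True:
        hops = [(rel, dst) for src, rel, dst in links if src == current]
        if not hops:
            return " ".join(chain)
        rel, current = hops[0]
        chain.append(f"--{rel}--> {current}")

print(trace("REQ-101", links))
# REQ-101 --satisfied_by--> CAD-DOC-55 --defines--> PART-778 ...
```

The value is in the explicit, typed relationships: any future system that can read the links can reconstruct the thread, even after the applications that created the records are gone.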

Strategy #3: Cloud with Exit in Mind

Cloud adoption is inevitable, but companies should approach it carefully. SaaS platforms reduce IT burden but can create the ultimate lock-in if data is only usable within the vendor’s ecosystem.

Key questions to ask:

  • Can you export all your data in usable, standard formats?
  • Are there bulk export mechanisms that provide complete datasets, not just partial views?
  • Does the vendor rely on open protocols and APIs (REST, GraphQL), or only on proprietary connectors?

The guiding principle: cloud is good for operations, but your exit plan must always exist.
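
One way to test the exit plan before committing is to script the export path end to end. The sketch below assumes hypothetical paginated export and count endpoints; the point is to verify that a complete bulk export is possible at all:

```python
import json
import urllib.request

# Both endpoints are hypothetical -- the exercise is to confirm, before you
# commit to a SaaS platform, that a complete bulk export is possible at all.
EXPORT_URL = "https://saas-plm.example.com/api/export?page={page}"
COUNT_URL = "https://saas-plm.example.com/api/items/count"
TOKEN = "replace-me"

def get_json(url: str):
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def bulk_export() -> list:
    """Pull the dataset page by page; stop when a page comes back empty."""
    records, page = [], 1
    while True:
        batch = get_json(EXPORT_URL.format(page=page))
        if not batch:
            return records
        records.extend(batch)
        page += 1

records = bulk_export()
reported = get_json(COUNT_URL)["total"]
assert len(records) == reported, "Export is partial -- a lock-in red flag"
```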

Strategy #4: Prepare for AI, But Don’t Build on Sand

AI is transforming the conversation around engineering software. Yet AI models are only as good as the data foundation. Garbage in, garbage out.

Companies need to:

  • Ensure BOM consistency—part numbers, attributes, and revisions must be reliable.
  • Invest in metadata quality—capturing design intent, linking requirements, and making context explicit.
  • Preserve revision traceability—AI must know which version of a design it is reasoning about.
  • If developing custom LLMs, plan for their portability. Models should be trained on data representations that are independent of a single vendor’s infrastructure.

In other words: don’t rush to build AI copilots on top of weak, fragmented, or untraceable data.
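
Much of this groundwork can be automated. Here is a minimal sketch of a BOM consistency check; the rows, field names, and rules are invented for illustration and would need to match your actual data model:

```python
from collections import Counter

# Example BOM rows -- invented data for illustration.
bom = [
    {"part_number": "PN-1001", "revision": "B", "description": "Bracket"},
    {"part_number": "PN-1002", "revision": "",  "description": "Fastener"},  # missing revision
    {"part_number": "PN-1001", "revision": "A", "description": "Bracket"},   # conflicting revisions
]

def check_bom(rows: list) -> list:
    """Return human-readable findings an AI pipeline would otherwise trip over."""
    findings = []
    counts = Counter(r["part_number"] for r in rows)
    for pn, n in counts.items():
        if n > 1:
            findings.append(f"{pn}: appears {n} times with possibly conflicting revisions")
    for r in rows:
        if not r["revision"]:
            findings.append(f"{r['part_number']}: missing revision")
    return findings

for finding in check_bom(bom):
    print(finding)
```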

Strategy #5: Continuous Data Preservation, Not One-Time Migration

Historically, companies treated data preservation as a migration project—something painful but necessary once a decade. That mindset is risky.

Instead, treat data longevity as a continuous practice:

  • Regularly export data into neutral storage formats.
  • Maintain a parallel archive (e.g., a data lake or graph) that captures essential relationships and history.
  • Validate archives periodically to ensure they remain usable.

Think of this as a backup strategy for product data—not just for disaster recovery, but for long-term resilience.
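
What periodic validation can look like in practice: the sketch below re-reads each archived export, confirms it still parses, and compares it against a recorded checksum. The manifest layout and paths are assumptions for illustration:

```python
import hashlib
import json
import pathlib

# Archive directory and manifest layout are invented for illustration.
ARCHIVE_DIR = pathlib.Path("plm-archive")

def checksum(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def validate_archive(manifest_path: pathlib.Path) -> list:
    """Re-read every archived file, confirm it parses and matches its recorded hash."""
    manifest = json.loads(manifest_path.read_text())  # {"file.json": "sha256...", ...}
    problems = []
    for name, recorded in manifest.items():
        path = ARCHIVE_DIR / name
        if not path.exists():
            problems.append(f"{name}: missing")
            continue
        if checksum(path) != recorded:
            problems.append(f"{name}: checksum mismatch (bit rot or tampering)")
            continue
        try:
            json.loads(path.read_text())  # "usable" here means: still parses
        except ValueError:
            problems.append(f"{name}: no longer parses")
    return problems

print(validate_archive(ARCHIVE_DIR / "manifest.json") or "Archive OK")
```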

Data-First and Data-Redundancy Strategies

In the past, the playbook was simple: pick the “right vendor” and commit to their ecosystem. But history shows that no vendor is truly safe from disruption. The shift must be from software-first to data-first thinking.

That means building semantically connected datasets that can survive independently of any single application.

Technical approaches include:

  • Graph databases and semantic web standards (RDF, OWL) for representing data and relationships in a complete way, independent of proprietary data schemas.
  • JSON and XML for flexible, machine-readable exports.
  • Neutral engineering formats like STEP, PDF/A, and CSV to ensure accessibility decades from now.
  • Geometry data formats that are de facto standards, used across many applications and platforms.
  • Data storage strategies that combine cloud platforms with continuous backup/restore capabilities, ensuring redundancy.

Importantly, redundancy should not be seen as inefficiency. Just as companies keep financial backups and redundant IT systems, redundant data storage is insurance. It ensures that product knowledge can be re-used, re-linked, and re-interpreted in the future, even as applications evolve.

This is also the best way to prepare for AI. By creating semantically rich, connected data, companies ensure that future AI copilots and agents can query, contextualize, and reason over product data without being trapped by the limitations of past systems.
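
As a small illustration of the graph and semantic web approach, here is a sketch using the open-source rdflib Python library; the namespace and identifiers are invented. The point is that the triples, serialized to standard Turtle, remain readable without the application that produced them:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Invented namespace and identifiers, for illustration only.
EX = Namespace("https://example.com/product/")

g = Graph()
g.bind("ex", EX)

# A part, its revision, and its place in an assembly -- expressed as
# plain triples, independent of any vendor's database schema.
g.add((EX["PN-1001"], RDF.type, EX.Part))
g.add((EX["PN-1001"], EX.revision, Literal("B")))
g.add((EX["PN-1001"], EX.description, Literal("Bracket")))
g.add((EX["ASM-12"], RDF.type, EX.Assembly))
g.add((EX["ASM-12"], EX.hasComponent, EX["PN-1001"]))

# Serialize to Turtle -- a plain-text, standards-based format that any
# future tool (or AI agent) can parse without the original application.
print(g.serialize(format="turtle"))
```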

What is my conclusion? 

We need to move from software selection to data strategy. 

History makes one lesson clear: no software is immortal. CAD, PDM, and PLM systems rise, dominate, and eventually fade. The winners are not those who chose the “perfect system,” but those who built strategies that allowed their data to outlive the applications managing it.

The mindset must shift. Instead of asking: What is the best PLM software to buy today?

Companies should ask: What is my long-term data strategy, and how will it ensure continuity for decades?

Resilience does not come from locking into the biggest vendor or adopting the flashiest new technology. It comes from building a future-proof data layer—independent, connected, redundant, and semantically rich—that can support today’s applications and tomorrow’s innovations.

The message is simple: software dies, but data can live. The companies that embrace this truth will not only preserve their engineering knowledge but also unlock the full potential of AI and digital transformation in the decades ahead.

Just my thoughts… 

Best, Oleg 

Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools, including PDM, PLM, and ERP capabilities. With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinion can be unintentionally biased.
