Fifteen years ago, the PLM world was obsessed with a single phrase: “true cloud.”
If you attended a conference back then, you probably remember the debates. Every vendor had a “cloud” strategy, yet no two meant the same thing. Some merely hosted their old client–server software on rented infrastructure and declared victory. Others tried browser-only front ends but kept the same monolithic databases behind the curtain. A few bold players – true multi-tenant SaaS pioneers – argued that unless the application was born in the cloud, it could never deliver the speed, cost savings, and upgrade agility customers expected.
For manufacturing companies trying to modernize, this was maddening. The promise of the cloud was clear – fewer upgrades, easier collaboration, lower IT overhead – but the label “true cloud” became a marketing weapon, not a technical guarantee. Buyers had to peel back layers of jargon to discover what they were really buying.
Today, I sense the same drama starting to unfold again.
The new magic phrase on everyone’s lips is “AI-native PLM.”
You hear it in keynotes, read it in investor decks, and see it sprinkled across product pages. But just like with “true cloud,” ask ten people what AI-native means, and you’ll get ten different answers. For some vendors, it means a chatbot-style assistant sitting on top of the existing system. For others, it’s an ambitious vision of self-learning workflows, compliance engines that reason about regulations, or procurement agents negotiating with suppliers.
The déjà vu is impossible to ignore.
But behind the slogans, something real is happening – a deeper shift in the very nature of software computation and data management architecture. That shift is bigger than the cloud revolution because it’s not just about where software runs and how it stores data; it’s about how it runs queries and “thinks.”
Let’s talk about it and try to separate marketing slogans from technical architecture.
Remembering the “True Cloud” Years
The early 2010s were full of excitement and confusion. Traditional PLM suites had been built for on-premises deployment: big relational databases, thick desktop clients, and IT teams managing upgrades that could take a weekend or more. Then came the cloud wave. Vendors saw the cost savings and speed of SaaS in other industries and wanted in.
Unfortunately, moving a twenty-year-old PLM codebase to the cloud was not as simple as flipping a switch. Some companies chose the easy path: they hosted their software on AWS or Azure, called it cloud, and left everything else unchanged. The business model switched to subscriptions, but enterprise PLM tends to pull multi-year deals, and those multi-year subscriptions often resembled the old perpetual license plus annual maintenance. Customers didn’t see the promised agility.
A smaller set of companies built multi-tenant SaaS PLM from the ground up. They rewrote the architecture to make upgrades seamless and collaboration instantaneous. But because the term “cloud” covered everything from hosted servers to SaaS, buyers often couldn’t tell the difference. Thus began the debate over what counted as “true cloud.”
That debate was painful at the time, but it taught the industry an important lesson: when marketing labels appear before architectural clarity, confusion and disappointment follow.
A Different Kind of Shift: Deterministic to Probabilistic
To understand why the AI era feels different, we need to look under the hood of PLM software.
Traditional PLM systems – PDM vaults, part catalogs, BOM hierarchies, workflows – were all built on deterministic computation. A relational SQL database stores the information, and when you ask for Part 1234, Revision B, you always get the same record. That predictability is the backbone of compliance, audit trails, BOM records, and ERP transactions. It’s also the reason PLM integrations, though sometimes clumsy, could be trusted to behave consistently.
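To make that concrete, here is a minimal sketch of deterministic retrieval, assuming a hypothetical SQLite vault with a simple parts table; real PLM schemas are far richer, but the behavior is the same: identical inputs always return the identical record.

```python
# A minimal deterministic lookup against a hypothetical "parts" table
# (part_number, revision, description). Same inputs, same record, always.
import sqlite3

conn = sqlite3.connect("plm.db")  # placeholder database file

def get_part(part_number: str, revision: str):
    """Return the single record for a part/revision pair."""
    return conn.execute(
        "SELECT part_number, revision, description "
        "FROM parts WHERE part_number = ? AND revision = ?",
        (part_number, revision),
    ).fetchone()

print(get_part("1234", "B"))
print(get_part("1234", "B"))  # identical output every time
```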
Over the last decade, PLM vendors quietly evolved their data stacks beyond the single relational database. We began to see polyglot persistence:
- NoSQL stores for documents and catalogs that need to scale horizontally.
- Graph databases to model complex relationships among assemblies, requirements, and digital twins.
- Search engines and analytics databases to speed up reporting and insight generation.
Polyglot persistence also brought up the topic of eventual consistency. Read my earlier article – PLM single source of truth and eventual consistency.
These were important innovations, yet the fundamental nature of the software remained deterministic. Run a query two times; the system executes predefined logic and returns the same output every time.
Enter AI – particularly large language models (LLMs) and other machine-learning techniques. For the first time, we introduce probabilistic computation into the heart of enterprise engineering software.
Ask an LLM to summarize two conflicting change requests or to suggest alternate suppliers for a component, and the answer depends on its training data, its internal weights, the context window, even the phrasing of the prompt. Run the same request twice, and the output might vary. This is powerful because it allows reasoning, inference, and contextual understanding that deterministic SQL queries could never deliver. But it also challenges the trust assumptions PLM was built on.
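Here is a minimal sketch of what that looks like in code, using the OpenAI Python client purely as an example; the model name and the change-request wording are placeholders, and any LLM API with sampling enabled would behave the same way.

```python
# A probabilistic call: the same prompt can produce different answers.
# The model name and change requests below are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

prompt = (
    "Change request CR-101 asks to thicken a bracket wall for stiffness; "
    "CR-102 asks to reduce the bracket weight. Summarize the conflict "
    "and suggest a resolution."
)

def summarize(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": text}],
        temperature=0.7,      # sampling makes the output non-deterministic
    )
    return response.choices[0].message.content

# Run the same request twice: the wording, and sometimes the substance,
# of the answer can differ -- unlike the SQL lookup above.
print(summarize(prompt))
print(summarize(prompt))
```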
That is the real paradigm shift – not the presence of AI features, but the collision between deterministic rule-driven systems and probabilistic reasoning engines.
Marketing Déjà Vu: “AI-Native” Meets “True Cloud”
Because this shift is subtle, it’s tempting for vendors to compress it into a slogan. Hence the rise of “AI-native PLM.”
The phrase suggests a clear boundary: either your PLM was born with AI in its DNA or it’s outdated. But just as with “true cloud,” the boundary is mostly rhetorical. I doubt that even one of the vendors calling themselves “AI-native” has re-architected its entire stack around probabilistic computation – and I doubt that is even possible or needed. Most have added AI-driven features on top of their existing engines – recommendation services, smart search, automated classification.
And that’s perfectly fine. In fact, it’s probably the most practical path forward. The trouble is that the slogan obscures what the software actually does. If we learned anything from the “true cloud” saga, it’s that buzzwords can create expectations that the underlying architecture cannot fulfill.
What AI Is Likely to Contribute
The real question isn’t whether a PLM system is AI-native; it’s what specific value and capabilities AI brings. Here’s where I see the biggest near-term impact:
Reasoning over structured and unstructured data.
Most engineering knowledge lives in documents – drawings, test reports, emails, regulatory notes. LLMs excel at extracting meaning from this messy corpus and linking it back to structured items in the BOM.
Automating classification and enrichment.
Anyone who has tried to normalize material properties, compliance flags, or supplier part numbers knows the pain. Machine-learning models can help by inferring missing attributes or spotting inconsistencies early in the process.
Conversational access to product memory.
Instead of navigating complex trees of part numbers and revisions, an engineer might simply ask, “Show me all fasteners approved for the new drone frame that meet EU REACH compliance.” AI can bridge human questions to deterministic data.
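One way to build that bridge is sketched below under stated assumptions: the parts schema, the whitelist of fields, and the JSON filter the model might return are all hypothetical. The important design choice is that the LLM only proposes a structured filter; the deterministic database validates and executes it.

```python
# Bridging a natural-language question to a deterministic query.
# Schema, field names, and the example LLM output are hypothetical.
import json
import sqlite3

ALLOWED_FIELDS = {"category", "project", "reach_compliant"}  # whitelist

def build_query(filter_json: str):
    """Turn an LLM-proposed JSON filter into a parameterized SQL query."""
    filters = json.loads(filter_json)
    clauses, params = [], []
    for field, value in filters.items():
        if field not in ALLOWED_FIELDS:      # never trust free-form model output
            raise ValueError(f"unexpected field: {field}")
        clauses.append(f"{field} = ?")
        params.append(value)
    sql = "SELECT part_number, revision FROM parts WHERE " + " AND ".join(clauses)
    return sql, params

# The filter an LLM might return for the fastener question above:
llm_filter = '{"category": "fastener", "project": "drone_frame", "reach_compliant": 1}'

sql, params = build_query(llm_filter)
conn = sqlite3.connect("plm.db")             # same hypothetical database as before
for row in conn.execute(sql, params):
    print(row)
```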
Agentic workflows across the lifecycle.
Imagine a digital agent that, upon seeing a design change, checks cost implications, suggests alternate suppliers, and alerts the compliance officer – without the engineer leaving their CAD environment. That’s the promise of agent-based PLM assistance.
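A minimal sketch of one such agent turn is below; every downstream service is a hypothetical stub, because the point is the orchestration pattern (gather context, propose, notify, never silently commit), not any particular API.

```python
# A toy agent reacting to a design change. All services are placeholders.
from dataclasses import dataclass

@dataclass
class DesignChange:
    part_number: str
    description: str

def estimate_cost_impact(change: DesignChange) -> float:
    return 1250.0                        # stub: would call a costing service

def suggest_alternate_suppliers(change: DesignChange) -> list:
    return ["Supplier A", "Supplier B"]  # stub: would query a sourcing service

def notify_compliance(change: DesignChange, note: str) -> None:
    print(f"[compliance] {change.part_number}: {note}")  # stub notification

def on_design_change(change: DesignChange) -> None:
    """One agent turn: gather context, then propose and alert."""
    cost = estimate_cost_impact(change)
    suppliers = suggest_alternate_suppliers(change)
    if cost > 1000:                      # illustrative threshold only
        notify_compliance(change, f"cost impact ~${cost:,.0f}, review needed")
    print(f"Proposed alternate suppliers for {change.part_number}: {suppliers}")

on_design_change(DesignChange("1234", "thicker bracket wall"))
```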
Each of these use cases enhances PLM’s usefulness without discarding its deterministic backbone.
LLMs are not the new “heart” of PLM; they are an additional data model in the polyglot architecture – alongside relational, graph, and search engines.
Their role is to provide semantic reasoning and probabilistic insight, not to replace the transactional structures that keep products traceable and auditable.
A Polyglot-Persistence, Layered Future
Looking ahead, I envision PLM stacks that deliberately blend these different computational worlds.
- Relational databases will remain critical for revisions, part histories, effectivity dates – anything that demands unambiguous records.
- Graph databases will handle product structures and the expanding digital thread that connects design, manufacturing, supply chain, and service data.
- Vector databases coupled with LLMs will add the semantic and probabilistic layer, enabling similarity search, contextual recommendations, and natural-language reasoning.
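To illustrate that semantic layer, here is a toy similarity search over hypothetical part-description embeddings; in practice the vectors would come from an embedding model and live in a dedicated vector database, not in a Python dictionary.

```python
# A toy vector-similarity search. Descriptions and embeddings are made up.
import numpy as np

catalog = {
    "M3x8 stainless screw": np.array([0.90, 0.10, 0.00]),
    "M3x10 titanium screw": np.array([0.80, 0.20, 0.10]),
    "Rubber grommet":       np.array([0.10, 0.90, 0.30]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embedding of the query "corrosion-resistant M3 screw"
query = np.array([0.85, 0.15, 0.05])

for name, vector in sorted(catalog.items(), key=lambda kv: cosine(query, kv[1]), reverse=True):
    print(f"{cosine(query, vector):.3f}  {name}")
```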
This blend is more than technical plumbing; it’s a cultural and process challenge. Companies will need to define governance frameworks – when can a probabilistic suggestion be trusted to make or propose a change? How do you audit a decision influenced by an LLM? How do you present probabilistic answers in a user interface designed for certainty?
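One possible governance pattern, sketched below with hypothetical thresholds and fields: a probabilistic suggestion is auto-applied only above a confidence bar, everything else is routed to a human, and every decision is written to an append-only audit log.

```python
# A sketch of confidence gating plus an audit trail for AI suggestions.
# The threshold, fields, and log format are illustrative assumptions.
import json
import time
from dataclasses import dataclass, asdict

AUTO_APPLY_THRESHOLD = 0.95  # would be policy-driven in a real deployment

@dataclass
class Suggestion:
    item_id: str
    field: str
    proposed_value: str
    confidence: float
    model: str

def handle(suggestion: Suggestion) -> str:
    decision = ("auto_applied" if suggestion.confidence >= AUTO_APPLY_THRESHOLD
                else "sent_for_review")
    record = {"timestamp": time.time(), "decision": decision, **asdict(suggestion)}
    with open("ai_audit_log.jsonl", "a") as log:  # append-only audit trail
        log.write(json.dumps(record) + "\n")
    return decision

print(handle(Suggestion("1234-B", "material", "AlSi10Mg", 0.87, "llm-v1")))
```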
The winners in this transition will not necessarily be the ones shouting “AI-native” the loudest, but those who orchestrate deterministic control with probabilistic intelligence in a trustworthy, explainable way.
Cutting Through the Buzz
Every technology wave begins with excitement, hype, and marketing creativity. Cloud, mobile, IoT, and now AI have all been introduced to the PLM vocabulary with the promise of revolution. But revolutions in enterprise software rarely come from slogans – they come from architectural and process shifts that solve real customer problems.
So when you hear “AI-native PLM,” pause and ask:
- What AI capabilities are actually in play?
- Which decisions or workflows become faster, safer, or more insightful because of them?
- How do these capabilities coexist with the core PLM data models and compliance requirements?
Those are the questions that matter far more than whether the product carries the latest label.
What Is My Conclusion?
Is AI a new transformative layer in PLM architectures that turns them into “AI-native” systems?
The history of PLM’s cloud journey reminds us that technology labels can mislead. True cloud mattered not because of the name but because multi-tenant SaaS changed upgrade cycles, collaboration, and cost models.
Likewise, AI matters not because we can declare a system AI-native but because probabilistic reasoning introduces a new layer of value. It can turn decades of static engineering records into living, context-aware knowledge; it can bridge structured and unstructured worlds; it can suggest, guide, and even negotiate. But it does all this on top of the deterministic foundation that keeps the lifecycle accountable.
AI will not make existing PLM architectures obsolete. Instead, it will redefine what PLM is valuable for – moving it from a purely transactional backbone to a knowledge-driven decision-support environment.
AI is not replacing the DNA of PLM – it’s adding a probabilistic layer that, if governed wisely, can finally make PLM work the way engineers always hoped it would.
This perspective comes from years of watching technology labels rise and fall in the PLM industry. If the past is any guide, the most meaningful progress will come not from slogans but from practical, layered architectures that respect the strengths of both deterministic and probabilistic computation.
Let’s make sure we don’t repeat the mistakes of the “true cloud” era. Instead of chasing the newest adjective, let’s ask how each innovation truly reshapes the value proposition for engineers, manufacturers, and everyone who depends on PLM to build better products.
In my next article, I want to dig into what AI-native PLM could (potentially) be and what next steps in PLM development trajectories can lead us toward AI-native PLM software.
Just my thoughts…
Best, Oleg