Every time the industry says “this is the big shift,” I become cautious.
I have lived through the last 30 years of platform transitions in CAD and PLM, long enough to know that not every technological wave rewrites the market. Some waves make noise. Some change deployment models. A few, however, reshape workflows, business models, and the winners’ list entirely. AI now feels like one of those moments.
But before we rush into prompt engineering and copilots for BOMs, it is worth stepping back and asking a more uncomfortable question:
What actually constitutes a real platform shift in engineering software? And what does it take to win one?
We Have Seen Platform Shifts Before
The CAD/PLM industry did not emerge fully formed. It evolved through structural transitions, each one tied to a broader computing platform change.
When computing power became affordable and accessible, parametric modeling emerged. PTC didn’t just add features; it leveraged processing capability to redefine how engineers thought about geometry. Models became associative, features became intelligent, and design intent became persistent. That was not incremental improvement. It was a new way of working.
When PCs and Windows matured, drafting moved from specialized hardware to general-purpose machines. Autodesk made drafting simpler on PCs. SolidWorks embraced Windows UX and made 3D modeling approachable to a broader engineering audience. Ease of use became a growth engine.
Later, enterprise data management became the battleground. Teamcenter and its predecessors institutionalized the system-of-record model. Windchill brought a web architecture into the PLM business. The promise was governance, traceability, and control. Engineering data was no longer just geometry; it became enterprise capital.
If you look closely, each successful shift had three ingredients. A platform change. A simplification of workflow. And a distribution strategy aligned with the moment.
That alignment is what made the difference.
Cloud Was Not the Shift We Thought It Was
When cloud computing and mobile platforms reshaped SaaS, many expected engineering software to follow the same pattern. It did not.
Yes, licensing models changed. Perpetual turned into subscription. Vendors moved infrastructure into hosted environments. The word “SaaS” became part of every pitch deck.
But if we are honest, most CAD still runs on desktops. Most PLM still behaves as an on-premise or hosted system of record. Engineering workflows did not fundamentally change because of the cloud. Look back at the PLM market 15 years ago, when Carl Bass made a “cloud” announcement from the stage of Autodesk University, and you will see that nothing changed in the big PLM world. Cloud altered monetization far more than it altered engineering practice.
That lesson matters, and I have been learning it for the last 15 years. Because it tells us something critical about cracking the future of PLM AI.
The next platform shift will not be won by deployment models. It will be won by those who redefine value creation and value realization.
Why AI Is the Next Real Platform Shift in PLM
AI is not another feature. It is not another search layer. It is not a chatbot attached to a document repository. It is a shift in how software thinks, acts, and delivers outcomes.
Every CAD and PLM vendor is currently trying to “figure out AI.” Some are experimenting with copilots. Some are building semantic search over engineering documents. Some are embedding generative summaries in change management workflows. I shared my thoughts about it in my article – AI in PLM: What Are Vendors Really Delivering, What Did I Learn, and Is the Hard Problem Still Unsolved? These are interesting experiments. But they are not yet platform shifts.
A more disruptive question is emerging beneath the surface:
If AI agents can reason over engineering data, orchestrate workflows, and even generate code, what happens to the traditional PLM interface? What happens to seat-based licensing? What happens to vendors if customers can increasingly “vibe code” their own lifecycle tools?
Part of me remembers AutoCAD and AutoLISP. Users extended their systems. They automated repetitive tasks. They built domain-specific tools on top of a platform. Vendors that embraced extensibility benefited; vendors that resisted often struggled. Entire businesses were built on top of the open and extensible AutoCAD platform, and the same happened with vendors that developed on top of SolidWorks and other open tools. AI agents could represent that moment again, but at a much larger scale.
So the real question is not “How do we add AI to PLM?” It is “How do we architect PLM so AI becomes a structural advantage rather than a thin wrapper?”
From where I stand, three macro forces will determine who cracks the AI code in PLM.
Proprietary Data Is the Only Sustainable Moat
In a world where foundation models are widely available, wrapping a generic LLM around your UI is not differentiation. It is table stakes. The only sustainable alpha is proprietary data.
I can see two possible plays here: the data that PLM systems control today, and the “wild” engineering data that no one truly owns because of the limitations of PDM and PLM tools. Let’s talk about both.
PLM systems sit on extraordinary lifecycle intelligence. BOM evolution across years. Engineering change history. Configuration baselines. Supplier performance. Compliance documentation. Quality escapes. Warranty signals. Field feedback loops.
And yet, most PLM systems treat this data as archived states rather than strategic intelligence.
What would it mean to treat lifecycle data as a predictive asset rather than a passive repository?
What would it mean to build lifecycle knowledge graphs that understand relationships across mechanical, electrical, and software artifacts? To detect patterns in change cycles? To forecast the downstream impact of a modification before it reaches production?
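To make this more concrete, here is a minimal sketch of what such a lifecycle knowledge graph could look like. It assumes a hypothetical schema where an edge means “a change to this artifact may propagate to that one”; the part numbers and the networkx-based traversal are purely illustrative, not a description of any vendor’s implementation.

```python
# A minimal sketch of a lifecycle knowledge graph. Nodes are engineering
# artifacts; a directed edge means "a change to the source may propagate
# to the target". All identifiers are invented for illustration.
import networkx as nx

g = nx.DiGraph()

# Cross-domain relationships: electrical, software, and mechanical artifacts.
g.add_edge("PCB-100", "CTRL-ASSY-10", relation="used_in")       # board -> assembly
g.add_edge("PCB-100", "FW-2.1", relation="requires_revalidation")  # board -> firmware
g.add_edge("CTRL-ASSY-10", "PUMP-UNIT-7", relation="used_in")   # assembly -> product
g.add_edge("HOUSING-55", "PUMP-UNIT-7", relation="used_in")

def downstream_impact(graph: nx.DiGraph, changed_item: str) -> set[str]:
    """Everything that could be affected if `changed_item` is modified."""
    return nx.descendants(graph, changed_item)

# Forecast the blast radius of a change before it reaches production.
print(downstream_impact(g, "PCB-100"))
# -> {'CTRL-ASSY-10', 'FW-2.1', 'PUMP-UNIT-7'}
```

Even this toy version hints at the value: the moment mechanical, electrical, and software artifacts live in one graph, impact analysis becomes a traversal rather than a manual investigation.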
Without deep ownership of proprietary lifecycle data, AI in PLM becomes shallow. A conversational interface over static documents. With proprietary data, AI becomes contextual, predictive, and enterprise-specific. It becomes embedded in how decisions are made.
This is not limited to PLM vendors. It applies to startups and incumbents alike. If you do not capture, normalize, and continuously enrich proprietary engineering data, your AI strategy will remain fragile.
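As a thought experiment, here is a minimal sketch of the “capture and normalize” step, assuming a hypothetical CSV export of a BOM spreadsheet and an invented canonical schema. Real pipelines are far messier, but the principle is the same: map wild data onto one structure before any AI can reason over it.

```python
# A minimal sketch of normalizing "wild" BOM data from a spreadsheet export.
# The file name, column aliases, and canonical schema are all assumptions
# made for this example.
import csv
from dataclasses import dataclass

@dataclass
class BomLine:
    part_number: str
    description: str
    quantity: float

# Map messy, vendor-specific spreadsheet headers onto one canonical schema.
HEADER_ALIASES = {
    "part_number": {"Part Number", "PN", "Item No."},
    "description": {"Description", "Desc", "Title"},
    "quantity": {"Qty", "Quantity", "QTY."},
}

def canonical_key(raw_header: str) -> str | None:
    for key, aliases in HEADER_ALIASES.items():
        if raw_header.strip() in aliases:
            return key
    return None

def load_bom(path: str) -> list[BomLine]:
    lines = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            record = {}
            for raw, value in row.items():
                key = canonical_key(raw)
                if key and value:
                    record[key] = value.strip()
            lines.append(BomLine(
                part_number=record.get("part_number", ""),
                description=record.get("description", ""),
                quantity=float(record.get("quantity", "0") or 0),
            ))
    return lines
```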
Here is another uncomfortable set of questions: How much CAD data is not controlled by PDM systems today? How many SMB companies keep their CAD files in Google Drive, Dropbox, or simply on network drives? How many BOMs are managed using Excel and other spreadsheets or legacy databases? How many change requests live in spreadsheets? How many contractor and supplier RFQs and related data are unmanaged or exchanged through Google Drive or PDFs?
Think about how much data owned by SMB and SME engineering and manufacturing teams is completely uncontrolled. And even more importantly, how much data that is controlled by PLM gets exported to Excel or other derivative files just to be shared with users, because the love for PLM is practically nonexistent outside engineering organizations?
This is not legacy. This is lost intelligence and workflow efficiency.
So, the real moat is not the model. It is the data gravity.
How AI Undermines Traditional PLM Revenue Models
There is another uncomfortable reality that AI introduces. Most PLM vendors quietly adopted SaaS subscription models over the past decade. But fundamentally, they retained seat-based licensing logic. Users multiplied. Revenue expanded.
AI agents complicate that dynamic.
If AI can automate tasks that previously required human interaction, the number of seats does not necessarily grow. In some scenarios, it might even shrink. Fewer clicks. Fewer manual interventions. More automation.
If value increases while seat count remains stable, the traditional SaaS equation begins to crack. So what replaces it? Check my article from yesterday, From SaaS to Pay-Per-Data: AI Is Rewriting PLM Economics.
I believe we will see two experiments emerge.
First, outcome-based value alignment. Instead of charging for access to a system, vendors tie pricing to measurable improvements. Reduced change cycle time. Fewer compliance escapes. Lower supplier risk exposure. Faster product releases.
Second, data consumption or credit models. Customers purchase AI capacity. Predictive simulations consume credits. Impact analyses consume credits. Autonomous change validations consume credits.
In both cases, pricing shifts from access to data and intelligence.
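To illustrate the second experiment, here is a minimal sketch of a credit-consumption model. The operation names and credit costs are assumptions I made up for the example; they do not reflect any vendor’s actual pricing.

```python
# A minimal sketch of a credit-consumption model for AI operations.
# Operation names and costs are illustrative assumptions only.
OPERATION_COSTS = {
    "impact_analysis": 5,
    "predictive_simulation": 20,
    "autonomous_change_validation": 8,
}

class CreditMeter:
    def __init__(self, balance: int):
        self.balance = balance

    def charge(self, operation: str) -> bool:
        """Deduct credits for an operation; refuse if the balance is too low."""
        cost = OPERATION_COSTS[operation]
        if cost > self.balance:
            return False
        self.balance -= cost
        return True

meter = CreditMeter(balance=100)
meter.charge("predictive_simulation")   # True, 80 credits remain
meter.charge("impact_analysis")         # True, 75 credits remain
```

The design choice worth noticing: revenue now scales with how much intelligence the system produces, not with how many humans log in.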
This transition will not be easy. It challenges long-standing revenue assumptions. But ignoring it is risky. AI agents undermine the logic of per-seat expansion. Vendors who rethink monetization early will be better positioned.
The question is not whether revenue models will change. It is who will have the courage to experiment first.
Why Click-to-Aha Speed Defines PLM UX in 2026
There is a simple truth we rarely acknowledge. No one wants to buy PDM, BOM management, or PLM software. They want to solve a problem.
Today, the ratio of effort to visible value in PLM is often extreme. Hundreds, sometimes thousands of clicks before the return becomes obvious. Complex implementations. Long deployment cycles. Heavy configuration layers.
AI introduces an opportunity to radically compress that path.
Imagine change impact analysis performed on CAD data before engineers even enter a new design stage. Imagine drawing reviews done automatically. Imagine BOM mistakes found earlier in the process, before they ever reach the procurement teams. Imagine a change request automatically evaluated against historical patterns and supplier performance before it even reaches approval. That alone could dramatically change the entire ECR/ECO/ECN model and the way the CCB (Change Control Board) works. Imagine BOM compliance risks surfaced contextually, not discovered weeks later. Imagine lifecycle impact analysis presented in a clear, narrative explanation rather than buried in multiple modules.
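As a sketch of what “evaluated against historical patterns” could mean in practice, here is a deliberately simple risk heuristic over hypothetical change records. The record shape and the scoring rule are my assumptions for illustration only; a real system would learn far richer patterns.

```python
# A minimal sketch of pre-screening an ECR against historical change records.
# The record shape and the risk heuristic are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class HistoricalChange:
    part_number: str
    caused_rework: bool
    supplier_late: bool

def risk_score(part_number: str, history: list[HistoricalChange]) -> float:
    """Fraction of past changes on this part that caused rework or supplier delays."""
    relevant = [h for h in history if h.part_number == part_number]
    if not relevant:
        return 0.0
    bad = sum(1 for h in relevant if h.caused_rework or h.supplier_late)
    return bad / len(relevant)

history = [
    HistoricalChange("PCB-100", caused_rework=True, supplier_late=False),
    HistoricalChange("PCB-100", caused_rework=False, supplier_late=True),
    HistoricalChange("HOUSING-55", caused_rework=False, supplier_late=False),
]

# Surface the risk before the ECR ever reaches the CCB for approval.
print(risk_score("PCB-100", history))   # -> 1.0
```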
The vendor who reduces the click-to-aha ratio from thousands to single digits will redefine user expectations.
And here is the deeper implication: speed to insight is not just a UX improvement. It is a strategic moat. Once customers experience immediate lifecycle intelligence, reverting to slow, manual workflows becomes unacceptable.
So I ask vendors directly: are you optimizing for feature completeness, or for time to clarity?
Because the next generation of PLM buyers will reward the latter.
Distribution Will Decide the AI PLM Winner
History offers another lesson – platform shifts are never won by technology alone.
Autodesk’s rise was tied to PC democratization and channel distribution. SolidWorks scaled through Windows adoption and VAR networks. Teamcenter provided stable, predictable governance for enterprise companies. Aras offered a free platform with predictable upgrades via a subscription model. Several PDM/PLM platforms managed to grow through their attachment to loyal CAD customer bases.
Technology mattered. But distribution and go-to-market alignment amplified the effect. I think AI will follow the same pattern.
Today, enterprise PLM remains entrenched in large organizations with heavy governance requirements. At the same time, SMBs and mid-market manufacturers often rely on Excel, shared drives, and general-purpose tools to manage product data.
AI creates two distinct opportunities.
In enterprise contexts, proprietary data combined with rapid insight generation can embed AI deeply into existing lifecycle gravity. Vendors that leverage existing data repositories and reduce friction can expand influence without requiring wholesale replacement.
In SMB contexts, AI-first lifecycle tools could leapfrog legacy PDM entirely. If you can deliver clarity in minutes rather than months, you can replace spreadsheets and ad hoc processes without massive IT involvement.
But neither path succeeds without thoughtful distribution strategy. Who is your buyer? Who experiences the aha moment? Who champions the shift internally?
Everyone is trying to crack the PLM AI code right now. But few are asking the distribution question seriously enough. Who controls the data? Who owns the relationship? Who experiences value first?
That alignment will determine the winner more than any AI demo.
What is my conclusion and what does this mean?
If AI is truly the next platform shift for CAD and PLM, then the conversation must move beyond chat interfaces and marketing slogans.
The real levers are structural. Here are the three things I would lead with:
- Own proprietary data and turn it into automated workflows and predictive intelligence.
- Rethink revenue models and offer a path to grow without seat expansion logic.
- Collapse the click-to-aha ratio so value becomes visible immediately.
And one more – you must align AI architecture with distribution and GTM strategy rather than treating it as a feature add-on.
I do not believe the AI opportunity in PLM is limited to one vendor or one architecture. But I do believe that shallow implementations will quickly commoditize. Deep, data-centric, workflow-embedded intelligence will not.
Here is the most important question I can think of.
Are you building AI on top of a system of record (so-called PLM intelligence)? Or are you transforming your proprietary data into a system of impact?
Those who answer that question thoughtfully over the next few years will not just “add AI.” They will redefine the next generation of PDM and PLM platforms.
And as history shows, platform shifts do not wait for consensus. They reward those who move with architectural clarity. The rest become dinosaurs. It happened to a few vendors in the past.
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights.
