Here is a small signal that exposed a larger problem. If you’ve been living under a rock for the last 10 days, Wall Street just lost $285 billion because of 13 markdown files. For the last week, I’ve read tons of analysis about this event. I can also see how my friends and colleagues attributed the stock decline of some leading CAD and PLM vendors to this event. I think the jury is still out. Here are a few articles I can recommend – Death of Software… Nah and Something Big Is Happening. You can search for everything else, but don’t miss those two, in my honest opinion.
Every technology transition has a moment when something small makes a much larger structural change visible. The recent discussion around the “SaaSpocalypse” and AI-driven repricing of enterprise software was one of those moments. A relatively simple AI workflow demonstrated that work previously tied to expensive tools and specialized interfaces could be reproduced with far less human interaction. Markets reacted quickly, but the real story was not about AI replacing software. It was about AI exposing assumptions that have shaped enterprise software economics for decades.
This matters directly for the future of PLM.
For more than twenty years, enterprise software has been priced and designed around human interaction. Users log in, navigate screens, execute workflows, and software vendors charge per seat. The emergence of AI in PLM and across enterprise software challenges this model not because software becomes unnecessary, but because humans are no longer the bottleneck in many forms of knowledge work.
If enterprise SaaS is being repriced around this idea, product lifecycle management AI is not an exception. The same forces that drive enterprise software AI transformation are beginning to reshape PLM architecture, engineering data management, and the way product knowledge is organized.
The SaaSpocalypse Misunderstanding
The dominant narrative around SaaSpocalypse suggests that AI threatens software itself. That interpretation misses the point. Software is not disappearing. Data systems, domain logic, and enterprise accountability remain essential. In many cases, AI increases dependence on them.
What is changing is the delivery model.
AI compresses the cost of interaction. Tasks that previously required engineers navigating complex interfaces can now be prepared, analyzed, or executed by AI agents operating directly on data. When interaction becomes cheaper, pricing models tied to interaction become fragile.
The same pattern appears in PLM transformation. Historically, PLM systems have monetized human interaction with product data. Engineers search for information, compare revisions, and route changes through workflows designed around people moving tasks between one another. The growing AI impact on engineering workflows challenges this assumption.
The question is not whether PLM survives. The question is how AI changes product lifecycle management and how PLM architecture evolves when interaction is no longer the primary source of value.
What Is Defensible in CAD and PLM
Before discussing change, it is important to understand what does not change.
PLM systems exist because complex products require structured knowledge. This includes CAD design and engineering knowledge, product structures and relationships (e.g., xBOM data), configuration logic, and the structured history of decisions that explains how a product evolves over time. These elements represent accumulated organizational knowledge that cannot be easily recreated.
This is the defensible core of PLM.
The defensible part of PLM is not its screens or workflows, but the structured memory of the product itself. The product memory concept in PLM describes how CAD models encode design intent, BOM structures encode relationships, and change history preserves reasoning and accountability. Together, they form a persistent memory that allows companies to understand what they built, why they built it, and how changes propagate across engineering and manufacturing.
AI strengthens this layer rather than weakening it. As AI workflow automation in engineering increases, organizations need stronger traceability and context. When AI agents assist engineering decisions, the need for an engineering data model as a system of record becomes even more critical. The digital thread becomes not just a concept, but a requirement for safe automation.
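The idea of product memory can be sketched as a minimal data model. To be clear, the names, fields, and structures below are my illustrative assumptions, not the schema of any specific PLM system:

```python
from dataclasses import dataclass, field

# A toy sketch of "product memory": items, revisions, and a change
# history that preserves the reasoning behind each change.

@dataclass
class Revision:
    rev: str              # e.g. "A", "B"
    description: str      # what changed
    rationale: str        # why it changed -- the preserved decision context

@dataclass
class Item:
    part_number: str
    revisions: list = field(default_factory=list)
    children: list = field(default_factory=list)  # BOM structure

    def release(self, rev, description, rationale):
        # Every release carries its reasoning, so an AI agent (or a human)
        # can later query *why* a change was made, not just *what* changed.
        self.revisions.append(Revision(rev, description, rationale))

bracket = Item("BRK-100")
bracket.release("A", "Initial release", "New design per project kickoff")
bracket.release("B", "Increased wall thickness", "Field failures under vibration load")

# The structured history is the defensible asset:
print([(r.rev, r.rationale) for r in bracket.revisions])
```

The point of the sketch is that rationale lives next to the revision itself; that is what makes the history usable context for automation rather than a passive log.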
The Weak Layer — UI as the Product
Where PLM becomes vulnerable is not in its data, but in how that data is accessed.
A large portion of PLM activity today revolves around user interfaces. Engineers search through screens, navigate hierarchies, fill forms, and manually assemble context before making decisions. Implementation and services work frequently focus on configuring these interfaces to match organizational roles and processes. Much of traditional PLM architecture assumes that work happens through screens.
This model assumes humans must perform the interaction.
AI challenges that assumption directly. AI agents in engineering workflows can retrieve data, summarize impact, compare structures, and prepare change context without navigating predefined interfaces. Generative and adaptive interfaces begin replacing static UI configurations. Interaction becomes dynamic rather than designed in advance.
This is where the economics change. When interaction becomes inexpensive, the value of building and maintaining complex UI structures declines. A significant portion of PLM services tied to UI configuration and engineering workflow automation around human interaction becomes difficult to justify.
UI does not disappear, but it stops being the center of the system.
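Here is a toy illustration of interaction moving from screens to data: an agent assembles the change context a reviewer needs before a review starts, directly from structured BOM links. The data shapes and function names are assumptions for the sketch, not a real API:

```python
# Toy sketch: an agent assembles change context directly from structured
# data, instead of a human navigating screens to collect the same facts.

bom_links = {
    # child part -> parent assemblies that use it ("where used")
    "BRK-100": ["ASM-500", "ASM-610"],
    "BLT-220": ["ASM-500"],
}

def prepare_change_context(part_number):
    """Gather what a reviewer needs before an ECO review begins."""
    parents = bom_links.get(part_number, [])
    return {
        "part": part_number,
        "where_used": parents,
        "impact_count": len(parents),
        "ready_for_review": len(parents) > 0,
    }

context = prepare_change_context("BRK-100")
print(context["impact_count"])  # -> 2
```

Nothing in this preparation step requires a screen; the human shows up only to judge the assembled context. That is the economic shift in miniature.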
The Weak Layer — Human-Oriented Workflows
The second fragile layer is workflow design.
Traditional PLM workflows are human-oriented. Changes are submitted, routed, reviewed, approved, and released through a sequence of steps designed primarily to coordinate people. Many workflows exist to compensate for human limitations rather than product logic.
AI changes the economics of coordination.
Validation can become continuous instead of sequential. Change packages can be prepared automatically. Impact analysis and completeness checks can be executed before a human review begins. The role of humans shifts toward supervision and judgment rather than routing work.
Workflow engines themselves remain important. Products still move through lifecycle states. Policies still need enforcement. But workflow changes from task routing to governance of automated actions. This reflects a broader transition from human-routed PLM workflows to AI agents, where systems increasingly orchestrate automated preparation rather than human movement.
The center of gravity moves from process orchestration to policy-driven execution.
From UI-First PLM to Agent-First Product Work
These changes lead to a structural transition in the future of PLM architecture.
In traditional PLM systems, humans operate the system through interfaces. Workflows represent the movement of tasks between people. The system organizes interaction.
In emerging models, agents operate on product data under defined constraints. Policies govern actions. Humans supervise outcomes instead of executing every step. This is the beginning of AI-native PLM, where systems are organized around data and action rather than navigation.
Adding AI features to an existing UI does not address this shift. Enterprise software AI transformation requires architectural change. The system evolves from something engineers operate to something that assists and prepares work continuously. Think about the missing human layer in data (check my article – Missing Human Layer in BOMing).
The difference is subtle but profound: systems move from being operated to being supervised.
The AI-Native Product Workspace
This transition leads to what can be described as an AI-native product workspace.
In this model, product memory becomes the center of the system. Around it exists a collaborative workspace layer where context, discussions, and decisions are captured. AI agents perform preparation, analysis, and execution within defined constraints. Humans provide supervision, judgment, and accountability.
The AI-native workspace is not simply a new interface. It represents a shift in how product lifecycle management AI organizes engineering work. Data becomes the primary interface. Interaction emerges from context rather than predefined screens. Decisions and their rationale become part of the digital thread and product memory itself.
This aligns naturally with the evolution of engineering collaboration platforms and the broader product data management future, where systems maintain continuous product understanding rather than static records.
What Gets Repriced in the PLM Economy
If interaction and coordination become cheaper, parts of the PLM ecosystem inevitably change.
Activities centered around UI customization, workflow routing, and training users to navigate complexity begin to shrink. Large portions of implementation work that exist primarily to align human interaction with software structure lose economic justification as AI workflow automation in engineering improves.
At the same time, other areas grow in importance. Product data integrity becomes critical. Governance and accountability increase in value as automation expands. CAD BOM integration and cross-system connectivity become more important because AI agents rely on consistent and connected data sources. Traceable automation becomes a differentiator in manufacturing digital transformation initiatives.
The value shifts from managing interaction to preserving and governing knowledge.
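Traceable automation, in its simplest form, is an append-only audit trail: every automated action is recorded with actor, target, and rationale. The field names below are my illustrative assumptions, not any product's schema:

```python
import time

# Sketch of traceable automation: every action, whether by an agent or a
# human, is appended to an audit trail with actor, target, and rationale.

audit_trail = []

def record(actor, action, target, rationale):
    entry = {
        "actor": actor,          # e.g. "agent:impact-analyzer" or "human:reviewer"
        "action": action,
        "target": target,
        "rationale": rationale,
        "ts": time.time(),
    }
    audit_trail.append(entry)
    return entry

record("agent:impact-analyzer", "prepared_change_package", "ECO-042",
       "All affected items identified; completeness check passed")
record("human:reviewer", "approved", "ECO-042",
       "Impact summary reviewed; risk acceptable")

# The trail answers who did what, and why -- the basis of accountability.
print(len(audit_trail))  # -> 2
```

When agents and humans share one trail, accountability survives automation: you can always reconstruct which decisions were prepared by machines and which were made by people.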
What This Means for Vendors and Buyers
For PLM vendors, the implication is clear. Value moves away from owning the interface toward owning trusted product knowledge and enabling safe execution. AI-native PLM systems must prioritize data continuity, governance, and adaptability rather than interface complexity.
For buyers, evaluation criteria change as well. The question becomes less about how many users the system supports and more about how effectively the system maintains product context, supports engineering collaboration, and enables automation without losing traceability.
The systems that succeed will be those that allow organizations to evolve workflows without rebuilding product memory.
What Is My Conclusion?
Despite the dramatic title, this is not an end-of-industry story. PLMarmageddon isn’t the end of PLM. PLM is not disappearing.
The need to manage complex engineering data and product knowledge continues to grow as products become more complex. What is changing is the operational layer built around human interaction and coordination.
The systems that survive this transition will be those that recognize where their real value lies. Not in screens. Not in workflows. But in preserving the structured memory of the product and enabling that memory to drive action safely across the digital thread.
The industry is moving from systems humans operate to systems humans supervise. The SaaSpocalypse exposed a pricing problem in enterprise software. PLMarmageddon, if it comes at all, will simply be the moment PLM completes its transformation from managing interaction to managing knowledge.
And that transformation defines the real future of PLM.
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights.
