It is Sunday morning and I’m taking a flight to Houston to attend 3DEXPERIENCE World 2026, which takes place over the next few days at the George R. Brown Convention Center in Houston, Texas. The event that was once known as SolidWorks World created one of the most impressive communities of mechanical engineers around the now iconic SolidWorks CAD, which changed the way engineers design products using the Windows operating system. Nowadays, the event, organized by Dassault Systemes, presents itself as a community focused on accelerating innovation through AI, generative design, and the 3DEXPERIENCE platform, featuring hands-on training and expert sessions. For me, it is still one of the best places to meet former colleagues, customers, and friends of many years.
Yesterday, I listened to a long conversation with Marc Andreessen about AI and why he believes the real AI boom hasn’t even started yet. You can check the video here; it comes from Lenny’s blog, one of the best sources of product management information I have found these days.
What struck me wasn’t any single prediction or technical detail, but the way he framed the last fifty years of technology and the imbalance that quietly formed along the way.
His argument isn’t really about models, startups, or hype cycles. It’s about where intelligence accumulated, and where it didn’t. Software raced ahead because computation, iteration, and coordination became cheap. The physical world of engineering, manufacturing, and infrastructure moved far more slowly, not because people lacked ideas, but because reasoning in those domains remained expensive, fragile, and difficult to scale.
As I listened, I kept mapping his points back to engineering software and product lifecycle management. Not as a question of where to add AI features, but as a more structural reflection: what kinds of systems did we build when reasoning was scarce, and what kinds of systems become possible when reasoning becomes abundant? In some ways, it was a continuation of my earlier blog – The Changing Role of PLM Consulting and Blogging in the Age of Cheap Intelligence.
PLM sits uncomfortably close to the center of that question. It was created to manage complexity, but it evolved in a world where complexity was frozen, controlled, and carefully rationed. AI challenges that assumption, not by automating workflows, but by changing the cost of thinking itself. As Marc noted in the conversation, Isaac Newton, famously known for his work in physics and calculus, was deeply involved in alchemy for decades, actively seeking a method to turn base metals like lead into gold. While his attempts, and other historical ones, failed to produce gold, we can now see how modern AI tools are, in a sense, converting sand (silicon) into thinking.

To understand why this matters, it helps to step back and look at how computing shaped the systems we rely on today.
A Short History of Computing, Processes, and Tools
A long time ago, processes were defined by documents. Handwritten and drafted, documents were the mechanism for getting work done. Typewriters became the first mechanical automation tool, and not everyone used them. I remember the time when you had to request that a document be typed.
Computers stepped into this process and democratized typing. Now everyone could type a document and send it to someone by email.
Engineering computing began as a tool for calculation. Early machines replaced human arithmetic with speed and precision, transforming science, defense, and finance. They did not change how decisions were made; they simply allowed calculations to be performed faster.
Personal computers expanded this capability into everyday work. Documents became digital. Drawings moved from paper to screens. Information could be stored, copied, and edited more easily, but computers still functioned primarily as systems of representation. They displayed information and enforced basic constraints, but they did not reason about outcomes.
The internet shifted the center of gravity again. Coordination became cheap. Information moved instantly across organizations and continents. Software development accelerated because iteration became low-risk and reversible. Mistakes could be corrected with updates instead of recalls.
Cloud computing completed this progression. Execution at scale became routine. Storage felt infinite. Computers became elastic. Entire industries reorganized around the assumption that software could change continuously.
Across all these waves, software productivity compounded. The cost of iteration dropped. Feedback loops shortened. Learning accelerated.
The physical world did not follow the same trajectory.
Engineering, manufacturing, and infrastructure remained constrained by physics, materials, certification, supply chains, and long lead times. Decisions made early were difficult to revisit. Errors were expensive. As a result, physical systems evolved slowly — not because ideas were lacking, but because reasoning in these domains remained scarce and brittle.
We digitized artifacts, but not reasoning. We automated execution, but not judgment. That gap shaped the systems we still use today.
Why PLM Emerged the Way It Did
PDM and PLM emerged in an era when computation was scarce, storage was expensive, and reasoning was limited. Their primary focus was controlling and managing CAD files and the associated product data.
This design choice was rational. As products became more complex, organizations needed a way to ensure traceability, manage revisions, and prevent costly mistakes. Document history, access control, and release processes were safeguards, not accidents.
A clear separation emerged. CAD systems became the space for creativity and innovation. PDM and PLM systems became the space for governance and control.
The underlying assumption was that design creativity belonged outside the PDM/PLM environment, while PLM was intentionally constrained to revision management. Once documents were released, they became history, and revisiting decisions was considered complex and risky.
Given the tools available at the time, this was a sensible compromise.
The Structural Limits of Traditional PDM and PLM
Over time, the limitations of this approach became more visible.
Traditional PLM software systems optimized for document control and process enforcement, not for understanding intent or preserving decision context. Product structures were frozen early, not because they were fully understood, but because change became increasingly expensive.
Decisions were captured indirectly as document snapshots. The reasoning behind those decisions — alternatives considered, tradeoffs evaluated, assumptions made — was rarely recorded in a structured way. Conversations happened in meetings, emails, and hallway discussions, then disappeared.
As products moved from engineering to manufacturing, and later to service, context was routinely lost. Each lifecycle stage inherited artifacts without fully understanding why they looked the way they did. Change processes existed to control disruption, not to support exploration.
PLM became very good at remembering what was released. But at the same time, it struggled to remember why those decisions were made.
A Useful Provocation About AI
Andreessen’s provocation is not that AI automates engineering work, but that it makes reasoning itself abundant. For the first time, machines can operate inside constrained, verifiable domains — mathematics, physics, engineering tradeoffs — at a scale that was previously impossible.
Earlier computing waves improved execution. AI alters cognition.
When reasoning becomes cheap, the bottleneck in engineering shifts. The constraint is no longer the ability to calculate or store information, but the ability to frame decisions, explore alternatives, and preserve context over time.
That shift has consequences far beyond individual tools.
Recomposition of Roles — Individuals and the Lifecycle
Discussions about AI often focus on the recomposition of individual roles. Engineers take on tasks that once required specialists. Designers code. Product managers prototype.
But the same recomposition is happening at a larger scale, across the entire product lifecycle.
For decades, engineering, manufacturing, and service operated as deliberately siloed functions. Engineering defined the product and threw it “over the wall” to manufacturing. Manufacturing figured out how to build it. Service dealt with the consequences. Information flowed between these domains through formal handoffs, usually as frozen documents.
That separation is increasingly untenable.
Cost, quality, reliability, sustainability, cybersecurity, and serviceability can no longer be optimized independently. They emerge from tradeoffs that span the full lifecycle. Meaningful decisions now require visibility across domains that were previously isolated.
At the same time, engineering itself has changed. Mechanical, electronics, and software disciplines are no longer loosely coupled. Modern products are systems, not assemblies. Decisions in one domain immediately affect others.
AI accelerates this recomposition by lowering the cost of cross-domain reasoning. What was once rare and specialist-driven becomes mandatory.
Recomposition of Product Structures
As roles recombine, product structures must follow.
Historically, product structures mirrored organizational silos. EBOM described design intent. MBOM reinterpreted that design for production. Service BOMs were reconstructed later. SBOMs emerged largely for compliance.
Each structure existed independently, optimized for a specific phase and audience (and often managed in a separate system). Transitions between them were manual and lossy.
In a connected environment, this fragmentation becomes a liability.
EBOM, MBOM, SBOM, and Service BOM are not alternatives. They are perspectives on the same evolving system. Intelligence emerges from the relationships between them, not from any single representation.
This is the foundation of xBOM thinking: not a new BOM type, but a recognition that modern products require multiple, connected structures that coexist and inform one another.
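To make the idea concrete, here is a minimal Python sketch of that thinking. It is purely illustrative (the class names, view names, and item numbers are hypothetical, not any vendor’s data model): a single set of items, with EBOM, MBOM, and Service BOM expressed as separate sets of relationships over the same items, so cross-structure questions like “where is this part used?” can be answered across views instead of inside one frozen structure.

```python
# Hypothetical sketch of xBOM thinking: one item catalog, multiple connected views.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass(frozen=True)
class Item:
    item_id: str
    description: str

@dataclass
class XBOM:
    # view name ("EBOM", "MBOM", "ServiceBOM") -> parent item -> list of (child, qty)
    views: dict = field(default_factory=lambda: defaultdict(lambda: defaultdict(list)))

    def add_link(self, view: str, parent: Item, child: Item, qty: int = 1):
        self.views[view][parent].append((child, qty))

    def where_used(self, item: Item):
        """Find every view and parent that references an item - the kind of
        cross-structure question a single frozen BOM cannot answer."""
        hits = []
        for view, parents in self.views.items():
            for parent, children in parents.items():
                if any(child == item for child, _ in children):
                    hits.append((view, parent))
        return hits

# Usage: the same motor appears in design, manufacturing, and service structures.
motor = Item("ITM-100", "DC motor")
drive_asm = Item("ASM-10", "Drive assembly (design)")
line_kit = Item("KIT-10", "Assembly kit (manufacturing)")
spare_kit = Item("SVC-10", "Field spare kit (service)")

xbom = XBOM()
xbom.add_link("EBOM", drive_asm, motor)
xbom.add_link("MBOM", line_kit, motor, qty=2)
xbom.add_link("ServiceBOM", spare_kit, motor)

print(xbom.where_used(motor))  # one item, three connected perspectives
```

The point of the sketch is not the code itself, but the shift it represents: the views are cheap projections over shared, connected data, rather than separate documents that must be reconciled by hand.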
How and Why PLM Becomes the Landing Zone for AI
PLM becomes the natural landing zone for AI not because it owns data, but because it owns change.
Every significant decision in engineering and manufacturing eventually surfaces as a change — a design modification, a supplier substitution, a software update, a manufacturing adjustment. These are intersections of intent, constraints, risk, and consequence.
Traditionally, change management was treated as a control problem. Engineering Change Orders (ECO) enforced discipline: propose a change, route it for approval, lock the result.
In a recomposed world, this model breaks down.
A single change now touches multiple product views simultaneously. In this context, an ECO is no longer a document or a workflow. It is a decision graph.
AI enables exploration before commitment. Alternatives can be evaluated. Impacts across EBOM, MBOM, SBOM, and Service BOM can be analyzed in parallel. Tradeoffs become explicit.
For this to work, PLM must preserve not only what was approved, but why it was chosen and what was rejected. That information becomes product memory.
This is where AI belongs: inside the change process, tracing dependencies, surfacing conflicts, and preserving reasoning across time.
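As an equally hypothetical sketch, with the caveat that the names and fields are illustrative rather than any specific PLM product’s schema, here is what an ECO captured as a decision record might look like: it keeps the intent, the alternatives that were evaluated, the reason one was chosen, and which BOM views the change touches, so the reasoning survives as product memory instead of disappearing after approval.

```python
# Hypothetical sketch of an ECO as a decision record (product memory), not just a document.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Alternative:
    summary: str
    impacts: dict                       # e.g. {"EBOM": "...", "MBOM": "...", "ServiceBOM": "..."}
    rejected_because: str | None = None

@dataclass
class ChangeDecision:
    eco_id: str
    intent: str                         # why the change was raised in the first place
    alternatives: list[Alternative] = field(default_factory=list)
    chosen: Alternative | None = None
    rationale: str = ""
    decided_on: date | None = None

    def decide(self, choice: Alternative, rationale: str):
        self.chosen = choice
        self.rationale = rationale
        self.decided_on = date.today()

# Usage: a supplier substitution evaluated against two options.
eco = ChangeDecision(
    eco_id="ECO-2042",
    intent="Current motor supplier cannot meet lead times for Q3 production.",
)
keep = Alternative("Keep current supplier, accept an 8-week delay",
                   {"EBOM": "no change", "MBOM": "schedule slip"},
                   rejected_because="Delay breaks customer commitments.")
swap = Alternative("Substitute an equivalent motor from a second supplier",
                   {"EBOM": "new part number", "MBOM": "new routing step",
                    "ServiceBOM": "dual spare parts during transition"})
eco.alternatives += [keep, swap]
eco.decide(swap, "Meets schedule; service impact is temporary and documented.")
```

A structure like this is what gives AI something to reason over: the rejected alternative and its rationale are as valuable to the next decision as the approved result.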
What is my conclusion?
We are going to see a transformation of PLM software and system thinking. Storage is getting commoditized. SaaS started the process of convergence, and this is where modern CAD, PDM, and PLM software is struggling. SolidWorks and 3DEXPERIENCE are the best examples of this struggle. Check my previous blog – SOLIDWORKS, PLM, AI, and Déjà Vu: What I’m Watching at 3DEXPERIENCE World 2026 in Houston.
The biggest challenge is the transformation from “file/desktop” thinking to the way “connected” cloud systems will perform. SolidWorks is a massive ecosystem with desktop DNA, and it still struggles to convert to data platform thinking. And this is what makes it a unique lens for understanding the AI boom ahead of us.
The moment we break through this barrier of capturing data and connecting people, we also unlock future productivity and design intelligence. From system of record to system of understanding (and decision). Seen this way, the future of PLM is not about smarter workflows or faster approvals. It is about continuity of reasoning.
PLM can evolve from a system of record into a system of understanding and decision — preserving intent, alternatives, and decision context as product memory across the lifecycle.
AI does not replace human judgment here. It participates in it.
The real AI boom will not be defined by better answers to isolated questions, but by whether we allow reasoning to follow products across time instead of disappearing at every handoff.
That is not a technology inevitability. It is a design choice. And PLM sits at the center of that choice.
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights.
