A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

PLM’s OpenClaw Moment: How AI Agents Will Break Closed Systems

Oleg
22 March, 2026 | 13 min read

Why the future of PLM depends on making product data accessible, understandable, and usable for AI agents

A few weeks ago, I published the article PLM 2026: How to crack AI code? The speed of change in AI these days is amazing. Today, I want to talk about a topic that has surrounded PLM development for a long time: data openness.

A Signal from Outside PLM

A few weeks ago, something caught my attention that had nothing to do with PLM on the surface. A new wave of tools started circulating in the AI community, the most talked-about being something called OpenClaw, an open-source AI agent (personal assistant) created by developer Peter Steinberger that runs locally on your own hardware and connects to messaging apps like WhatsApp or Slack, giving users a single agent capable of operating autonomously across multiple applications, services, and data sources. It went from a weekend prototype to over 200,000 GitHub stars in a matter of months, which tells you something about the nerve it struck. If you skimmed the headline, you might have filed it away as just another experiment in agentic tooling, interesting but distant from the world of product data management.

I want to argue that this reading misses the point entirely.

What OpenClaw and tools like it actually reveal is a deepening tension between how enterprise systems have been architected and how people increasingly expect to work. And for those of us who live in the PLM and manufacturing software world, this tension is not theoretical. It is coming for us, and it is coming faster than most organizations realize.

The real question OpenClaw surfaces is not about a specific tool. It is about whether product data, the kind that lives in your PLM system, your BOMs, your CAD vaults, your engineering change workflows, can actually participate in an AI-driven world. Or whether it will remain locked inside PLM systems that were designed in the 1990s for a completely different era.

The Illusion of Open PLM Systems

Here is something that rarely gets said plainly: PLM systems were never designed to be open in the way we need them to be today.

That statement requires some unpacking, because PLM vendors have been talking about openness for years. Ask any PLM vendor and they will tell you how open their software is. They have APIs. They have integrations. They have export pipelines and connector frameworks and partner ecosystems. All of this is real. But there is a kind of openness that these systems provide, and a kind of openness that the next generation of AI-powered workflows actually demands, and these two things are not the same.

Traditional PLM openness is essentially a controlled data transfer mechanism (aka data sync). Data moves from one system to another through carefully defined pathways (sometimes very painfully). The PLM system remains the center of gravity. It dictates the schema, enforces the workflow, and mediates access. You can get data out, but only in the ways the system allows, and only in forms the system has chosen to expose.

The problem is that data can be technically accessible and still be practically unusable.

Think about what actually lives in a PLM system. Yes, there are BOMs, drawings, part numbers, revision histories. But there is also something harder to capture: the reasoning behind the decisions. Why was this component chosen over an alternative? What was the context behind an engineering change order? What constraint drove a particular design direction? That knowledge is not in a field somewhere. It lives in the heads of engineers, in email threads, in meeting notes, in the institutional memory of people who have been doing this for twenty years. If you’re interested in this topic, check my earlier article about Context Graphs: PLM Beyond System Of Records.

A file tells you what exists. It rarely tells you why it exists.
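To make that gap concrete, here is a minimal sketch contrasting a record that captures only what exists with one that also captures why. All field names and the example part are hypothetical, invented for illustration, not taken from any real PLM schema.

```python
from dataclasses import dataclass, field

# A typical PLM record: it describes WHAT the part is.
@dataclass
class PartRecord:
    part_number: str
    revision: str
    description: str

# The reasoning behind a choice, normally buried in emails and meetings.
@dataclass
class Decision:
    question: str                 # e.g. "Why this connector over the alternative?"
    rationale: str                # the "why" an agent needs to reason about
    constraints: list[str] = field(default_factory=list)

# The same part with its decision context attached: it also says WHY.
@dataclass
class ContextualPartRecord(PartRecord):
    decisions: list[Decision] = field(default_factory=list)

part = ContextualPartRecord(
    part_number="PN-1042", revision="B",
    description="USB-C connector",
    decisions=[Decision(
        question="Why USB-C instead of micro-USB?",
        rationale="Regulatory direction plus better supplier lead time",
        constraints=["must survive 10k insertion cycles"],
    )],
)

# An agent querying this record can answer "why", not just "what".
print(part.decisions[0].rationale)
```

The point of the sketch is the shape, not the fields: a record without the second layer is the "file that tells you what exists"; the decision layer is what rarely survives export.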

This gap between data and understanding is not a new problem. But it becomes a critical problem the moment you try to use AI agents to work with product information, and that moment has arrived.

AI Agents Change the Definition of Openness

If you want to understand why AI agents challenge PLM architecture so fundamentally, you have to start with how agents actually work, and how that differs from how humans work.

When an experienced engineer navigates a PLM system, they bring enormous context with them. They know what conventions the team uses. They understand which fields matter for which workflows. They can interpret ambiguous data based on years of domain experience. They fill in gaps, make reasonable inferences, and reconcile contradictions through judgment. The system does not have to be complete because the human brings so much to the interaction.

AI agents cannot do this. Not in the way humans can. They require structured relationships, clear semantics, traceable history, and consistent context. They cannot rely on tribal knowledge that exists outside the system. They cannot interpolate meaning from incomplete records. When the information is ambiguous or disconnected, the agent either fails or, worse, generates confidently wrong output.

This fundamentally changes what “open” means in a PLM context.

Openness used to mean: can data be accessed? Today, it needs to mean: can data be understood? A system with APIs that expose data lacking meaningful relationships is not open for AI purposes. A system with robust export functionality that strips context in the process is not open for AI purposes. If an agent cannot reason about what the data means, the system is effectively closed, regardless of what the technical documentation says about connectivity.

This is the gap that is quietly widening in most PLM environments today.

From Blocking Bots to Enabling Agents

To appreciate how strange this moment actually is, it helps to remember what the previous two decades of enterprise software were about.

For a long time, automation in enterprise contexts was treated primarily as a risk. Systems were designed with rate limits, layered authentication, restricted API scopes, and active defenses against automated access. I’m sure you’re familiar with the term “ERP police”: the gatekeepers controlling every API call and operation. The logic was sensible: bots and automated API calls were something external, often malicious, that you defended against. Controlled, human-mediated access was the ideal. The system existed to serve deliberate human workflows, and anything that bypassed those workflows was at best a compliance headache and at worst a security vulnerability.

This entire logic has now inverted.

The “bot” we are designing systems to defend against is the same thing we are now trying to build. We call it an agent. We are investing heavily in it. We expect it to become a primary way of interacting with software. And suddenly, all the architecture designed to protect systems from automated access is creating friction for the exact capability we are trying to enable.

This is not a minor update to existing design philosophy. It is a reversal. PLM systems, more than most enterprise software categories, were built deeply around the idea of controlled, human-mediated access. The workflow is the product. The system owns the process. That design philosophy served the industry well for decades. But it is now in direct conflict with where AI-powered work is going.

The Bypass Effect: How AI Routes Around Closed Systems

Here is what I think will actually happen, and it is not the dramatic collapse that some people imagine.

Closed systems do not fail. They get bypassed.

Engineering organizations have been doing this for years, long before AI entered the picture. BOMs get exported to Excel because the PLM system is too cumbersome to query. Decisions get tracked in Slack and email because the workflow in the system does not match how work actually flows. Product knowledge migrates into shared drives, PDF archives, email threads, and the informal documentation practices of individual engineers. The PLM system remains present, but it stops being the place where understanding lives.

AI agents will accelerate this behavior dramatically. Rather than forcing themselves through rigid PLM interfaces, agents will do what makes sense: extract data directly from CAD files and technical documents, connect independently to ERP and procurement systems, reconstruct product structures outside the PLM environment, and build their own working model of what the product is and how it behaves. Over time, this external model will become more useful than the internal one, because it is actually connected to all the other systems the agent touches.
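Here is a minimal sketch of that bypass pattern: an agent merging per-part facts from independent sources into its own working model, outside the PLM environment. The source names, part number, and fields are all hypothetical, standing in for whatever a real agent would extract from CAD exports, ERP feeds, and procurement systems.

```python
# Hypothetical sources an agent might tap directly, bypassing the PLM UI:
# a CAD-file export, an ERP feed, and a procurement spreadsheet.
cad_export = {"PN-1042": {"description": "USB-C connector", "qty": 2}}
erp_feed = {"PN-1042": {"unit_cost": 0.84, "on_hand": 1500}}
procurement = {"PN-1042": {"supplier": "Acme", "lead_time_weeks": 6}}

def build_external_model(*sources: dict) -> dict:
    """Merge per-part facts from independent systems into the agent's
    own working model of the product, outside the PLM environment."""
    model: dict[str, dict] = {}
    for source in sources:
        for part, facts in source.items():
            model.setdefault(part, {}).update(facts)
    return model

model = build_external_model(cad_export, erp_feed, procurement)

# The merged view spans systems no single tool connects, which is
# exactly why agents (and engineers) gravitate toward it.
print(model["PN-1042"]["supplier"], model["PN-1042"]["qty"])
```

Nothing in this sketch needed the PLM system at all; that is the quiet mechanics of the bypass effect.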

The risk for PLM is not obsolescence in a theatrical sense. The risk is something quieter and harder to reverse: PLM becomes a system of record in the narrow, bureaucratic sense, a place where official versions are filed, but not a place where understanding lives or where meaningful work happens. The agents will work around it, not against it. And eventually, the question will become: what is this system actually for?

The Strategic Decision: Open or Be Bypassed

At some point, this stops being a conversation about software architecture and becomes a strategic question for every company that manages product data.

The question is not which PLM system to buy or which AI vendor to partner with. It is not even a question of which PLM system has the best chatbot. The question is whether your product data should participate in AI workflows, or resist them. And the answer to that question has consequences that will compound over years.

A closed approach has genuine advantages. It offers control, predictability, and well-defined governance. For highly regulated industries, that matters enormously. But the tradeoff is that AI agents operating in a closed environment will work around the system, reconstructing context from scratch rather than building on what the system already knows. Every agent interaction starts from a weaker foundation.

An open approach, where product data is structured for understanding rather than just storage, allows agents to participate directly in engineering and manufacturing workflows. They can reason about tradeoffs, surface relevant history, flag inconsistencies, and connect information across domains. The product knowledge compounds rather than fragmenting.

The organizations that figure this out early will find themselves in a position where AI genuinely amplifies engineering capacity. The organizations that do not will spend enormous energy managing the gap between what their systems contain and what their agents need.

Why Openness Alone Is Not Enough

I want to be careful here, because there is a version of this argument that leads to a wrong conclusion.

If the problem is that PLM systems are too closed, the obvious answer seems to be: open them up. Expose more APIs. Build more integrations. Make the data more accessible. Problem solved.

But this misdiagnoses what is actually wrong.

Open data without structure creates fragmentation, not intelligence. An API that returns BOM data without the context of what decisions shaped that BOM is technically open and practically useless for an agent trying to reason about the design. The agent now has more data to be confused by. The noise goes up without the signal improving.

Access is not the same as understanding. And understanding is what agents, and frankly humans, actually need.

This is where most PLM openness discussions fall short. They focus on mechanisms for exposing data. They do not focus on making that data meaningful outside the context of the system that created it.

Product Memory: Turning Openness into Intelligence

This is where I think the concept of Product Memory becomes genuinely important, not as marketing language, but as a design principle.

Product Memory is the idea that product knowledge should be organized so it can be understood over time, not just stored and retrieved. It captures not only the current state of a product but how that product evolved, what decisions were made and why, how components relate across engineering, manufacturing, procurement, and service perspectives, and how the product behaves in the real world. It is the difference between a snapshot and a narrative.

If you want to catch up on the discussions about Product Memory, check the following link – Why “Product Memory” Triggered Such a Strong Reaction (and What It Reveals About PLM’s Missing Layer).

This is exactly what AI agents need to be useful in a product development context. Agents do not just consume data passively. They reason about it. They ask whether a past decision creates a constraint on a current one. They look for patterns across product generations. They connect a field failure back to a design choice made three years ago. None of that reasoning is possible without the kind of connected, contextual knowledge that Product Memory represents.
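One way to picture this kind of reasoning is knowledge stored as a graph of typed links, so an agent can trace a field failure back to the design decision behind it. This is a sketch of the idea, not any particular product's implementation; the node names and link types are all hypothetical.

```python
# A minimal Product Memory sketch: typed links between a field failure,
# the part it was observed on, the ECO that introduced that part, and
# the decision that motivated the ECO.
links = [
    ("failure:F-881", "observed_on", "part:PN-1042"),
    ("part:PN-1042", "introduced_by", "eco:ECO-2023-17"),
    ("eco:ECO-2023-17", "motivated_by", "decision:D-55"),
]

def trace(start: str, links: list[tuple[str, str, str]]) -> list[str]:
    """Follow outgoing links from a node until the chain ends."""
    path, node = [start], start
    while True:
        nxt = next((dst for src, _, dst in links if src == node), None)
        if nxt is None:
            return path
        path.append(nxt)
        node = nxt

# The agent connects a current field failure to a design choice
# made years earlier, in a single traversal.
print(" -> ".join(trace("failure:F-881", links)))
```

Without the explicit links, each hop in that chain is a question only a veteran engineer can answer; with them, it is a query.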

Without this layer, opening PLM data to agents creates noise. With it, that same openness becomes genuine leverage, the kind that compounds over time as more decisions are captured, more connections are made, and the product knowledge base becomes increasingly rich and useful.

The Future of PLM: From Systems of Record to Product Memory Platforms

This brings me back to the fundamental challenge that PLM faces as an industry.

For decades, PLM positioned itself as the system of record (SOR). The place where product data is official, controlled, and governed. This framing served the industry well when the main challenge was ensuring consistency and preventing the chaos of scattered, uncontrolled product information.

But the center of gravity is shifting. The value of a PLM platform will increasingly be determined not by how reliably it stores data, but by how effectively that data can be used, by engineers, by downstream systems, and by AI agents operating across the entire product lifecycle.

This requires rethinking what PLM is for. Not just open data, but structured data where relationships are explicit. Not just revision control, but traceable reasoning about why revisions happened. Not just integration endpoints, but an architecture that makes product knowledge genuinely legible to the tools that need to work with it. The PLM system of the future is not a vault. It is a platform for product intelligence.

That is a significant transformation, and it will not happen automatically. It requires deliberate choices by both vendors and the organizations that use their products.

Conclusion: The OpenClaw Moment for PLM

OpenClaw is not the story. It is a signal of the change underway.

It points to a shift in what users, and now agents, expect from software. The expectation is no longer that you will navigate between systems and reconcile information manually. The expectation is that intelligent tools will be able to work across everything, accessing and reasoning about information wherever it lives, without constant human mediation.

PLM is now approaching its version of that moment. The question is not whether the industry will respond. It is whether it will respond quickly enough to remain at the center of where product knowledge lives, rather than becoming one of many data sources that agents query while doing their real work elsewhere.

The organizations and platforms that take this seriously now, investing in making product data not just accessible but genuinely understandable, will find themselves far better positioned for the next decade of engineering work. The ones that treat this as a distant concern will wake up to find that the agents have already rebuilt product understanding outside their walls, in spreadsheets and external pipelines and whatever comes after OpenClaw, leaving the PLM system as an expensive filing cabinet for data nobody queries directly anymore. That is not a future any of us in this industry should be comfortable with.

Just my thoughts… 

Best, Oleg 

Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights. 
