How to evaluate your current PDM/PLM setup against the structural requirements of agentic AI
A few weeks ago I wrote about why PLM must be re-architected for AI agents — arguing that the fundamental problem is not model capability but operational architecture. The intelligence in a production AI system lives in session persistence, permission pipelines, context management, and audit history. Not in the LLM call itself.
Since then, two things happened outside the PLM world that make the point sharper and more urgent for companies rethinking their PLM future. Both deserve attention from anyone running engineering and manufacturing software today.
My first data point came from issue trackers. Karri Saarinen, the CEO of Linear, declared issue tracking dead in March. His argument was reasonable: when AI agents can interpret context directly, the human translation layer becomes friction. Then OpenAI open-sourced Symphony, and Linear became the literal control plane for the most ambitious autonomous coding system ever shipped. Some internal teams reportedly saw a 500% increase in landed pull requests. The thing Saarinen had eulogized was now the substrate that made all of it work.
He was right about the user experience and wrong about the infrastructure. The state machine survived. The assignee field survived. The audit history survived. They are now, quietly, among the most strategic infrastructure assets in enterprise software.
The second thing happened at Salesforce’s TDX developer conference. The company announced Headless 360 — a move that positioned Salesforce as infrastructure under the agent economy rather than a product competing inside it. Every capability across the entire platform, every object, every workflow, every piece of business logic, is now exposed as an API, an MCP tool, or a CLI command. Parker Harris, the Salesforce co-founder, asked the question out loud: “Why should you ever log into Salesforce again?” The browser UI is now optional. Every agent on the market can reach into Salesforce with full live data access and inherited enterprise permissions.
What connects these two events is a single insight: the tools that survive the agentic transition are not the ones that added chat interfaces or rebranded their search as AI-powered. They are the ones with the right structural properties underneath — properties that were built for human coordination but turn out to be exactly what agents need.
The question for PLM architects and implementers is whether the PLM systems you built have those properties. Most of them do not.
What Enterprise Software Gets Right That PLM Gets Wrong: The Agent Infrastructure Test
Nate Jones’ analysis of issue trackers makes an observation that cuts directly into PLM. The five structural properties that make a tool agent-ready were not designed for AI. They were designed to solve human coordination problems: limited memory, time-zone friction, handoff ambiguity, accountability gaps. Bugzilla encoded them in 1998 because a few hundred Mozilla developers working asynchronously through dial-up needed a way to track defects without losing context.
Those properties are: persistent state outside any single person’s memory, a state machine with well-defined transitions, unambiguous ownership, defined verbs with clear semantics, and queryable audit history.
What makes the issue tracker story interesting is not that those properties are sophisticated. They are not. What makes it interesting is that most enterprise software does not have all five. Email has history but no state machine and no real ownership field. Chat has proximity to where humans congregate but its action verbs are conversational rather than structural — there is no “assign” or “resolve” in a Slack thread, only “reply.” Documentation has versioning but weak verbs and fuzzy ownership.
Issue trackers have all five, which is why Symphony could route autonomous agents through Linear tickets without reinventing the coordination substrate. The tracker already was the coordination substrate.
Now apply the same diagnostic to PDM and PLM.
Five Questions to Evaluate Your PDM/PLM System for AI Agent Readiness
I adapted these five questions to engineering and the PDM/PLM environment, though the stakes are different. A wrong recommendation from an agent operating on ambiguous CRM data might cost a sales opportunity. A wrong recommendation from an agent operating on ambiguous BOM data might cost a product recall, or even people’s lives, as the Boeing 737 MAX and GM ignition switch failures showed.
Here are the five questions. For each one, I will describe what a positive answer looks like in a PDM/PLM context, and what the gap looks like if the answer is negative.
Does it have records or just content?
A system with records has structured, addressable units of product knowledge. A part is a record. A BOM is a record. A change order is a record. Each has a unique identifier, a schema, a lifecycle state, and a set of relationships that the system maintains and enforces. A system with content has files. Documents. Attachments. Things stored in folders.
Most traditional PDM systems nominally have records, but the real test is whether the relationships between records are maintained by the system or inferred by humans. If your BOM exists as a spreadsheet exported from the CAD system and kept in SharePoint for contractor access, you have content masquerading as a record. If the engineering change you manage in JIRA references a document stored in a separate CAD file vault with no enforced relationship to the affected parts, the system has rich content, but not a record. An agent operating on content has to reconstruct the record structure from context every time. That is fragile. Every inference is an opportunity for error.
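To make the distinction concrete, here is a minimal sketch of what "a record" means, as opposed to a file. The `PartRecord` and `Vault` names, fields, and states are hypothetical, not taken from any real PDM product; the point is only that the system, not a human, enforces the relationships.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartRecord:
    """A record: unique id, schema, lifecycle state, maintained relationships."""
    part_number: str
    revision: str
    state: str        # e.g. "Draft", "Released"
    used_in: tuple = ()   # BOM links maintained by the system, not by a human

class Vault:
    """Toy vault that enforces referential integrity between records."""
    def __init__(self):
        self.parts = {}

    def add(self, part: PartRecord):
        # Reject a record whose relationships point at unknown parts.
        for parent in part.used_in:
            if parent not in self.parts:
                raise ValueError(f"unknown parent assembly: {parent}")
        self.parts[part.part_number] = part

vault = Vault()
vault.add(PartRecord("ASM-100", "A", "Released"))
vault.add(PartRecord("PRT-200", "A", "Draft", used_in=("ASM-100",)))
```

A BOM spreadsheet in SharePoint offers none of these guarantees: the link from part to assembly lives only in a human's head, which is exactly what an agent has to reconstruct from context every time.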
Does it have a state machine or just labels?
Every PDM/PLM system has lifecycle states. The question is whether those states are enforced transitions in a graph or decorative labels attached to objects. In a real state machine, a part in Released state cannot move directly to Obsolete without passing through the defined intermediate states and meeting the defined transition conditions. The system enforces it. In a label system, someone changes the field and the system accepts it.
For agents, this distinction is critical. An agent needs to know what actions are legal given the current state of an object. If the state is just a label, the agent has to infer the rules from context, documentation, and whatever the system administrator configured in a workflow engine three years ago and may or may not have kept current. If the state is a position in an enforced graph, the agent can query the legal next transitions and act accordingly. This is not a subtle difference. It is the difference between a system that can participate in machine-coordinated work and one that requires a human to interpret every step.
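The difference can be sketched in a few lines. The state names and transition graph below are hypothetical (every system defines its own), but the shape is the point: an agent can query the legal next moves instead of inferring rules from documentation and stale workflow configuration.

```python
# Hypothetical lifecycle graph; real systems define their own states.
TRANSITIONS = {
    "Draft":      {"InReview"},
    "InReview":   {"Draft", "Released"},
    "Released":   {"Superseded"},
    "Superseded": {"Obsolete"},
    "Obsolete":   set(),
}

def legal_next_states(state: str) -> set:
    """What an agent queries instead of guessing the rules."""
    return TRANSITIONS[state]

def transition(state: str, target: str) -> str:
    # An enforced state machine rejects illegal jumps instead of
    # accepting whatever value someone types into a field.
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

transition("Released", "Superseded")   # legal
# transition("Released", "Obsolete")   # would raise: no direct jump
```

In a label system, the second call would silently succeed, and the agent would have no way to know it just did something the process forbids.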
Is ownership a field or an implication?
In a well-structured PDM system, every object has a clear owner at every stage of its lifecycle. The change order has a responsible engineer. The released part has a configuration manager. The open action item has an assignee. Ownership is a first-class property of the data model, not something you infer from who last checked out the file or who is on the email thread where you sent a BOM spreadsheet to a contractor for approvals.
Most legacy PDM systems have some notion of ownership, but it is inconsistent, often implicit, and rarely maintained in real time. When an engineer leaves the team, their objects do not automatically reassign. When a change moves through approval, the responsible party is tracked in a workflow object, not as a persistent field on the change object itself. An agent asked “who owns this?” has to go digging through workflow history, email records, and organizational charts to find an answer that should be a single field lookup.
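When ownership is a first-class field, operations like offboarding become a one-pass update instead of an archaeology project. The sketch below uses invented names (`ChangeOrder`, `offboard`) purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class ChangeOrder:
    eco_id: str
    state: str
    owner: str   # a first-class field, not inferred from checkout history

def offboard(objects, leaving_user: str, fallback_owner: str):
    """Reassign everything a departing engineer owns, in one pass."""
    for obj in objects:
        if obj.owner == leaving_user:
            obj.owner = fallback_owner

ecos = [ChangeOrder("ECO-001", "InReview", "alice"),
        ChangeOrder("ECO-002", "Draft", "bob")]
offboard(ecos, "alice", "cfg-mgmt")
# "Who owns ECO-001?" is now a field lookup, not a search through
# workflow history and org charts.
```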
Are the verbs structural or conversational?
This is the question that most clearly separates agent-ready systems from systems that merely have APIs. Structural verbs have defined preconditions, a clear effect on system state, and are auditable. “Submit for review” is structural if the system enforces that the change object must be in Draft state before submission, that all required fields are populated, that the designated reviewers receive a notification, and that the state transitions to InReview as a result. Conversational verbs are any operation where the effect on system state is unclear, unenforced, or depends on human interpretation.
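The “submit for review” verb described above can be sketched as code. The function and field names are hypothetical, but they show what makes a verb structural: checked preconditions, a defined state effect, notifications, and an audit entry, all in one operation.

```python
class PreconditionError(Exception):
    pass

def submit_for_review(change: dict, notify) -> dict:
    """A structural verb: preconditions, state effect, audit trail."""
    if change["state"] != "Draft":
        raise PreconditionError("only Draft changes can be submitted")
    missing = [f for f in ("title", "affected_parts", "reviewers")
               if not change.get(f)]
    if missing:
        raise PreconditionError(f"required fields missing: {missing}")
    change["state"] = "InReview"                     # enforced transition
    for reviewer in change["reviewers"]:
        notify(reviewer, change["id"])               # defined side effect
    change.setdefault("audit", []).append(("submit_for_review", "InReview"))
    return change

notified = []
change = {"id": "ECO-7", "state": "Draft", "title": "Switch material",
          "affected_parts": ["PRT-200"], "reviewers": ["alice"]}
submit_for_review(change, lambda user, eco: notified.append(user))
```

A conversational verb is the same request typed into a Slack thread: the state may or may not change, the reviewers may or may not find out, and nothing records what happened.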
Most PLM workflow engines have structural verbs for the main approval spine — the formal submit-approve-release path — and conversational verbs everywhere else. PDM/PLM systems are still, fundamentally, data and file editors. Users perform basic CRUD operations to record and edit documents.
What is missing in most PDM and PLM systems is a structured verb system defining all operations and the conditions required to perform specific actions. In pure PDM scope, the most formal operations are check-in/check-out and release. The downstream processes, such as BOM transformations, comparisons, and change review and approval, are largely unstructured.
This is exactly where agents become interesting. Upstream of formal approval, in the exploratory, coordination, and decision-making space, most PLM systems simply run out of structural verbs. That space is full of ambiguity, context, negotiation, and judgment.
And this is where conversational agents can help: not by replacing formal workflows, but by operating in the messy area before the workflow becomes formal.
Is the history queryable or just visible?
A visible history is a history of revisions you can open in a user interface and read/view. A queryable history is one where you can ask: what changed on this part between revision A and revision B, who made each change, what was the state of the part at each point, and what other objects were affected by changes in the same time window?
For human engineers, a visible history is often sufficient. They can read it, apply judgment, and reach a conclusion. For example, an engineer can look at the previous design and draw a conclusion about how the current work differs. For agents, a history that cannot be queried systematically is nearly useless at scale. The agent has to process audit logs as narrative text rather than as structured data. That works for simple cases and fails for complex ones. And in PLM, the cases that matter are almost always complex: understanding why a design evolved the way it did, tracing the provenance of a specification, reconstructing the context of a decision made two years ago.
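Queryable history means the audit trail is data an agent can filter and join, not prose it has to read. Here is a minimal sketch with an invented log schema; the part numbers, fields, and timestamps are illustrative only.

```python
# Hypothetical structured audit log: each entry is data, not narrative.
log = [
    {"part": "PRT-200", "rev": "A", "field": "material",
     "old": "Al 6061", "new": "Al 7075", "who": "alice", "ts": 1},
    {"part": "PRT-200", "rev": "B", "field": "tolerance",
     "old": "0.1", "new": "0.05", "who": "bob", "ts": 2},
    {"part": "ASM-100", "rev": "C", "field": "qty",
     "old": "2", "new": "4", "who": "bob", "ts": 2},
]

def diff(part: str, since_ts: int, until_ts: int) -> list:
    """What changed on this part in a time window, and who changed it."""
    return [e for e in log
            if e["part"] == part and since_ts <= e["ts"] <= until_ts]

def co_changed(part: str, since_ts: int, until_ts: int) -> set:
    """Other objects touched at the same moments: context for an agent."""
    moments = {e["ts"] for e in diff(part, since_ts, until_ts)}
    return {e["part"] for e in log
            if e["ts"] in moments and e["part"] != part}
```

With a log shaped like this, “what else moved when this tolerance tightened?” is one query. With a visible-only history, it is an afternoon of reading.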
Why Most PDM and PLM Systems Fail the Agent Readiness Diagnostic
Honest answers to these five questions put most commercial PDM and PLM implementations in a difficult position.
The record structure is often partially present and inconsistently maintained. Parts and BOMs have records. Documents and specifications often do not, or their records exist without enforced relationships to the parts and changes they affect. The system has the concept of flexible records, but the data practice around it is loose.
The state machine is typically present on the formal check-in/check-out and approval path, but absent everywhere else. Released revision management is well enforced in most PDM systems. Change management is partially enforced. The exploratory and coordination work that happens before a change is formally submitted, the work where agents could add the most value, happens outside the state machine entirely. Look at a typical ECO definition: it mostly defines who needs to approve the change and in what order.
Ownership is inconsistent. Formal assignments in workflow steps are tracked. Informal ownership of objects in active development is often implicit or absent.
Structural verbs exist for the main approval workflow and are sparse everywhere else. Most PDM APIs are CRUD interfaces that mirror the human UI rather than expressive action models that encode the semantics of engineering work.
History is visible and queryable to varying degrees depending on the system, but the queryability is usually optimized for compliance reporting rather than for agent reasoning. You can prove that a part was released on a given date. You cannot easily ask what the full context of that release decision was, or trace how a design requirement propagated through to a manufacturing BOM.
What Salesforce Headless 360 Reveals About the PLM Architecture and Openness Gap
The Salesforce Headless 360 announcement is not primarily a CRM story. It is a demonstration of what enterprise infrastructure looks like when a vendor decides that agents are first-class consumers of their platform.
The move is structural. Salesforce did not add a chat interface to their existing UI and call it intelligent queries. They exposed every object, every workflow, every piece of business logic as an API, an MCP tool, and a CLI command. They separated the Experience Layer — how output is rendered — from the underlying work logic, so that the same agent action can surface in Slack, Teams, mobile, or a third-party AI client from the same underlying behavior. They open-sourced Agent Script, the language for defining agent behavior. They built an explicit permission inheritance model so agents operate with the same access controls as the human users they represent.
This is the Headless 360 pattern: treat the data model and action model as the primary interface, and let the UI be optional. The browser is one rendering surface among many, not the point of entry that everything else is built around.
PLM vendors are nowhere near this. The primary interface assumption in most commercial PLM platforms is still the human screen. APIs exist, but they are built to mirror the human workflow and focus largely on synchronizing data with other systems rather than exposing product knowledge and actions in a form agents can consume natively. MCP support is coming slowly, and in most cases it focuses on data queries. The permission model is built around human role-based access, not machine action-level governance that defines what can happen at every possible step.
The gap is architectural. Salesforce decided that agents are primary consumers and restructured accordingly. PLM vendors need to decide whether they will make their data and actions available to agents or keep running a human-primary architecture accessed by engineers managing information.
Product Memory: The PLM Data Gap That Breaks AI Agent Workflows
There is one dimension the five-question diagnostic does not fully capture, and it is the one I have been writing about for the past year: Product Memory.
Issue trackers, CRMs, and service desks track work state — what is happening, who owns it, what the current status is. That is enough for agents coordinating execution work. An agent closing tickets, updating opportunities, or routing support requests can operate entirely on current state. The work either happened or it did not.
Engineering and manufacturing work is different. The decisions that produced a product design carry context that cannot be recovered from the outcome record alone. Why was this material chosen over the alternative? What supplier constraints shaped the specification? What assumption about downstream manufacturing processes is baked into this tolerance? What was rejected at the design review, and why?
Most PLM systems capture none of this. They capture the approved state. The reasoning that produced it disappears into email threads, meeting notes, and the memories of engineers who may not be on the next project. For a human engineer coming back to a design after two years, that loss is frustrating. For an AI agent trying to evaluate whether a proposed change is safe, it is disabling.
A state machine without memory is a tool for recording decisions. Product Memory is what makes those decisions navigable for agents trying to build on them. The five structural properties for agent substrate are necessary conditions. Product Memory is what makes them sufficient for the kind of work PLM actually manages. Read my article about Product Memory as a new Enterprise Strategy.

How to Start Evaluating Your PLM Setup for AI Agent Readiness Today
If you are running PDM or PLM today and trying to understand where your real exposure is, the diagnostic gives you a starting point. Score your primary system against the five questions. Be honest about the gap between what the system nominally supports and what your actual data practice looks like. A system that has a state machine but where engineers regularly bypass it through informal status changes is a label system in practice, regardless of what the vendor demo shows.
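One way to make this self-assessment honest is to write it down as a checklist scored against actual practice rather than vendor claims. The question keys and scoring below are my own framing, not a standard instrument.

```python
# Hypothetical self-assessment: score your real data practice,
# not what the vendor demo shows.
QUESTIONS = {
    "records":       "Are relationships between records maintained by the system?",
    "state_machine": "Are lifecycle transitions enforced, not just labeled?",
    "ownership":     "Is the owner a field on every object, at every stage?",
    "verbs":         "Do operations have preconditions and defined effects?",
    "history":       "Can you query changes, not just read them in a UI?",
}

def score(answers: dict) -> int:
    """answers maps a question key to True/False for your actual practice."""
    return sum(1 for key in QUESTIONS if answers.get(key))

# A system whose workflow engineers routinely bypass scores as a
# label system in practice, whatever the data sheet says:
my_score = score({"records": True, "state_machine": False,
                  "ownership": False, "verbs": True, "history": False})
```

Anything below five means there are coordination steps an agent cannot perform without a human interpreting the rules for it.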
Then ask the Product Memory question directly: if a new engineer joined your team tomorrow and needed to understand why a key design decision was made eighteen months ago, where would they go to find that context? If the answer is “they would ask someone who was there,” your product knowledge is living in human memory, not in your PLM system. That is the gap agents cannot work around.
The tools that are surviving the agentic transition are the ones that accumulated thirty years of structural discipline around the right properties. Issue trackers accumulated it. Salesforce accumulated it in CRM. The boring infrastructure that nobody loved turned out to be the infrastructure nobody could replace.
PLM accumulated discipline too, but around the wrong things: files, revisions, and prescribed approval sequences. Some systems captured CRUD history for audit purposes. The structure that agents need is around persistent work state, defined verbs, ownership as a first-class property, and queryable provenance. Some of that exists in current PLM systems. Most of it does not.
The vendors who recognize this now will not be building PLM with AI added on top. They will be building a different kind of platform: one where the data model is the primary interface, the UI is optional, and agents are first-class participants in the work rather than guests operating at the edges of a system built for someone else.
That transition is the one worth watching in the coming years. In the past, we talked about “the next 5 years of PLM”. The pace is different now. New agents will work around the existing PLM stack if it does not transform to support them.
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights.
