A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

Building PLM Agents: Why Everyone Is Announcing AI and Why Almost Everyone Is Missing the Point

Oleg
22 November, 2025 | 12 min read

If you follow the PLM market long enough, you begin to recognize patterns in how vendors respond to technological shifts. Cloud? They painted existing architectures blue and called them SaaS. Digital thread? They rebranded integrations. IoT? They wrapped sensors around traditional workflows. Today, the new banner is AI — and, unsurprisingly, every major PLM vendor now claims to have an “AI strategy” centered around copilots, assistants, or “virtual companions.”

On the surface, it looks like a generational shift. Dassault Systèmes speaks about a “Generative Economy” powered by virtual twins and AI-driven experiences. Siemens is embedding copilots into Teamcenter and NX to help users explore BOMs, extract requirements, and navigate documents. PTC is offering what might be the most explicit “agentic AI” framework across PLM, ALM, CAD, and service. Aras is riding the Microsoft wave with RAG-driven conversational access to Innovator SaaS.

But after looking closely at the actual vendor announcements — and, more importantly, diving deep into three of the most important AI-agent publications released this year by Google, Vercel, and Anthropic — I’ve come to a simple but uncomfortable conclusion:

What PLM vendors are doing with AI today focuses on how to help users with existing PLM products. But none of it will deliver the future of agentic PLM. That future requires an architecture they simply don’t have.

This article is an attempt to explain why. It’s a synthesis of what the PLM vendors announced, what the AI industry is learning, and what decades of PLM architectural decisions have made almost impossible — unless we rethink the foundation.

In upcoming Beyond PLM articles, I’ll go deeper into the implications for workflows — which I believe will become the true center of PLM. But first, let’s examine where we stand.

The PLM “Agent” Landscape: Four Vendors, Four Philosophies

Let’s look at the PLM vendor landscape as of late 2025 — not the marketing story, but the actual substance behind the announcements.

Dassault Systèmes is presenting one of the most visionary narratives around AI. Their recent media announcement talks about a future powered by “Global IP Lifecycle Management,” “Generative Experiences,” and “AI-powered virtual worlds.” The messaging is expansive, philosophical, and oriented toward a long-term vision where virtual twins and AI reshape how industries operate. Yet beneath this sweeping narrative are very concrete productized elements such as AURA, an AI assistant for 3DEXPERIENCE and SOLIDWORKS that is promised to generate 3D geometry from images and surface compliance requirements. Today, however, AURA delivers mostly context-aware help and information retrieval; image-to-sketch and text-to-CAD remain on the roadmap. Dassault’s approach is thus a mix of practical assistive capabilities embedded in its applications and an ambitious, almost utopian framing of AI as part of a new “Generative Economy.”

Siemens, in contrast, is taking one of the most grounded and implementation-focused approaches. Their Teamcenter AI Copilot is a practical productivity companion designed to operate inside the environments engineers use every day. Its foundation is the transformation of Teamcenter-managed files into intelligent knowledge stores. It focuses on document intelligence, BOM navigation, requirement extraction, and natural-language exploration of PLM content. Siemens emphasizes traceability, security, and deployment flexibility — GPT-4o via Azure OpenAI, Claude via AWS Bedrock, and on-premise Llama. Rather than presenting a transformative vision of agent ecosystems, Siemens is optimizing the day-to-day flow of engineering information through targeted copilots tightly integrated with Teamcenter and NX.

PTC is the only vendor that has avoided the ambiguity and openly embraced the word “agent.” Their “Advise → Assist → Automate” maturity ladder, described here, is explicitly about building autonomous systems capable of completing multi-step workflows across the digital thread. They are positioning Windchill AI, Codebeamer AI, ServiceMax AI, and even the emerging Arena AI Assistant as parts of a coherent agentic strategy. While their current tools, described in the article “How AI Agents Are Accelerating Digital Transformation in Industry” — like the Windchill Document Vault AI Agent, which lets users query information stored in documents and databases — are still rooted in RAG and workflow assistance, the strategic framing is unmistakably aimed at cross-system orchestration and lifecycle-wide automation.

Aras, meanwhile, is leaning into its long-standing identity as the flexible, low-code PLM platform. Their AI-Assisted Search and AI-Powered Intelligent Assistant for Innovator SaaS are built directly on Microsoft Azure OpenAI and Copilot Studio. The company focuses on providing powerful conversational access to structured and unstructured data, with the InnovatorEdge low-code API fabric forming the backbone for future extensions. It’s a pragmatic, developer-friendly approach where AI becomes another service in the Aras platform, but one that remains primarily advisory rather than autonomous.

Put all of this together and one common thread emerges: all four vendors are adding AI inside their products, but none of them is rethinking PLM architecture for an agent-native future. They are embedding assistants inside old systems rather than redesigning systems around the needs of agents.

To understand why this matters, we have to look outside PLM — at what the AI world has learned during the rise of agents in 2024–2025.

Lessons from Google, Vercel, and Anthropic: Agents Are Systems, Not Features

Three publications released this year — by Google, Vercel, and Anthropic — form the most important triangulation in the agent ecosystem.

Each source offers a different lens on the future of AI agents, but together they explain why PLM vendors — despite their announcements — are still missing the core architectural picture.

Google: The Real Product Is Orchestration

Google’s whitepaper describes agents as continuous loops: they think, act, observe, update memory, and repeat. But the most important part of their vision is not the agent loop itself; it is the orchestration layer that surrounds it. Google argues that the future belongs to systems capable of routing context, managing tool access, enforcing permissions, coordinating multiple agents, setting budgets, logging actions, and escalating decisions to humans.

This orchestration is not an optional feature. It is the foundation. It is what makes agents safe, scalable, and useful. Without orchestration, agents become unpredictable. Without orchestration, they cannot collaborate. Without orchestration, they cannot integrate with enterprise systems.
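To make the distinction concrete, here is a minimal sketch of the think-act-observe loop wrapped in an orchestration layer that enforces tool permissions, a step budget, audit logging, and human escalation. All names here (Orchestrator, think, act, done) are my own illustration, not Google’s API:

```python
# Hypothetical sketch: an orchestration layer wrapping an agent loop.
# The orchestrator, not the agent, owns permissions, budgets, and logs.
from dataclasses import dataclass, field

@dataclass
class Orchestrator:
    allowed_tools: set
    budget: int                 # max reasoning/tool steps before escalation
    audit_log: list = field(default_factory=list)

    def execute(self, agent, goal):
        memory = []
        for _ in range(self.budget):
            action = agent.think(goal, memory)           # reason over context
            if action["tool"] not in self.allowed_tools:
                self.audit_log.append(("denied", action))
                return self.escalate(goal, memory)       # human in the loop
            observation = agent.act(action)              # mediated tool call
            self.audit_log.append((action, observation)) # every step traced
            memory.append(observation)                   # update memory
            if agent.done(goal, memory):
                return memory
        return self.escalate(goal, memory)               # budget exhausted

    def escalate(self, goal, memory):
        return {"status": "needs_human", "context": memory}
```

The point of the sketch is that the loop itself is trivial; routing, permissioning, logging, and escalation — the orchestration — is where the engineering lives.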

Anyone who spent years working on PLM architecture can immediately recognize the parallel: PLM tried to do many of these things — workflow routing, permissions, change management — but never built an actual orchestration environment capable of managing reasoning, memory, multi-agent execution, or tool-level permissions.

Vercel: Start With Toil, Not Grand Visions

Vercel’s perspective could not be more different. Instead of thinking about a future agent economy, they focus on where agents can deliver value today: automating repetitive, verifiable, deterministic tasks that currently drain time and resources. Their examples are small but powerful: triaging tickets, extracting data from documents, performing routine checks.

Translating this into PLM terms reveals how much value lies waiting to be unlocked. Most engineering and manufacturing organizations are drowning in repetitive tasks: mapping metadata, validating revisions, preparing ECOs, reconciling BOM discrepancies, checking compliance markers, assembling supplier data, and generating cost rollups. These tasks are perfect candidates for automation, yet PLM systems rarely treat them as first-class capabilities. Vercel’s message is clear: if you want trust and adoption, start by eliminating toil.
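As an illustration of what such a “toil” task could look like, here is a minimal, deterministic BOM reconciliation check — exactly the kind of small, verifiable function an agent could call and a reviewer could trust. The data shape (part numbers mapped to quantities) is my assumption for the sketch:

```python
# Hypothetical toil-automation sketch: reconcile an engineering BOM (EBOM)
# against a manufacturing BOM (MBOM). Deterministic and easy to verify.
def reconcile_boms(ebom: dict, mbom: dict) -> dict:
    """Compare part quantities between two BOMs keyed by part number."""
    missing_in_mbom = sorted(set(ebom) - set(mbom))   # parts MBOM lacks
    extra_in_mbom = sorted(set(mbom) - set(ebom))     # parts EBOM lacks
    qty_mismatches = {
        pn: {"ebom": ebom[pn], "mbom": mbom[pn]}
        for pn in set(ebom) & set(mbom)
        if ebom[pn] != mbom[pn]                       # shared part, wrong qty
    }
    return {
        "missing_in_mbom": missing_in_mbom,
        "extra_in_mbom": extra_in_mbom,
        "qty_mismatches": qty_mismatches,
        "clean": not (missing_in_mbom or extra_in_mbom or qty_mismatches),
    }
```

Because the output is a structured report rather than free text, a human (or a second agent) can audit every discrepancy — which is precisely why this class of task builds trust.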

Anthropic: The Model Is Never Enough — Security Lives in the Orchestration Layer

Anthropic’s investigation into the Claude code hack delivers the most sobering message of all. The model cannot be trusted to enforce boundaries, permissions, or execution limits. Even well-behaved models can be manipulated in clever ways. Therefore, security must come from the orchestration layer. Agents must have identities. They must have explicit permissions, budgets, limited tool access, and full auditability. Every action must be logged and traceable.
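A sketch of what “security lives in the orchestration layer” could mean in practice: a deny-by-default tool gateway that checks an agent’s identity, permissions, and remaining budget on every call — regardless of what the model “intends” — and logs every attempt. All names are illustrative, not Anthropic’s design:

```python
# Hypothetical sketch: enforcement outside the model. The gateway mediates
# every tool call; the model never touches tools directly.
import time

class ToolGateway:
    def __init__(self):
        self.permissions = {}   # agent_id -> set of allowed tool names
        self.budgets = {}       # agent_id -> remaining call budget
        self.audit = []         # trail of every attempt, allowed or not

    def register(self, agent_id, tools, budget):
        self.permissions[agent_id] = set(tools)
        self.budgets[agent_id] = budget

    def call(self, agent_id, tool, fn, *args):
        entry = {"ts": time.time(), "agent": agent_id, "tool": tool}
        if tool not in self.permissions.get(agent_id, set()):
            entry["result"] = "denied"            # deny by default
            self.audit.append(entry)
            raise PermissionError(f"{agent_id} may not use {tool}")
        if self.budgets[agent_id] <= 0:
            entry["result"] = "budget_exhausted"  # hard execution limit
            self.audit.append(entry)
            raise RuntimeError(f"{agent_id} exceeded its budget")
        self.budgets[agent_id] -= 1
        entry["result"] = "ok"
        self.audit.append(entry)
        return fn(*args)                          # mediated execution
```

Note that a manipulated model changes nothing here: the permission check and budget are enforced by code the model cannot rewrite, and the audit trail records the attempt either way.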

This is the biggest difference between “AI as a feature inside PLM” and “PLM as an agentic system.” The former is fundamentally insecure. The latter is architected for safety.

And here lies the point PLM vendors have not yet internalized: embedding AI inside a PLM system does not make the PLM architecture ready for agents. It only adds a new UI surface.

The Architectural Fork: Where Should PLM Agents Live?

Once you understand the Google–Vercel–Anthropic triangle, the central architectural question becomes obvious (at least to me): Where should agents actually live in a PLM environment?

All PLM vendors have adopted the simplest answer. Agents live inside the PLM system. They are copilots next to data structures, assistants inside CAD tools, or chat interfaces layered on top of existing schemas and documents. It’s the path of least resistance because it fits the existing product model.

But this path is also potentially the largest architectural mistake the industry could make, and it boils down to three fundamental problems.

First, PLM databases architected 25+ years ago were never designed for agent reasoning. They were built for structured records, relational consistency, transactions, and long-term data retention. They were not built for dynamic context windows, reasoning traces, multi-agent memory, or task decomposition. Trying to run an agent ecosystem inside a PLM database is like trying to simulate a swarm of autonomous drones inside an Excel spreadsheet.

Second, engineering and manufacturing workflows do not live inside PLM systems. They touch CAD, PDM, ERP, MES, simulation tools, procurement platforms, supplier portals, and field service systems. An agent that resides inside a PLM system inherits the same “single source of truth” problem PLM has been battling for the last 20+ years: it has no native way to move across these boundaries. It becomes trapped in the very silo the PLM vendor claims it can coordinate.

Third, embedding agents inside PLM reinforces the outdated idea that PLM is the center of the digital thread (opposite to some novel ideas of ‘single source of change’). This is the same conceptual mistake the industry has made for twenty years: assuming that the enterprise revolves around a single structured database and data model. But engineering reality is distributed. Workflows cross systems, teams, and companies. Agents must mirror that reality — they must operate above the systems, not inside them.

The Core Mistake: Grounding Agents in PLM Databases

The urge to embed AI directly in PLM systems is understandable. PLM vendors want to show progress quickly. They have data. They have users. They have interfaces. And they want to capitalize on this momentum. But grounding agents inside PLM databases is conceptually flawed.

The first reason is simple: no single PLM system contains enough context to make lifecycle decisions. Engineering decisions depend on design metadata, cost structures, supplier constraints, manufacturing capabilities, regulatory requirements, and service feedback. This data lives across many systems, not inside any one of them. A PLM-grounded agent can never be contextually aware enough to reason well.

The second reason is that agents need a knowledge graph, not a relational database. Engineering information is deeply relational and constantly evolving — assemblies link to configurations, which link to requirements, which connect to simulations and manufacturing processes. This network of relationships is precisely the structure agents need for reasoning. But traditional PLM schemas are rigid, hierarchical, and optimized for transactions, not inference.

The third reason is that the digital thread is inherently distributed. No vendor owns it. No vendor can fully model it. And therefore no vendor can reasonably argue that their PLM schema is the canonical grounding for agents.

Finally, lifecycle workflows are cross-system by definition. A BOM change might require cost updates in ERP, capability checks in MES, supplier validation in procurement tools, and testing impact in ALM. Agents must be able to move across these environments, orchestrate distributed tasks, and interact with tools that live outside PLM.

Embedding agents inside PLM traps them before they even start.

What PLM Actually Needs: Product Memory and Multi-Agent Orchestration

If you take the lessons from Google, Vercel, and Anthropic seriously, the architecture needed for agentic PLM becomes clear. PLM needs a product memory layer that spans systems, and an orchestration layer that governs agents, tools, workflows, and safety.

A product memory graph is the first requirement. It must reflect the full lifecycle: items, BOM variants, requirement hierarchies, simulation results, manufacturing processes, alternatives, cost histories, supplier parts, quality loops, field service events — and the relationships among all of them. This memory cannot be limited to a single PLM system, because it must reflect the full context of engineering decisions. It has to be system-agnostic, vendor-agnostic, and lifecycle-spanning.
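A toy sketch of what such a product memory graph could look like: lifecycle entities as nodes, typed relationships as edges, and a traversal that gathers the cross-system context an agent would reason over. All entity names and relationship types are invented for illustration:

```python
# Hypothetical product memory graph spanning PLM, ERP, and supplier data.
# Nodes are lifecycle entities; edges carry relationship types.
from collections import defaultdict, deque

class ProductMemory:
    def __init__(self):
        self.edges = defaultdict(list)   # node -> [(relation, node)]

    def link(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def impact_of(self, node):
        """Everything reachable from a node: the context for a change decision."""
        seen, queue = set(), deque([node])
        while queue:
            current = queue.popleft()
            for _, neighbor in self.edges[current]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return seen

# Relationships crossing system boundaries — PLM, ERP, supplier portal:
pm = ProductMemory()
pm.link("item:bracket-A", "used_in", "bom:assembly-7")
pm.link("bom:assembly-7", "satisfies", "req:load-spec-3")
pm.link("bom:assembly-7", "costed_in", "erp:rollup-2025Q4")
pm.link("item:bracket-A", "supplied_by", "supplier:acme-part-99")
```

The value is in the traversal: asking “what does changing bracket-A touch?” returns requirements, cost rollups, and supplier parts from different systems — context no single PLM schema holds.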

The second requirement is a multi-agent orchestration environment — a control plane capable of coordinating agent behavior, enforcing permissions, mediating tool access, tracking reasoning steps, managing budgets, logging actions, and resolving conflicts. It must serve as the operational backbone for agent collaboration and workflow execution. Without this layer, agents remain isolated helpers. With it, they become capable of executing lifecycle workflows safely and reliably.

This architecture is very different from embedding copilots inside applications. It is also the architecture none of the PLM vendors currently have. Their AI assistants are valuable additions, but they do not represent the foundation needed for agentic PLM.

What is my conclusion?

The future of PLM agents is not inside PLM software systems.

The last two years of AI announcements in PLM mark an important transition. Vendors are recognizing that engineers need help navigating complexity, reducing repetitive work, and making faster decisions. Copilots and assistants are useful tools, and they will continue to improve PLM user experiences.

But copilots are not the future of PLM AI. The future belongs to systems of agents orchestrating workflows across the digital thread — systems capable of reasoning over a shared product memory, enforcing safety through orchestration, and collaborating with one another across engineering, manufacturing, and supply-chain ecosystems. Such systems cannot live inside PLM databases. They must live above them.

This is not AI added to PLM. It is PLM rebuilt on top of AI-driven workflow execution. And if the industry does not shift in this direction, the most transformative opportunities of agentic technology will be realized outside the PLM platforms that claim to manage the lifecycle.

In the next articles, I’ll explore what this workflow-centric world actually looks like — and how multi-agent coordination will reshape digital thread execution in ways that traditional PLM never managed.

Stay tuned. The future of PLM will not be a copilot. It will be a system of agents that finally makes lifecycle intelligence real.

Best, Oleg

Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools including PDM, PLM, and ERP capabilities. Interested in the OpenBOM AI Beta? Reach out to me to discuss the future of Agentic Engineering Workflows.

With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinion can be unintentionally biased.
