Last week my attention was caught by the news that Anthropic is donating the Model Context Protocol (MCP) to an open foundation, and it made me pause, not because of the AI angle, but because of a much older problem it quietly touches: integration patterns.
Coincidentally, Michael (Fino) Finocchiaro published a futuristic PLM-MCP graphic on LinkedIn. Check it here.
That picture made me pause a second time. You can see that the “entire thing” is surrounded by an MCP layer. What does a protocol designed for AI tools have to do with PLM integrations?
At first glance, nothing. But the more I thought about it, the more familiar the pattern felt. I’ve seen this movie before – just with different technologies and different promises.
In my article today, I decided to explore the potential future role of MCP and its ability to change PLM integration capabilities.
The Same PLM Integration Story, Different Chapter
Over the years, I’ve watched PLM interoperability repeat the same cycle. New APIs appear with fanfare. Integration frameworks are announced with bold promises about “connected digital threads.” And yet, when you actually talk to engineering teams trying to move data between CAD, PLM, ERP, and manufacturing systems, the reality looks remarkably similar to what it did a decade ago: fragile point-to-point connections, custom scripts held together with duct tape and prayers, and an exhausting amount of manual work to keep everything synchronized.
A typical example looks deceptively simple on paper. A CAD assembly is released, an EBOM is pushed into PLM, items are synchronized into ERP, and manufacturing consumes an MBOM downstream. The integration “works.” Data flows. No errors are thrown. But six months later, the data model evolves, a new PLM software version is deployed, new BOM management rules are introduced in ERP, and suddenly the same integration produces a subtly wrong answer. Nothing breaks loudly, but production gets the wrong results. I’ve seen this pattern repeat more times than I care to count. The integration team gets a trigger, tests are run, and a “patching team” arrives to fix the “PLM-ERP integration plumbing”.
That’s why the MCP news caught my attention. Not because it magically solves interoperability, but because it reframes the problem in a way that feels strangely familiar to anyone who has lived through multiple PLM integration cycles.
Anthropic’s decision to donate MCP to an open, Linux-Foundation-style governance model is an important signal. It says: this isn’t a feature we’re launching; this is infrastructure we’re building. And when something positions itself as infrastructure rather than “yet another integration pipe”, it’s worth asking what it might mean beyond its original use case for PLM integrations.
This article isn’t about jumping on an AI trend or predicting the future of autonomous engineering agents. It’s about asking a more uncomfortable question that’s been nagging at me for years: what if PLM interoperability has never really been an API problem, but a conversation problem?
What MCP Actually Changes (And Why Developers Care)
At a technical level, MCP defines a standard way for AI clients to connect to external systems through tools. A tool has a name, a schema, and a predictable input/output contract. MCP clients can discover these tools, understand what they do, and invoke them in a consistent way. That description sounds almost boring. But the implications are larger than they first appear.
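To make that concrete, here is a sketch of what a single tool description might look like when a server advertises its capabilities. The field names (name, description, inputSchema) follow the MCP tool definition pattern; the PLM-flavored get_bom tool itself is a hypothetical example, not something any vendor ships today.

```python
# Sketch of one MCP tool definition, as a Python dictionary.
# Field names follow the MCP tool schema; the PLM tool is hypothetical.
get_bom_tool = {
    "name": "get_bom",
    "description": "Return the BOM for an item in a given view (EBOM or MBOM).",
    "inputSchema": {
        "type": "object",
        "properties": {
            "item_number": {"type": "string"},
            "revision": {"type": "string"},
            "view": {"type": "string", "enum": ["EBOM", "MBOM"]},
        },
        "required": ["item_number"],
    },
}
```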
The real value of MCP for AI developers isn’t that it replaces APIs (it doesn’t), but that it standardizes the interaction pattern between intelligent clients and complex systems. Instead of every application inventing its own way to expose capabilities, MCP proposes a shared language for describing what can be done and how it can be called.
The donation to an open foundation matters because protocols only succeed when they outgrow their original sponsor. Once a protocol becomes neutral infrastructure, developers can build on it without worrying about roadmap surprises or waking up one day to find the company has pivoted to something else entirely. We’ve seen this movie before with HTTP, OAuth, and container runtimes. The ones that survived became boring. The ones that stayed exciting usually died.
In other words, MCP is positioning itself as plumbing. And plumbing, when done right, is supposed to be invisible.
Why This Matters to PLM People (Even If You Don’t Care About AI)
Here’s where it gets interesting for those of us who’ve spent years in the PLM trenches.
For years, PLM vendors have been very protective of data and API access. On paper, every PLM system provides APIs. But when the rubber hits the road, there are many “details” that will convince you that using a “PLM suite” (the one on the left side of Fino’s picture above) is the better option.
PLM interoperability has always suffered from a structural problem: every system talks, but no two systems speak the same language in the same way. CAD systems expose geometry and assemblies. PLM systems expose revisions, lifecycles, and product structures. ERP systems expose items, costs, and supply constraints. Manufacturing systems expose routings, operations, and execution states. DBOM, EBOM, MBOM, SBOM, PBOM… it all gets very complex.
APIs give you access to all of this, sure. But access isn’t the same as coherence. Each integration becomes a custom negotiation about meaning: what a part is, what “released” actually means, how alternates behave, how effectivity applies across plants or serial ranges.
The real pain isn’t the initial integration, but what happens later. Over time, these integrations become fragile not because APIs break, but because assumptions drift. Someone upgrades a system. Someone adds a lifecycle state that didn’t exist when the integration was written. Someone introduces a new configuration rule the mapping logic never anticipated. Suddenly the integration still “works,” but it produces the wrong answer. I’ve watched teams spend months chasing these issues because nothing technically failed.
That’s why I find MCP interesting. Not as a PLM solution ready to deploy tomorrow, but as a different abstraction layer worth understanding, one with the potential to change the integration story.
MCP Reframes Integration as a Conversation, Not a Pipe
Traditional integrations think in terms of pipes. System A pushes data to system B through batch jobs, message queues, or REST endpoints. You build the plumbing, monitor the flow, and hope nothing clogs.
MCP thinks in terms of conversations. A client asks, “What can you do?” The system responds by describing its capabilities – its tools. The client invokes a specific capability with a defined contract, and the system responds in a predictable way.
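In protocol terms, that conversation is a pair of JSON-RPC exchanges. The sketch below, written as Python dictionaries, uses the MCP method names tools/list and tools/call; the payloads are illustrative and not copied from any real server.

```python
# Simplified sketch of the MCP "conversation" as JSON-RPC messages.
# Method names (tools/list, tools/call) come from the MCP spec;
# the tool and its arguments are invented for illustration.

discover_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The server answers by describing its capabilities (its tools).
discover_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "compare_bom",
                "description": "Compare two BOM revisions and explain the delta",
                "inputSchema": {"type": "object"},
            }
        ]
    },
}

# The client then invokes one capability against its defined contract.
invoke_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "compare_bom",
        "arguments": {"item_number": "ASM-100", "from_rev": "B", "to_rev": "C"},
    },
}
```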
This is where a subtle but important distinction appears. Traditional APIs expose data semantics: create, read, update, delete objects (the typical CRUD pattern, implemented in its modern form with REST APIs). GraphQL made it more structured, but it is fundamentally the same.
MCP-style tools expose work semantics. Think about compare, validate, propose, resolve, simulate. PLM has always been about processes, but we’ve spent decades exposing it primarily as data models.
If you translate this into PLM terms, something “clicks”. Instead of hundreds of generic endpoints requiring deep internal knowledge, a PLM system could expose tools representing actual engineering processes: retrieving a BOM in a specific context, comparing two states and explaining the delta, proposing a change without committing it, attaching tasks and comments with traceability, resolving alternates based on policy.
Each of these is not just a data fetch. It’s a meaningful operation that encapsulates business logic, validation rules, and context.
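As a sketch of what that could look like, here is a minimal PLM-side MCP server built with the FastMCP helper from the official MCP Python SDK. The tool names and bodies are hypothetical placeholders standing in for real PLM logic; the point is that each tool is a meaningful operation, not a raw data fetch.

```python
# Minimal sketch of a PLM-side MCP server exposing work semantics rather
# than CRUD. Assumes the official MCP Python SDK (FastMCP helper) is
# installed; the tool bodies are placeholders for real PLM calls.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("plm-tools")


@mcp.tool()
def compare_bom(item_number: str, from_rev: str, to_rev: str) -> dict:
    """Compare two BOM revisions and explain the delta."""
    # Placeholder: a real server would query the PLM API and diff structures.
    return {"added": [], "removed": [], "changed": [], "summary": "no differences"}


@mcp.tool()
def propose_change(item_number: str, description: str) -> dict:
    """Draft an engineering change without committing it."""
    # Placeholder: a real server would create a draft ECO in the PLM system.
    return {"eco_id": "DRAFT-001", "status": "proposed", "item": item_number}


if __name__ == "__main__":
    mcp.run()
```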
This matters because engineering work is increasingly conversational. Engineers think in questions, not endpoints. APIs answer how to get data. MCP-style tools start to answer what the system can actually help accomplish. Check my earlier article – Future PLM integrations from REST API to MCP Servers and agentic workflows
Why MCP Fits the N×M Integration Reality of PLM
PLM ecosystems are inherently N×M problems: many systems, many consumers, many workflows. Every new consumer—another dashboard, another supplier portal, another AI experiment—forces a choice between rebuilding integrations or duplicating logic.
Neither option scales. The cost isn’t the first integration. It’s the tenth, years later, when nobody remembers why certain decisions were made.
MCP explicitly targets this pattern. A system exposes tools once. Any compliant client can discover and invoke them. You’re no longer building point-to-point integrations; you’re publishing capabilities.
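A compliant client does not need to know anything about the server beyond the protocol. Below is a minimal client sketch using the MCP Python SDK, assuming the hypothetical PLM server above is saved as plm_tools_server.py; any other compliant client could do the same discovery without a custom integration.

```python
# Minimal MCP client sketch: discover tools, then invoke one.
# Assumes the MCP Python SDK and the hypothetical plm_tools_server.py above.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["plm_tools_server.py"])


async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discovery: ask the server what it can do.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Invocation: call one capability with a defined contract.
            result = await session.call_tool(
                "compare_bom",
                {"item_number": "ASM-100", "from_rev": "B", "to_rev": "C"},
            )
            print(result)


asyncio.run(main())
```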
This is exactly where PLM integrations collapse today—not because the technology is missing, but because the economics don’t work.
Why Now?
It’s not accidental that MCP is emerging now. Agentic applications need structured, deterministic ways to call systems. UI-driven automation doesn’t scale. Screen scraping is fragile. Free-form prompts are dangerous. The industry is converging on “tool calling” as a control layer between intelligence and execution.
PLM happens to be a domain full of valuable, structured operations, but without a standardized way to expose them. MCP didn’t come from PLM, but it arrives at exactly the moment PLM needs a better abstraction than “another API.”
Where MCP Helps—and Where It Doesn’t
MCP standardizes interaction patterns, not just transport. It doesn’t align data models, but it does reduce friction around how capabilities are discovered and invoked.
It’s designed for agentic workflows, but that also makes it suitable for any scenario where intent matters more than raw data transfer.
It encourages composability. Workflows become compositions of capabilities rather than brittle sequences of calls.
It also makes governance more explicit. Tools can be permissioned, audited, and scoped in ways ad-hoc integrations rarely are.
But MCP does not solve semantic alignment. It does not eliminate the complexity of long-running change processes with partial states and branching logic. PLM change is not a single call—it’s a journey over time.
It also raises real control-plane questions. Safe adoption requires distinguishing read-only tools from “propose” tools, and those from “apply” tools, with explicit approval gates and audit trails. MCP makes this visible—but it doesn’t enforce it for you.
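One way an adopter might make that distinction concrete is a policy layer in front of tool calls. The sketch below is entirely hypothetical: the policy table, the approval check, and the tool names are not part of MCP; they are the kind of control plane a PLM team would have to build around it.

```python
# Hypothetical control-plane sketch: classify tools by impact and require
# explicit human approval before "apply" tools run. Not part of MCP itself.
TOOL_POLICY = {
    "get_bom": "read",
    "compare_bom": "read",
    "propose_change": "propose",
    "apply_change": "apply",
}


def approved_by_human(tool_name: str, arguments: dict) -> bool:
    # Placeholder for a real approval workflow (review step, sign-off, ECO gate).
    return False


def guarded_call(session_call, tool_name: str, arguments: dict):
    """Invoke a tool only if policy allows it; log every call for audit."""
    policy = TOOL_POLICY.get(tool_name, "apply")  # unknown tools treated as high impact
    if policy == "apply" and not approved_by_human(tool_name, arguments):
        raise PermissionError(f"{tool_name} requires explicit approval before it can run")
    print(f"AUDIT: {policy} tool '{tool_name}' called with {arguments}")
    return session_call(tool_name, arguments)
```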
And MCP doesn’t fix weak foundations. It sits on top of existing APIs and models. If those are messy, MCP will faithfully expose the mess.
Why This Connects to PLM’s Unfinished Business
For decades, PLM tried to solve interoperability through centralization: one system, one model, one truth. The so-called SSOT (Single Source of Truth) approach connects everything to the same PLM suite. That worked in closed worlds. It doesn’t work in distributed ecosystems.
MCP represents a coordination philosophy instead: systems remain independent, but agree on how they describe and expose what they can do.
PLM already knows how to define meaningful operations—release, revise, compare, substitute, effectivize. What it hasn’t done well is externalize those operations as reusable capabilities outside its own UI.
MCP doesn’t force PLM to change its logic. It challenges PLM to expose it.
A Practical Thought Experiment
Imagine a near future where PLM exposes tools for exploring BOMs and proposing changes, ERP exposes tools for cost and availability simulation, and manufacturing exposes tools for routing and capacity checks. All these tools are connected to a collaborative ECO workspace with agentic access.
An engineer asks a question. A workflow (or an assistant) coordinates calls across systems. Humans remain in control, working in a BOM sandbox that allows simultaneous editing. Decisions become faster, but also more transparent. Nothing here requires PLM to become an AI platform. It requires PLM to become callable.
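To illustrate the coordination step, here is a hypothetical sketch of how an assistant or workflow engine might combine those tool surfaces into one answer. The server labels, tool names, and the call_tool helper are all invented for illustration; the only point is that the systems stay independent while the work gets composed.

```python
# Hypothetical orchestration sketch for the thought experiment above:
# three tool surfaces (PLM, ERP, manufacturing), one question, one human decision.
# The call_tool helper and all tool names are invented for illustration.
async def evaluate_change(call_tool, item: str, from_rev: str, to_rev: str) -> dict:
    delta = await call_tool(
        "plm", "compare_bom",
        {"item_number": item, "from_rev": from_rev, "to_rev": to_rev},
    )
    cost = await call_tool(
        "erp", "simulate_cost",
        {"item_number": item, "bom_delta": delta},
    )
    capacity = await call_tool(
        "mfg", "check_capacity",
        {"item_number": item, "bom_delta": delta},
    )
    # The assistant assembles the picture; a human still makes the call.
    return {
        "delta": delta,
        "cost_impact": cost,
        "capacity_impact": capacity,
        "recommendation": "review in the ECO workspace before applying",
    }
```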
What is my conclusion?
Here is an uncomfortable truth. MCP won’t fix PLM interoperability by itself. But it exposes something PLM has avoided for years: interoperability fails not because we lack APIs, but because we lack shared interaction contracts and a collaborative workspace that connects both processes and data. APIs answer how. MCP-style tools start to answer what.
If PLM wants to remain relevant in an AI-shaped engineering world, the path forward isn’t bolting AI onto approval workflows. It’s rethinking how PLM capabilities are exposed, composed, and governed beyond the UI: in a collaborative workspace connected through conversational, agentic workflows rather than restrictive APIs.
MCP is on a path to win the standards race. But the idea behind it, that systems should publish meaningful, discoverable operations instead of raw pipes, is no longer optional.
Because every few years, PLM announces another integration breakthrough. And every few years, teams rebuild the same fragile connections with new technology.
Maybe the problem was never the pipe. Maybe it was always about the seamless, semantic connection and the collaborative workspace?
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights.


