A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

Are We Re-Architecting PLM for AI, or Rethinking the Work It Does?

Oleg
20 December, 2025 | 8 min for reading

A reader wrote to me recently with a question that echoes one I’ve been wrestling with for months:

Do PLM systems need to be re-architected to become AI- or LLM-native? Or will it be enough to bolt AI on top of the platforms we already have?

It’s a reasonable question. In fact, it sounds exactly like the kind of question we’ve asked every time a new technology wave arrived in PLM. At first glance, re-architecture feels like the logical answer. New capabilities demand new foundations. If AI is fundamentally different, surely PLM must be rebuilt to accommodate it.

But the more I thought about it, the more I realized that the question itself might be slightly off. Not because AI doesn’t matter, but because architecture may not be where the real problem sits.

The question isn’t which architecture will support AI — it’s whether our conception of work in PLM was ever ready for something as powerful as AI.

That tension immediately reminded me of another moment in PLM history, when the industry questioned whether it was “ready” for cloud and SaaS. That debate, as I explored earlier in Did PLM Miss the SaaS Trend — and Can It Catch the AI Wave?, didn’t ultimately turn out to be about servers, deployment models, or security checklists. It was about trust, control, and — once again — how work was understood and organized. The same pattern is emerging today.

Why Architecture Might Be the Wrong Starting Point

One way I’ve found useful to think about this shift is through the lens of modern AI agents.

Strip away the marketing language, and an agent is simply an AI system that can be given a goal, operate within clear boundaries, use specific tools, and return a result that can be reviewed and verified. It’s not a chatbot. It’s not intelligence embedded deep inside a platform. And it’s certainly not autonomous decision-making running loose inside enterprise systems.

It’s closer to hiring a junior assistant for a well-defined task and asking them to come back with something concrete — a comparison, a recommendation, a structured result.

What’s interesting about this model is that the intelligence of the agent matters less than the clarity of the work it’s given. An agent with vague instructions, unlimited access, and no clear outcome will fail spectacularly. A much more modest agent, given a narrowly defined task, clear inputs, and an expected output, can be surprisingly effective.

That observation alone should make us pause before jumping to re-architecting PLM systems. Agents don’t require monolithic redesigns. They require well-described work.
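To make that concrete, here is a minimal sketch, in Python and with purely hypothetical names, of what “well-described work” handed to an agent might look like: a goal, boundaries, tools, and a result a human can review. It illustrates the pattern, not any specific product.

```python
from dataclasses import dataclass

# Illustrative only: the names below (AgentTask, run_agent, the tool list)
# are hypothetical and do not refer to any specific PLM or AI product.

@dataclass
class AgentTask:
    goal: str                # what the agent is asked to achieve
    boundaries: list[str]    # what it may and may not touch
    tools: list[str]         # the specific capabilities it is allowed to use
    expected_output: str     # the shape of a reviewable result

def run_agent(task: AgentTask) -> dict:
    """Delegate one bounded task and return a result a human can review."""
    # A real system would hand the task description and allowed tools to an LLM;
    # here we only show the contract: bounded input in, reviewable output out.
    return {
        "goal": task.goal,
        "result": "structured comparison or recommendation goes here",
        "needs_review": True,  # the human keeps responsibility for the decision
    }

# A narrowly defined task with clear inputs and an expected output:
task = AgentTask(
    goal="Compare two supplier quotes for bracket P/N 4711 and recommend one",
    boundaries=["read-only access to cost and supplier data"],
    tools=["quote_lookup", "cost_comparison"],
    expected_output="a table of differences plus a short rationale",
)
print(run_agent(task))
```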

Thinking About AI as Delegated Work, Not Embedded Intelligence

Once you adopt this perspective, a subtle but important shift happens. AI stops looking like a feature that needs to be embedded inside PLM workflows, and starts looking like a participant in the work that happens around PLM.

This distinction matters.

PLM systems were historically designed to manage objects: parts, documents, revisions, configurations. Work happened around those objects, but rarely as something explicitly modeled. Humans navigated workflows, interpreted context, made trade-offs, and applied judgment. The system recorded outcomes — approvals, releases, changes — but not the work itself.

AI agents don’t operate that way. They need work to be described explicitly. They need to know what problem they are solving, what information matters, what constraints apply, and what a “good” result looks like. Without that, they either do nothing useful or produce confident nonsense.

This is where the mismatch between AI agent thinking and traditional PLM becomes impossible to ignore.

How PLM Workflows Hide the Work Being Done

Now contrast the agent model with how work is actually defined inside most PLM systems today.

Tasks are rarely explicit. Instead, they are implied by states, transitions, approvals, and checklists embedded in workflows that were designed for humans to navigate, not for work to be delegated. A change request moves from one status to another. A form gets filled. An approval is granted.

The system records that something happened, but not the reasoning that led there.

What decisions were made? What alternatives were considered? What assumptions were accepted? What trade-offs were rejected? These questions matter deeply in engineering and manufacturing, yet they rarely have a formal place in PLM.

Instead, they live in people’s heads, email threads, meetings, spreadsheets, and side conversations. Even when documentation exists, it’s often detached from the actual moment of decision-making. From the system’s perspective, the work appears almost magical: inputs go in, statuses change, outputs appear.

That model works — as long as humans remain the primary agents of work. It breaks down quickly when we try to delegate even small pieces of that work to AI.

Why Task Re-Engineering Comes Before AI

This is exactly why I argued earlier that PLM needs task re-engineering before it can have meaningful AI: The problem isn’t that PLM systems lack intelligence. It’s that they were never designed around tasks as first-class objects.

Looking at PLM workflows today, it becomes clear that they were designed to move objects forward, not to describe the work being done. Work is inferred from process states rather than described in terms of intent, inputs, constraints, and expected outcomes.

As long as tasks remain implicit — buried inside workflows and approvals — there is nothing coherent for an AI agent to take on. There is no clear unit of delegation, no defined responsibility, and no way to evaluate whether the work was done well or poorly.

Re-engineering tasks is not about automation or efficiency. It’s about making work explicit enough that it can be delegated, reasoned about, and learned from over time.

What an AI-Ready Task Actually Looks Like

An AI-ready task in PLM doesn’t start with a workflow or a status. It starts with intent.

Someone is trying to achieve a specific outcome: evaluate a proposed change, compare design alternatives, assess supplier risk, prepare a recommendation for a decision. What I find missing most often is not data, but a clearly described outcome.

An AI-ready task has a clear scope — what data is relevant and what is not. It has boundaries around what can be modified and what is read-only. It has explicit inputs: a BOM, a set of drawings, supplier information, cost data, historical decisions. And it has an expected output that can be reviewed: a structured comparison, a rationale, a set of options with trade-offs.

In this model, the task is no longer something that happens implicitly as objects move through states. It becomes a first-class unit of work that can be assigned, delegated, reviewed, and — crucially — learned from.
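As a rough illustration only (the field names and values below are hypothetical, not an existing PLM schema), such a first-class task could be described as data: intent, scope, what is read-only, explicit inputs, and an expected, reviewable output.

```python
from dataclasses import dataclass

# Hypothetical field names and values, not an existing PLM schema.

@dataclass
class PLMTask:
    intent: str                    # the outcome someone is trying to achieve
    scope: list[str]               # which data is relevant to this task
    read_only: list[str]           # what must not be modified
    inputs: dict[str, str]         # explicit inputs: BOM, drawings, cost data...
    expected_output: str           # what a reviewable result looks like
    assigned_to: str = "ai-agent"  # the task can be delegated like any other work

change_review = PLMTask(
    intent="Evaluate proposed change ECR-102 against cost and lead-time targets",
    scope=["BOM rev C", "current supplier quotes", "prior change decisions"],
    read_only=["released drawings"],
    inputs={"bom": "BOM-4711-revC", "cost_data": "Q3 supplier quotes"},
    expected_output="a set of options with trade-offs and a recommended path",
)
```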

This is where AI begins to fit naturally, without needing to be forced into existing workflows.

From Tasks to Product Memory

Once tasks are described this way, another missing piece becomes visible: memory.

Not file storage. Not version history. But memory of why decisions were made, what alternatives were considered, and what trade-offs were accepted.

Each completed task leaves behind more than an outcome. It leaves behind reasoning. Over time, this accumulation becomes product memory — not as a static repository, but as a living context that both humans and AI agents can draw from.

Without this memory, AI can only react to the present. With it, AI can participate meaningfully in future work, informed by past decisions rather than blind to them.

This is also why product memory cannot be retrofitted as an afterthought. It emerges naturally when tasks are explicit and reasoning is captured as part of doing the work, not as documentation created afterward.
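One way to picture this, as a sketch under the assumption that reasoning is recorded at the moment a task completes (all names here are hypothetical), is a record that stores the rationale, alternatives, and trade-offs alongside the outcome, so future work can query it as context:

```python
# Hypothetical names; the point is the shape of the record, not the storage.

product_memory: list[dict] = []

def complete_task(task_id: str, outcome: str, rationale: str,
                  alternatives: list[str], trade_offs: str) -> None:
    """Record not just the outcome of a task, but why it was chosen."""
    product_memory.append({
        "task": task_id,
        "outcome": outcome,
        "rationale": rationale,
        "alternatives_considered": alternatives,
        "trade_offs_accepted": trade_offs,
    })

def recall(keyword: str) -> list[dict]:
    """Give a future task, human or agent, the relevant past decisions as context."""
    return [r for r in product_memory if keyword.lower() in str(r).lower()]

complete_task(
    task_id="ECR-102",
    outcome="Switched bracket material to 6061-T6",
    rationale="Meets the strength target at lower cost",
    alternatives=["keep the current alloy", "change supplier only"],
    trade_offs="Slightly lower fatigue margin, accepted for this load case",
)
print(recall("bracket"))
```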

Why This Is Not a PLM Re-Architecture Problem

This is where the original question comes back into focus.

Framing AI as a “PLM re-architecture” problem assumes that intelligence must live inside the system of record, deeply embedded in its logic. But AI agents don’t work that way. They operate alongside systems, taking on bounded tasks, using existing data, and returning results that humans can review and decide on.

The shift required here is less about rebuilding PLM platforms and more about rethinking how work is defined, delegated, and remembered across them. Existing systems can evolve. New capabilities can be layered. But without task clarity, none of it will matter.

AI as a Participant in PLM Work

Seen this way, AI in PLM is not about replacing engineers or automating judgment. It’s about participation.

Agents prepare analyses, surface options, summarize context, and help structure decisions — while humans remain responsible for intent, accountability, and final choices. When tasks are clear and outcomes are explicit, AI becomes useful quickly. When they are not, AI simply amplifies the same ambiguity that already exists.

This is why some AI pilots feel magical and others feel disappointing. The difference is rarely the model. It’s the work.

A Familiar Pattern from the Cloud Transition

The cloud transition offers a useful reminder. PLM didn’t struggle with SaaS because the technology was insufficient. It struggled because trust models, workflows, and assumptions about control were deeply rooted in an earlier way of working.

AI presents a similar challenge. Whether PLM “catches the AI wave” will depend less on how fast platforms adopt new technology and more on how willing organizations are to rethink the work that PLM is meant to support.

What Is My Conclusion

The future of PLM is not smarter systems, but clearer work — work that leaves behind a trail of reasoning and decisions that becomes product memory. In that environment, AI feels less like a feature and more like a capable assistant who knows the product, understands past choices, and can help prepare the next decision.

That capability doesn’t come from re-architecting PLM, but from finally making work explicit enough to remember.

Just my thoughts…

Best, Oleg 

Disclaimer: I’m the co-founder and CEO of OpenBOM, an AI-native Collaborative Digital Thread platform connecting engineers and manufacturing teams.
