Happy New Year!
I took a short break over the last 10 days to recharge, spend time with my family, read, and reflect before diving into 2026. Those pauses are rare during the year, but they are often the moments when things start to connect more clearly. Today, I want to share the main topic and agenda I plan to focus on in 2026 on Beyond PLM, which continues to be my personal “thinking tank” for exploring the future of PLM.
I also want to take a moment to thank everyone who reads my articles, comments on posts, and engages in discussions with me. I genuinely appreciate the conversations and the different perspectives you bring. While I remain very busy with OpenBOM, I’m planning to introduce new ways to engage and interact through Beyond PLM in 2026 as well.
2025: When AI Took Over the PLM Conversation
Looking back at 2025, it is hard to remember another year when AI dominated PLM conversations so completely. Copilots appeared everywhere. Chat interfaces became the default demo centerpiece. Every roadmap suddenly included “intelligence,” “assistants,” and “AI-powered” workflows. If you followed PLM conferences, press releases, or LinkedIn discussions, you could easily conclude that PLM had entered a new era.
And yet, when I stepped away from slides and announcements and focused on how engineering and manufacturing teams were actually working, something felt off. The language had changed, but the work had not. Engineers were still manually checking BOMs before release. Manufacturing teams were still nervous about handoffs. Procurement still received spreadsheets that someone had “validated one more time” just in case.
AI visibility went up. Everyday workflow experience stayed stubbornly familiar.
That gap between promise and reality is where this article starts, and it is also the reason I believe 2026 needs a reset in how we think about AI in PLM.
There is no question that 2025 pushed AI to the center of PLM strategy. Vendors rushed to demonstrate AI readiness, and to be fair, some delivered meaningful early capabilities. I wrote extensively about what was actually shipped versus what was announced, and why many efforts still missed the hardest problems:
Building PLM Agents: Why Everyone Is Announcing AI and Why Almost Everyone Is Missing the Point
Most AI efforts in PLM focused on adding a new layer on top of existing systems. Copilots were designed to answer questions. Assistants promised to summarize, explain, or generate content. Chat interfaces were positioned as a friendlier way to interact with complex systems.
From a technology perspective, this made sense. Large language models had matured enough to be useful. The tooling was available. The pressure to “do something with AI” was intense.
But PLM workflows are not limited by how fast people can ask questions. They are limited by how much friction is embedded in data and processes that evolved long before AI was an option.
This is why, despite all the AI noise, daily work largely stayed the same.
What Customers Really Wanted from AI in PLM
Throughout 2025, I spent a lot of time talking with customers specifically about AI—not in the abstract, but in the context of BOMs, PLM workflows, and day-to-day execution. I also summarized these conversations in a dedicated article after explicitly asking customers how they think about AI in BOM and PLM:
What We Learned Asking Customers About AI for BOM and PLM
What I heard was remarkably consistent.
Customers were not asking for AI features. They were asking for relief.
They talked about fragile BOMs that required manual cleanup before every release. They talked about stressful handoffs between engineering, manufacturing, and procurement. They talked about Excel files that had to be validated before sending, because nobody trusted the data enough to let it go uninspected. They talked about the mental load of being the last person in the chain who might catch a mistake.
Many customers hoped AI would magically solve these problems. But what they were really describing was not a lack of intelligence. It was persistent friction baked into existing workflows.
The expectation that AI would “fix it” was less about futuristic ambition and more about fatigue.
The Real Lesson of 2025: Friction Matters More Than Intelligence
If I had to distill one lesson from 2025, it would be this: the biggest opportunity for AI in PLM is not to be smarter, but to make work lighter.
We tend to frame AI progress in terms of intelligence. Smarter answers. Better predictions. More autonomy. But intelligence is not what slows PLM workflows down.
Friction does.
Friction lives in data that cannot be trusted without human verification. It lives in workflows designed around caution rather than confidence. It lives in activities people perform not because they add value, but because they are afraid of what might happen if they don’t.
This is why “more intelligence” is not the lowest-hanging fruit. Clever AI layered on top of fragile workflows simply makes fragility more visible. Confidence, on the other hand, changes behavior.
When AI works well, people don’t notice how intelligent it is. They notice that work feels lighter. They stop double-checking. They stop pausing releases. They stop carrying responsibility that never should have been theirs in the first place.
That is a very different success metric than “AI answered my question.”
Why AI Copilots Are Not the Starting Point
At this point, it is important to acknowledge a reasonable counter-argument. Copilots do matter. They can be genuinely useful. But they come with an important condition that is often ignored.
Copilots depend entirely on context.
Large language models do not magically understand product data. They rely on the context we provide. If that context is incomplete, inconsistent, or fragile, copilots amplify noise as efficiently as they amplify signal.
On broken data, copilots increase cognitive load. Instead of asking, “Is this data correct?” users now ask, “Is the AI correct about incorrect data?” That is not progress.
This is why copilots work best after data foundations are fixed. When data is structured, connected, and trustworthy, copilots become a powerful interface layer. They help users explore complex structures, explain relationships, and reason about changes.
But copilots are not a fix for broken workflows, and they are certainly not a fix for broken data. Treating them as such only postpones the real work.
And broken data, not lack of AI, is what held most workflows back in 2025.
Agentic AI in PLM: Responsibility, Not Autonomy
This brings me to agentic AI, a term that became popular in 2025 and was also widely misunderstood.
Agentic AI is not about autonomy. It is not about replacing human decisions. And it is not about building impressive automation demos.
Agentic AI is about responsibility.
Specifically, it is about taking responsibility for the parts of workflows that demand sustained manual effort: continuous attention, consistency checks, and vigilance. These are exactly the tasks humans are bad at sustaining over time.
In a PLM context, this shows up in very practical ways: monitoring EBOM and MBOM alignment, watching the impact of changes across structures, preventing premature releases, and reducing the number of “just in case” checks that accumulate over time.
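To make this concrete, here is a minimal sketch of what “taking responsibility” could look like in code. Everything here is hypothetical: the part numbers, the flat part-to-quantity BOM shape, and the function names are invented for illustration, not taken from any real PLM system.

```python
# Hypothetical sketch: an agent-style check that flags EBOM/MBOM
# misalignment instead of a human eyeballing spreadsheets before release.

def bom_diff(ebom: dict[str, int], mbom: dict[str, int]) -> list[str]:
    """Compare part -> quantity maps and report every discrepancy."""
    issues = []
    for part, qty in ebom.items():
        if part not in mbom:
            issues.append(f"{part}: in EBOM but missing from MBOM")
        elif mbom[part] != qty:
            issues.append(f"{part}: EBOM qty {qty} != MBOM qty {mbom[part]}")
    for part in mbom:
        if part not in ebom:
            issues.append(f"{part}: in MBOM but not in EBOM")
    return issues

def release_ready(ebom: dict[str, int], mbom: dict[str, int]) -> bool:
    """A 'premature release' gate: block release while discrepancies exist."""
    return not bom_diff(ebom, mbom)

ebom = {"PCB-100": 1, "CAP-10uF": 4, "ENCLOSURE-A": 1}
mbom = {"PCB-100": 1, "CAP-10uF": 6, "LABEL-SN": 1}
print(release_ready(ebom, mbom))  # False: the agent blocks the release
for issue in bom_diff(ebom, mbom):
    print(issue)
```

The point is not the diff itself, which is trivial, but who runs it: an agent that performs this check continuously, on every change, is carrying the vigilance that a human would otherwise carry as “just in case” validation.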
None of this removes human judgment. It removes unnecessary burden.
Agentic AI does not replace decision-makers. It creates conditions where decisions can be made with less stress and fewer surprises.
That distinction matters more than any technical definition.
Why Agentic AI Requires Product Memory
There is another critical point that became clear in 2025. Agents cannot operate correctly on fragmented data.
They do not work on files. They do not reason over exports. They cannot infer intent from disconnected spreadsheets.
Agents need complete context. They need relationships, states, dependencies, and history. They need to understand not just what the data is, but how it fits together and how it evolves.
This is what I increasingly refer to as product memory.
Product memory is what allows agents to reason instead of guess. Without it, agents generate noise and false positives. With it, they quietly disappear into workflows, doing their job without demanding attention.
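As a thought experiment, product memory can be sketched as a small graph that holds what files and exports cannot: relationships, states, and history in one queryable place. The item numbers, states, and class names below are invented for illustration; real product memory would be far richer, but the shape of the questions an agent can answer is the same.

```python
# Hypothetical sketch of "product memory": not a file or an export, but a
# connected structure of items, states, relationships, and change history.
from dataclasses import dataclass, field

@dataclass
class Item:
    number: str
    state: str = "in-work"                           # e.g. in-work / released
    uses: list[str] = field(default_factory=list)    # child item numbers
    history: list[str] = field(default_factory=list) # state transitions

class ProductMemory:
    def __init__(self) -> None:
        self.items: dict[str, Item] = {}

    def add(self, item: Item) -> None:
        self.items[item.number] = item

    def change_state(self, number: str, new_state: str) -> None:
        item = self.items[number]
        item.history.append(f"{item.state} -> {new_state}")
        item.state = new_state

    def where_used(self, number: str) -> list[str]:
        """Impact question an agent can answer: which parents use this item?"""
        return [i.number for i in self.items.values() if number in i.uses]

    def unreleased_children(self, number: str) -> list[str]:
        """Readiness check: which children would block a release?"""
        return [c for c in self.items[number].uses
                if self.items[c].state != "released"]

pm = ProductMemory()
pm.add(Item("ASM-1", uses=["PRT-10", "PRT-11"]))
pm.add(Item("PRT-10"))
pm.add(Item("PRT-11"))
pm.change_state("PRT-10", "released")
print(pm.where_used("PRT-10"))          # ['ASM-1']
print(pm.unreleased_children("ASM-1"))  # ['PRT-11']
```

Notice that neither question can be answered from a disconnected spreadsheet: “where used” requires relationships, and “unreleased children” requires both relationships and states. That is the difference between an agent reasoning and an agent guessing.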
Ironically, the better agentic AI works, the less visible it becomes—and that is exactly what we should want.
What 2026 Demands from PLM Workflows
If 2025 was about adding AI layers, 2026 needs to be about something harder and more fundamental.
2026 will be about rethinking PLM workflows from first principles.
Not optimizing them. Not automating them blindly. But redesigning them with a clear understanding of what humans should do and what AI should take responsibility for.
The key shift is not technical. It is conceptual.
We need to move from asking, “What AI features can we add?” to asking, “Which parts of these workflows should humans no longer be responsible for?”
Many workflow steps exist only because data is fragile. Validating Excel before sending it to procurement is not a value-adding activity. It is a symptom of mistrust.
In 2026, AI should take responsibility for consistency, readiness, and vigilance. Humans should focus on judgment, design, and decisions.
This implies a different set of expectations: less AI theater, fewer features, more invisible assistance, lighter and more granular workflows, and systems that help people achieve their goals rather than systems that demand attention just to function.
This is not about doing more with AI. It is about carrying less.
What is My Conclusion?
Where does Beyond PLM go in 2026? That is probably the question you want answered on January 3rd, 2026. I believe this will be the year PLM stops talking about how AI will change it and starts actually feeling different to work with. That shift is about changing workflows and bringing data into them.
Beyond PLM has always been ahead in exploring what is possible in every sense: technology, product, go-to-market, sales, and more. That will be the dominant theme of Beyond PLM in 2026. I want to explore what is possible, test the limits, bring people together to talk about it, and help you find answers about the intersection of PLM and AI, carefully tuned for quick consumption.
I will explore what that actually means: product memory, agentic workflows, human-centered PLM, and, perhaps most importantly, why PLM needs to change its mental model if it wants to stay relevant and expand in the AI era.
This is not about predicting the future. It is about paying attention to what 2025 already taught us and having the discipline to respond thoughtfully.
The conversation is just beginning.
Just my thoughts…
Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights.
