Q4 2025 is almost here, which makes it the perfect time to start thinking about 2026 roadmaps. It looks like AI will continue to dominate priorities in the year ahead.
AI spending is no longer experimental. Enterprises poured about $24 billion into AI in 2025, and the market is on track to reach $150-170 billion by 2030. But the headline numbers hide a deeper shift: AI-native companies are running at a different speed. Their product cycles are often ten times faster than those of traditional enterprises because their whole stack was built assuming AI augmentation from day one.
Here is the conclusion from one of my earlier articles, Why “Just Add AI” to PLM Won’t Work:
Advanced LLMs and agentic AI are powerful, but they’re just tools. True transformation comes from solving the right problems with the right mix of data, process, people, and technology. Before starting any AI project, ask: Is our data ready? Is AI part of our business strategy? Do we have the culture and discipline to sustain it? Build solid foundations, and the tech will deliver value.
For PLM vendors, the AI development speed gap is not about working harder or buying more GPUs. It’s architectural. Product-lifecycle workflows are among the most exposed: they span design, engineering, change control, suppliers, compliance, service and maintenance. If PLM platforms remain document-centric repositories while competitors evolve into memory-aware, compliance-ready, hybrid-AI platforms, the gap in customer value will become impossible to close later.
AI Shifts Every PLM Roadmap Must Recognize
Over the weekend, I was catching up on AI trends that PLM development (and not only PLM) must recognize. Here are the big AI inflections that, in my view, matter for PLM strategy.
AI Compliance
The EU AI Act is already in force for new general-purpose systems and will tighten further in August 2026 for high-risk applications. Several US states (e.g., the California AI Transparency Act) are moving just as fast. Penalties under the EU AI Act can reach seven percent of global revenue. For PLM, this isn’t just a checkbox for the legal department; it is a chance to make traceability and explainability native to the platform. Imagine being able to show, in a few clicks, the full digital thread of how a design decision moved through ECO, MBOM release and supplier hand-off. That’s not overhead; that’s strategic infrastructure.
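To make this concrete, here is a minimal sketch of what an audit-ready trace could look like. The entity names and fields are my assumptions for illustration, not any specific PLM API: every step of a change carries who, what, when and why, so an auditor (or an AI agent) can replay the decision path on demand.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class TraceEvent:
    """One step in the digital thread of a design decision."""
    entity: str        # e.g. "ECO-1042", "MBOM-Rev-C", "Supplier-ACME"
    action: str        # e.g. "approved", "released", "handed off"
    actor: str         # person or AI agent that performed the action
    rationale: str     # why the decision was made (explainability)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class DigitalThread:
    """Append-only audit trail that can be replayed for regulators or customers."""

    def __init__(self) -> None:
        self._events = []

    def record(self, event: TraceEvent) -> None:
        self._events.append(event)

    def trace(self, entity: str) -> list:
        """Return every recorded step that touches a given entity."""
        return [e for e in self._events if e.entity == entity]


# Example: replaying how a change moved from ECO to MBOM release to supplier hand-off
thread = DigitalThread()
thread.record(TraceEvent("ECO-1042", "approved", "chief.engineer", "thermal redesign"))
thread.record(TraceEvent("MBOM-Rev-C", "released", "plm.agent", "auto-release after ECO-1042 approval"))
thread.record(TraceEvent("Supplier-ACME", "handed off", "sourcing.lead", "qualified alternate source"))

for step in thread.trace("ECO-1042"):
    print(step.timestamp.isoformat(), step.action, "by", step.actor, "-", step.rationale)
```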
Memory Advantage
Early-deployed AI systems that can remember context, not just retrieve files, will develop institutional knowledge that competitors cannot copy later. A PLM that links CAD revisions, ECO approvals, cost roll-ups, quality events and field-service feedback in a graph-based product-memory layer will, by mid-2026, already “know” a customer’s preferred materials, typical change-order delays, supplier risk patterns and more. Competitors starting a year later can’t fast-forward that learning.
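As an illustration only (the node and edge types below are my assumptions, not a reference schema), a product-memory layer can be thought of as a typed graph where CAD revisions, ECOs, suppliers and quality events are nodes, and the relationships between them are edges that accumulate over time:

```python
from collections import defaultdict


class ProductMemoryGraph:
    """A minimal typed graph: nodes are lifecycle objects, edges are relationships
    that accumulate as the organization designs, approves and builds products."""

    def __init__(self):
        self.nodes = {}                      # node_id -> {"type": ..., **attributes}
        self.edges = defaultdict(list)       # node_id -> [(relation, target_id), ...]

    def add_node(self, node_id, node_type, **attrs):
        self.nodes[node_id] = {"type": node_type, **attrs}

    def link(self, source, relation, target):
        self.edges[source].append((relation, target))

    def neighbors(self, node_id, relation=None):
        """Follow edges from a node, optionally filtered by relation type."""
        return [t for r, t in self.edges[node_id] if relation is None or r == relation]


# Example: connect a CAD revision to its ECO, supplier and a field-quality event
memory = ProductMemoryGraph()
memory.add_node("CAD-775-revB", "cad_revision", material="AL-6061")
memory.add_node("ECO-1042", "eco", cycle_days=12)
memory.add_node("Supplier-ACME", "supplier", on_time_rate=0.87)
memory.add_node("QE-332", "quality_event", severity="minor")

memory.link("ECO-1042", "modifies", "CAD-775-revB")
memory.link("CAD-775-revB", "sourced_from", "Supplier-ACME")
memory.link("Supplier-ACME", "caused", "QE-332")

# The same traversal an AI agent could use to recall supplier exposure for a design
print(memory.neighbors("CAD-775-revB", relation="sourced_from"))   # ['Supplier-ACME']
```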
Outcome-Based Economics
The industry is quickly becoming skeptical of seat-based licensing. As AI agents mature, companies want to know how many validated changes, approved MBOM roll-ups, or scrap-reduction cycles a platform actually completed. PLM vendors will need to expose task-level telemetry and ROI dashboards so customers can link spending directly to measurable business outcomes.
On-Device and Hybrid Computing
This is an interesting one, which takes me back to the 1990s when CAD moved to PCs. With NPU-equipped laptops arriving, engineers will expect local copilots for sensitive CAD and supplier IP — faster previews, offline DXF generation, privacy-preserving geometry checks — while the heavy reasoning stays in the cloud. PLM agents have to be designed for this hybrid edge-plus-cloud world.
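Here is a sketch of how such a hybrid agent could decide where a task runs, assuming a simple policy; the task names, thresholds and compute budget are illustrative, not a vendor API. The idea is to keep sensitive geometry and quick checks on the local NPU while heavy multi-step reasoning goes to the cloud.

```python
from dataclasses import dataclass


@dataclass
class AgentTask:
    name: str
    contains_sensitive_ip: bool   # e.g. native CAD geometry, supplier pricing
    estimated_gflops: float       # rough compute requirement
    needs_long_context: bool      # multi-hour reasoning over many documents


LOCAL_NPU_BUDGET_GFLOPS = 500.0   # illustrative capability of an NPU-equipped laptop


def route(task: AgentTask) -> str:
    """Decide whether a PLM agent task runs on the local device or in the cloud."""
    if task.contains_sensitive_ip:
        return "local"            # IP never leaves the engineer's machine
    if task.needs_long_context or task.estimated_gflops > LOCAL_NPU_BUDGET_GFLOPS:
        return "cloud"            # heavy reasoning stays server-side
    return "local"                # fast previews, DXF generation, geometry checks


tasks = [
    AgentTask("geometry_interference_check", True, 120.0, False),
    AgentTask("multi_supplier_sourcing_analysis", False, 2500.0, True),
    AgentTask("offline_dxf_preview", False, 40.0, False),
]
for t in tasks:
    print(f"{t.name}: run on {route(t)}")
```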
Premium and Commodity AI
Not everyone in an engineering organization needs a Ferrari-class copilot. Real productivity gains often come from the top ten to fifteen percent of power users — chief engineers, compliance managers, sourcing leads — who can orchestrate multi-hour autonomous agents. The rest of the staff often do fine with light copilots for drafting or search. PLM vendors will need to create role-based tiers and help customers train those high-leverage users to delegate effectively to AI.
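One way to express such role-based tiers is a simple policy table. The roles, capabilities and limits below are assumptions for illustration, not a pricing recommendation.

```python
# Hypothetical tier policy: which AI capabilities each engineering role receives.
AI_TIERS = {
    "premium": {
        "roles": ["chief_engineer", "compliance_manager", "sourcing_lead"],
        "capabilities": ["autonomous_multi_hour_agents", "cross_domain_reasoning"],
        "max_agent_runtime_hours": 8,
    },
    "standard": {
        "roles": ["design_engineer", "manufacturing_engineer"],
        "capabilities": ["drafting_copilot", "semantic_search"],
        "max_agent_runtime_hours": 1,
    },
}


def tier_for(role: str) -> str:
    """Return the AI tier assigned to a given engineering role."""
    for tier, policy in AI_TIERS.items():
        if role in policy["roles"]:
            return tier
    return "standard"   # safe default for everyone else


print(tier_for("compliance_manager"))   # premium
print(tier_for("technical_writer"))     # standard
```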
Turning Compliance from Burden into Moat
By August 2026 the new high-risk AI rules in Europe will be fully active, with similar regulations appearing in the U.S. PLM vendors that can provide audit-ready digital threads, including explainable change workflows and supplier certifications, will not just keep customers out of trouble; they will make their platforms the default compliance backbone for their industries. The first to deliver this by mid-2026 will set the de facto standards that everyone else will have to follow.
Product-Memory Is the Next PLM Differentiator
Memory is not a feature you switch on; it’s an asset that compounds. A PLM that captures how a company designs, approves and builds products learns over time. By linking all those signals into a product-memory graph, the system can anticipate ECO delays, flag risky suppliers or even suggest cost-effective material substitutions. The competitive edge comes from the months of organizational learning that accumulate between now and 2026.
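As a toy example of how accumulated memory turns into foresight (the data and thresholds are purely illustrative), a system that has recorded past ECO cycle times and supplier quality events can surface simple risk signals long before a human would notice the pattern:

```python
from statistics import mean

# Historical signals the product-memory layer has accumulated (illustrative data)
eco_cycle_days = {"Supplier-ACME": [9, 14, 21, 18], "Supplier-Borealis": [6, 7, 5]}
quality_events = {"Supplier-ACME": 4, "Supplier-Borealis": 0}


def supplier_risk(supplier: str, delay_threshold: float = 10.0) -> str:
    """Flag suppliers whose history suggests ECO delays or quality exposure."""
    avg_delay = mean(eco_cycle_days.get(supplier, [0]))
    events = quality_events.get(supplier, 0)
    if avg_delay > delay_threshold and events > 2:
        return "high"
    if avg_delay > delay_threshold or events > 2:
        return "medium"
    return "low"


for supplier in eco_cycle_days:
    print(supplier, "risk:", supplier_risk(supplier))
```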
Proving Value with Outcome-Linked ROI
As pricing shifts away from seats, customers will ask new questions: how many validated releases closed this quarter? How much scrap did automated MBOM comparisons prevent? PLM roadmaps need to instrument every release, approval, cost roll-up and supplier hand-off so the answers are built into the product. This also opens the door to future outcome-linked pricing models (for example, paying per validated engineering change or per approved supplier transfer), something CFOs will appreciate.
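A hedged sketch of what that instrumentation might look like: each completed task emits an event with its estimated value, and an ROI summary rolls up per quarter. The event names and dollar figures are hypothetical.

```python
from dataclasses import dataclass
from collections import Counter


@dataclass
class OutcomeEvent:
    kind: str                 # "validated_release", "mbom_compare", "supplier_handoff"
    estimated_value_usd: float


events = [
    OutcomeEvent("validated_release", 1200.0),
    OutcomeEvent("mbom_compare", 300.0),      # scrap avoided by catching a mismatch
    OutcomeEvent("mbom_compare", 300.0),
    OutcomeEvent("supplier_handoff", 800.0),
]


def quarterly_roi(events, platform_cost_usd: float) -> dict:
    """Aggregate task-level telemetry into an outcome-linked ROI summary."""
    counts = Counter(e.kind for e in events)
    value = sum(e.estimated_value_usd for e in events)
    return {
        "completed_tasks": dict(counts),
        "estimated_value_usd": value,
        "roi": round(value / platform_cost_usd, 2),
    }


print(quarterly_roi(events, platform_cost_usd=2000.0))
```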
Embracing Hybrid Agents and Tiered AI Adoption
Engineers working on sensitive geometry will welcome on-device copilots that run locally yet remain connected to the cloud for heavier reasoning. At the same time, companies will have to be deliberate about who gets the premium autonomous agents. Giving those to the right ten percent of high-impact employees often produces two-to-three-times productivity gains, which is far more valuable than giving every employee a basic copilot. PLM vendors must not only provide the technology but also guidance on this new style of AI-delegation inside engineering teams.
What is my conclusion?
Now is the time for PLM vendors to plan AI for their 2026 roadmaps; this is a call to act in Q4 2025.
The competitive lines for 2026 are already being drawn. Decisions made in the next few quarters about data models, compliance infrastructure, memory services and hybrid-agent architecture will determine which PLM platforms build compounding advantages and which spend the following years trying to catch up.
A governance-ready digital thread, a product-memory graph that can feed AI agents, outcome-instrumented workflows and a hybrid edge-aware agent framework are no longer optional features — they are the foundation of a PLM roadmap that can thrive in the AI era.
The message for PLM leaders is simple: start now. The window to accumulate these advantages is closing fast, and the platforms that get there first will own the productivity, compliance and knowledge-graph moats of 2026-2028.
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools, including PDM, PLM, and ERP capabilities. Interested in the OpenBOM AI Agent Beta? Check with me about the future of agentic engineering workflows.
With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinion can be unintentionally biased.