AI agents are in the air. You see them writing code, answering emails, processing support tickets, and creating marketing campaigns and blog posts. Everywhere you turn, they’re celebrated as tireless doers, streamlining repetitive work and automating old processes. I think all these examples are excellent, but they demonstrate a linear progression – we use AI to automate tasks.
But here’s the twist: what if there is a bigger opportunity than accelerating tasks with AI automation? My attention was caught by an article about OpenAI’s new consulting offering – OpenAI’s New AI Consulting Service Starts at $10 Million. To me, it sounded like OpenAI is repeating the Palantir model.
What if I could take all my Excel files and load them into a model that can later analyze supply chain risks, simulate product cost, or run a supply chain purchasing analysis?
Let’s talk about how future PLM can turn from file vaults and process automation into product memory and intelligence.
In my previous articles, PLM 2035 and Software 3.0 and Thoughts About McKinsey CEO AI Playbook and the Future of PLM Agentic Mesh, I explored the emergence of agentic AI and its impact on PLM systems. While automation and creation using AI are important, from my perspective, they represent just the surface. The deeper, more powerful opportunity lies not in making AI do more things, but in enabling AI to model, observe, reason, and simulate.
How can we build AI “in reverse,” starting not from prompts or interfaces, but from the product memory at the heart of tomorrow’s intelligent PLM systems?
A Reverse Use Case: Agentic AI in Action
Imagine a design engineer releases a new revision of a critical part in a mechatronic assembly. Without any prompt, an AI agent tied into the product memory kicks into action. It:
- Analyzes the impact across BOMs of multiple product families
- Flags a supplier who recently issued a shortage warning
- Projects potential procurement delays across three projects
- Recommends an alternate part used in a similar product last year that met compliance faster and was $4 cheaper per unit
This is not a response to a prompt. No one asked the agent to do anything; it simulated the outcome and acted because it understood the product, the process, and the context. This is agentic AI in reverse – systems that simulate possible outcomes.
Traditional PLM: File Vaults and Databases Without Memory
Legacy PLM systems were built on a paradigm of files and databases – vaults storing CAD models, BOMs, and change orders. These systems worked well when engineering lived in isolated domains and workflows were mostly linear. But modern products are not simple mechanical objects. They’re electromechanical, software-defined, supply chain-sensitive, and customer-aware.
The result? Today’s PLM systems act more like secure filing cabinets than intelligent collaborators. They store data, but miss decisions. They track revisions, but miss the why. And they certainly can’t simulate what happens if a supplier changes or a requirement evolves mid-sprint.
To move forward, we must evolve PLM from static data vaults to dynamic reasoning systems.
Product Memory: A Living Knowledge Graph
Enter product memory – the foundation for agentic AI that can be used to “play scenarios.” You can create an instance of the memory and simulate thousands of possible BOM configurations with suppliers, predicting “what if” scenarios.
Product memory is not a fancy audit trail. It’s a graph-based, context-rich model that connects the dots between parts, BOMs, revisions, change orders, simulation results, suppliers, maintenance data, customer feedback, and compliance records. It understands not only what the product is, but also how it evolved and why key decisions were made.
Technically, this means:
- Graph nodes represent elements—items, models, specs, change requests, test results
- Edges define relationships—“used in,” “replaces,” “supplied by,” “violates constraint,” etc.
- Temporal versioning preserves the evolution of each element over time
- Semantic metadata adds meaning—allowing agents to reason and simulate
This knowledge graph becomes the substrate over which AI agents operate and not just to search or automate, but to observe, simulate, and optimize.
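As a minimal sketch of what such a substrate could look like – assuming a simple in-memory representation, with all part numbers and relationship names hypothetical – the four technical ingredients above (typed nodes, labeled edges, temporal versioning, semantic metadata) map naturally onto a small graph structure:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    kind: str                                     # "item", "spec", "change_request", ...
    versions: list = field(default_factory=list)  # temporal versioning: (rev, metadata)

class ProductMemory:
    def __init__(self):
        self.nodes: dict[str, Node] = {}
        self.edges: list[tuple[str, str, str]] = []  # (src, relation, dst)

    def add_node(self, node_id, kind):
        self.nodes[node_id] = Node(node_id, kind)

    def relate(self, src, relation, dst):
        # Edges carry semantic relationship labels: "used in", "replaces", ...
        self.edges.append((src, relation, dst))

    def revise(self, node_id, rev, **meta):
        # Semantic metadata (reason, approver) is kept with each revision
        self.nodes[node_id].versions.append((rev, meta))

    def neighbors(self, node_id, relation):
        return [d for s, r, d in self.edges if s == node_id and r == relation]

# Hypothetical mechatronic example
pm = ProductMemory()
pm.add_node("CAP-100", "item")
pm.add_node("ASM-7", "item")
pm.relate("CAP-100", "used in", "ASM-7")
pm.revise("CAP-100", "B", reason="tolerance change", approved_by="j.doe")
print(pm.neighbors("CAP-100", "used in"))  # ['ASM-7']
```

A real product memory would of course live in a graph database with constraints and access control; the point of the sketch is only that nodes, labeled edges, and per-node version history are enough for an agent to start reasoning.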
Agentic Architecture: Building from the Graph Up
To build agentic PLM, we must start backwards—not with tools that “do” but with systems that know.
Graph-Based Product Model
A flexible, extensible, semantically rich graph that captures not only product structure, but also events, dependencies, and lifecycle transitions.
Event Observability Layer
AI agents are embedded within the system and listen for changes—new revisions, supplier disruptions, simulation failures, design approval events.
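The observability layer can be sketched as a simple publish/subscribe pattern – agents register interest in event types instead of waiting for a user prompt. Event names and payloads here are hypothetical:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Agents subscribe to lifecycle events instead of waiting for prompts."""
    def __init__(self):
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable):
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict):
        for handler in self._handlers[event_type]:
            handler(payload)

alerts = []
bus = EventBus()
# Hypothetical agent: reacts to a released revision without any user prompt
bus.subscribe("revision.released",
              lambda e: alerts.append(f"Impact check queued for {e['item']}"))
bus.publish("revision.released", {"item": "CAP-100", "rev": "B"})
print(alerts)  # ['Impact check queued for CAP-100']
```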
Simulation-Centric Reasoning
When changes happen, agents don’t execute tasks—they simulate outcomes. What happens to cost, timeline, or compliance if a battery supplier changes?
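A deliberately simplified sketch of that “what if” question: given BOM cost lines, what is the cost delta if one supplier is swapped? All parts, suppliers, and prices are invented for illustration – a real simulation would also cover lead times and compliance:

```python
# Hypothetical data: current BOM cost lines and a candidate supplier change
bom = [
    {"part": "BAT-9", "qty": 2, "unit_cost": 14.0, "supplier": "CellCo"},
    {"part": "PCB-3", "qty": 1, "unit_cost": 22.0, "supplier": "BoardInc"},
]

def simulate_supplier_change(bom, part, new_unit_cost):
    """Return the cost delta if one part switches supplier (simplified model)."""
    before = sum(line["qty"] * line["unit_cost"] for line in bom)
    after = sum(
        line["qty"] * (new_unit_cost if line["part"] == part else line["unit_cost"])
        for line in bom)
    return after - before

# What happens to cost if the battery supplier changes?
delta = simulate_supplier_change(bom, "BAT-9", 16.5)
print(f"Cost impact: {delta:+.2f} per assembly")  # +5.00 (2 units × 2.50)
```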
Memory Recall and Traceability
Agents can traverse the graph to retrieve not only what changed, but why, who approved it, and how it affected prior outcomes.
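Traceability queries then become walks over recorded history. A toy sketch, with a flat list standing in for the graph and all entries hypothetical:

```python
# Hypothetical change log: each entry records what changed, why, and who approved
history = [
    {"item": "CAP-100", "rev": "A", "why": "initial release", "approved_by": "m.lee"},
    {"item": "CAP-100", "rev": "B", "why": "EU compliance update", "approved_by": "j.doe"},
]

def trace(item):
    """Walk the recorded history of an item, newest first."""
    return [(e["rev"], e["why"], e["approved_by"])
            for e in reversed(history) if e["item"] == item]

for rev, why, who in trace("CAP-100"):
    print(f"rev {rev}: {why} (approved by {who})")
```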
In short, we don’t focus on task execution. We build environments and models where agents can learn, simulate, and reason.
Possible Use Cases
BOM AI Agent
A BOM AI agent detects that a capacitor used in three active designs is now on a compliance watchlist in the EU. It references product memory to:
- Identify every assembly using the part
- Check supplier health reports for alternatives
- Suggest replacements previously vetted
- Highlight affected projects and potential cost impact
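The four steps above can be sketched as one function over product-memory lookups. The data structures, part numbers, and watchlist are all hypothetical – in practice these would be graph queries and live supplier feeds:

```python
# Hypothetical product-memory snippets for the capacitor scenario
watchlist = {"CAP-100"}                                 # parts flagged in the EU
where_used = {"CAP-100": ["ASM-1", "ASM-4", "ASM-7"]}   # assemblies using the part
alternates = {"CAP-100": [{"part": "CAP-200", "vetted": True, "unit_cost": 0.12}]}

def bom_compliance_check(part):
    """Sketch of the BOM agent's reasoning over product memory."""
    if part not in watchlist:
        return None  # nothing to flag
    return {
        "part": part,
        "affected_assemblies": where_used.get(part, []),
        "replacements": [a["part"] for a in alternates.get(part, []) if a["vetted"]],
    }

report = bom_compliance_check("CAP-100")
print(report["affected_assemblies"])  # ['ASM-1', 'ASM-4', 'ASM-7']
print(report["replacements"])         # ['CAP-200']
```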
Maintenance Agent
A simulation agent monitors parts with repeated field failures. By reviewing test logs, change history, and environmental usage data from connected products, it recommends design tweaks to improve durability – all before the next engineering review.
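Even the triggering logic can be boiled down to a sketch: flag parts whose field-failure count crosses an assumed review threshold. The parts, counts, and threshold are invented for illustration:

```python
# Hypothetical field data: failure counts per part from connected products
field_failures = {"SEAL-12": 9, "HINGE-4": 1}
FAILURE_THRESHOLD = 5  # assumed threshold for triggering a design review

def maintenance_recommendations(failures):
    """Flag parts with repeated field failures for the next engineering review."""
    return [part for part, count in failures.items()
            if count >= FAILURE_THRESHOLD]

print(maintenance_recommendations(field_failures))  # ['SEAL-12']
```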
This isn’t just automation – it’s real-time engineering insight.
Technical Takeaways
Traditional LLMs are stateless – they can generate responses, but they can’t remember or reason across time. The idea is to turn this upside down and embed agents that can simulate and predict inside knowledge environments with persistent product memory.
This directly mirrors the PLM world’s shift toward product memory: agents that observe changes, reason over history, and simulate futures—across product lines, release cycles, and business units.
- Start with the graph: Build PLM data models that support semantic richness and relationships.
- Add observability: Let agents detect events and state transitions—not just answer questions.
- Integrate simulation: Embed digital twin logic to simulate lifecycle changes.
- Deliver proactive decisions: Let AI suggest outcomes and mitigate risks—not just respond.
What is my conclusion?
As we head toward PLM 2035 and beyond, the systems we build will no longer be passive archives capturing records of data changes. The new PLM will be intelligent, evolving environments that remember, reason, and simulate.
The future is not “AI that does.” It’s AI that models and helps us see outcomes before we act. Product memory is the new strategic asset.
Agentic AI in reverse is how we can scale the intelligence across product development—not by doing faster, but by thinking deeper.
So, the question is: are you still prompting AI to generate code, blog posts, and marketing campaigns, or are you modeling and simulating?
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools, including PDM, PLM, and ERP capabilities. With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinion can be unintentionally biased.