It’s amazing to think about the concept of memory in modern AI models. These large language models “read” all publicly available sources – websites, books, articles – and provide an incredible gateway to knowledge. I use them every day. “Talking to AI” is quickly becoming a normal part of our lives.
But what does this mean for enterprise companies and businesses? What happens to business memory? What does a manufacturing company know about the products it builds? Where is that knowledge stored, and how is it captured and preserved?
I recently came across a compelling article by Yaniv Golan, general partner at LooL ventures, titled “The Corporate Memory Imperative for Enterprise AI.” A long time ago, Yaniv and I worked together at SmarTeam. I recommend checking the article – it raises a lot of important points and examples for anyone working in enterprise software these days. The article hits close to home for those of us building PLM systems and thinking deeply about enterprise knowledge. The main point is simple but powerful: without memory, AI is just a clever assistant with amnesia. And in the context of modern enterprises, especially those designing and building physical products, this is not just inconvenient, it’s strategically limiting.
The idea of corporate memory isn’t new in PLM. After all, we’ve been storing part numbers, revisions, drawings, change orders, and project notes for decades. This is what PLM has long called a “single source of truth.” But what Yaniv calls out is the deeper, structural deficiency in how traditional systems (and even most modern SaaS PLM) treat memory – as static records, not dynamic, evolving knowledge.
From my perspective, we’ve entered a new phase of enterprise PLM software evolution where product knowledge is no longer just about storing data, but about building a memory that can grow, adapt, and be reasoned over. And for PLM developers, this has massive implications.
Here is the question I want to discuss today: why must PLM rethink its role in the age of enterprise AI development?
Let’s Talk About PLM Architecture (and What These Debates Miss)
For the last two weeks, since I published the article Rethinking Monolithic PLM Architecture – Exploring What Comes Next, I have found no shortage of debates in the PLM world about monolithic vs. modular, out-of-the-box vs. toolbox, low-code vs. standard configuration. These debates have their place, and traditional PLM architectures are still doing a solid job supporting what most companies need today: managing CAD files, tracking revisions, and syncing with ERP. If your goal is to check in files, send a BOM downstream, and control engineering changes, the mature, SQL-backed PLM systems still deliver.
But here’s the thing: I think my colleagues who defend existing PLM technologies and architectures potentially miss the bigger picture. They focus on deployment models and implementation flexibility, but not on data strategy.
The real question manufacturing organizations need to ask is: How are we building our product knowledge memory?
Because regardless of architecture—monolithic or composable, cloud-native or on-prem—the system that will define competitive advantage in the next decade is the one that can capture, connect, and recall everything about your product, across time, context, and organizational boundaries.
That’s the difference between managing documents and building enterprise intelligence. My long-time blogging buddy and former SmarTeam colleague, Jos Voskuil, makes this point loudly and often – we need to move from documents to connected data.
From Snapshots to Connected Knowledge
PLM has historically been about managing “things” such as files, parts, changes, etc. Each of these pieces of information was documented and structured within a vault or a database. But real product memory isn’t just about storing snapshots. It’s about understanding the context, evolution, and relationships between those “things.”
Let’s say you ask your AI assistant: “What materials were used in the battery enclosure for equipment we manufactured in Q2 2021?”
That answer requires more than just a keyword search—it requires understanding revision history, sourcing decisions, compliance constraints, and field performance. That’s memory…
Yaniv highlights how current LLMs, even the most powerful ones, are largely stateless. Once a chat ends, the memory is lost unless explicitly engineered. Vector databases can retrieve relevant chunks of information, but they don’t inherently understand relationships, lineage, or change over time. That’s where knowledge graphs come in – and that’s where PLM needs to head. Check my old article – Why graph knowledge model is a future of manufacturing and product lifecycle?
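To make this concrete, here is a minimal sketch of the difference. A keyword or vector search over documents returns text fragments; a graph traversal follows explicit relationships between units, revisions, and materials. The part numbers, node names, and edge labels below are hypothetical, and I’m using the networkx library purely for illustration:

```python
# Hypothetical product graph: a serialized unit is linked to the part
# revision it was built with, and that revision is linked to its material.
import networkx as nx

g = nx.MultiDiGraph()

# Nodes carry lifecycle context, not just identifiers (all values invented).
g.add_node("ENC-100/RevB", type="part_revision",
           effective_from="2021-01-15", effective_to="2021-09-30")
g.add_node("AL-6061", type="material")
g.add_node("UNIT-2021-Q2-042", type="serialized_unit", shipped="2021-05-20")

# Edges capture relationships and lineage.
g.add_edge("UNIT-2021-Q2-042", "ENC-100/RevB", relation="built_with")
g.add_edge("ENC-100/RevB", "AL-6061", relation="made_of")

def materials_for_unit(unit):
    """Follow built_with -> made_of edges to recover what actually shipped."""
    for _, rev, d in g.out_edges(unit, data=True):
        if d["relation"] == "built_with":
            for _, mat, d2 in g.out_edges(rev, data=True):
                if d2["relation"] == "made_of":
                    yield rev, mat

print(list(materials_for_unit("UNIT-2021-Q2-042")))
# -> [('ENC-100/RevB', 'AL-6061')]
```

No keyword search can reliably produce that answer, because the connection between the Q2 2021 unit and its enclosure material exists only as relationships between records, not as a sentence in any single document.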
Product Knowledge Graphs and the Foundation of Future PLM
There are a few companies in the PLM space that I know of that are thinking about product knowledge and design intelligence (maybe there are more, but these are the ones I know and have talked to). Colab Software is developing a design review system, which includes many elements of design knowledge. Another company is Spread AI, which is focusing on how to turn knowledge into valuable insight using knowledge graphs. At OpenBOM, we’ve been building a flexible, graph-based product model from the beginning – a dynamic, connected model of product knowledge that captures not just what something is, but how it came to be, how it evolved, who changed it, and why. Check our OpenBOM Product Knowledge Graph vision and roadmap article to learn more.
Why does this matter? Because when you layer AI on top of this kind of model, you unlock something powerful: reasoning over memory.
Imagine an AI agent that can:
- Trace sourcing decisions and highlight inconsistencies in supplier data
- Identify that a design decision made two years ago now conflicts with new regulations
- Surface every usage of a deprecated component across all open projects
- Explain how a change impacted delivery schedules and cost
This isn’t a dream – it’s the natural consequence of building a connected product memory model. The sketch below illustrates the third capability, finding every usage of a deprecated component.
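Here is a hedged sketch of how “surface every usage of a deprecated component across all open projects” becomes a simple traversal once the data lives in a graph. All part numbers, project names, and attribute names are my own invention, and networkx again stands in for a real graph backend:

```python
# Hypothetical graph: projects contain assemblies, assemblies use components.
import networkx as nx

g = nx.DiGraph()
g.add_node("CAP-220uF", type="component", status="deprecated")
for project, assembly in [("ProjectA", "PCB-01"), ("ProjectB", "PCB-07")]:
    g.add_node(project, type="project", state="open")
    g.add_node(assembly, type="assembly")
    g.add_edge(project, assembly, relation="contains")
    g.add_edge(assembly, "CAP-220uF", relation="uses")

def usages_of_deprecated(graph):
    """Walk component <- assembly <- project paths, keeping open projects."""
    for comp, attrs in graph.nodes(data=True):
        if attrs.get("status") != "deprecated":
            continue
        for assembly in graph.predecessors(comp):
            for project in graph.predecessors(assembly):
                if graph.nodes[project].get("state") == "open":
                    yield project, assembly, comp

print(list(usages_of_deprecated(g)))
# -> [('ProjectA', 'PCB-01', 'CAP-220uF'), ('ProjectB', 'PCB-07', 'CAP-220uF')]
```

The point is not the few lines of Python – it’s that the question is answerable at all, because deprecation status, project state, and usage relationships live in one connected model instead of separate documents.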
Lifecycle Memory as a Strategic Asset
One of the most important points in these discussions, and a very important differentiator of new PLM tech and of enterprise AI in general, is Knowledge Lifecycle Management. Memory isn’t useful if it’s stale. It’s worse if you can’t trace it. And it’s nearly useless if it can’t show you the “as of” state of something at a given point in time.
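Here is a minimal sketch of what “as of” memory means in practice: every fact carries a validity interval, so the system can reconstruct what was true on a given date. The field names and example data are hypothetical:

```python
# A minimal "as of" model: each fact is valid over an interval of dates.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Fact:
    subject: str                     # e.g. a part number
    attribute: str                   # e.g. "material"
    value: str
    valid_from: date
    valid_to: Optional[date] = None  # None means still current

history = [
    Fact("ENC-100", "material", "AL-5052", date(2019, 3, 1), date(2021, 1, 14)),
    Fact("ENC-100", "material", "AL-6061", date(2021, 1, 15)),
]

def as_of(facts, subject, attribute, when):
    """Return the value that was in effect on a given date."""
    for f in facts:
        if (f.subject == subject and f.attribute == attribute
                and f.valid_from <= when
                and (f.valid_to is None or when <= f.valid_to)):
            return f.value
    return None

print(as_of(history, "ENC-100", "material", date(2021, 5, 1)))  # AL-6061
print(as_of(history, "ENC-100", "material", date(2020, 6, 1)))  # AL-5052
```

A row in a classic SQL vault typically holds only the latest value; the difference here is that history is queryable, so an answer can be traced back to what the record said at the time.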
This is where legacy PLM tools and technologies begin to show their age. Built on decades-old SQL schemas, many systems are still good at what they were designed to do: manage engineering vaults. But they weren’t built to evolve, learn, or contextualize.
The shift to modern data modeling, making product knowledge memory referenceable and auditable, requires a different mindset and a different tech stack. It’s not just a question of features, but of foundational capability.
Every company I have talked to that works in this segment is embracing this challenge – how to build a graph-based, multi-perspective representation of product knowledge designed to evolve over time and integrate across the full digital thread. Just as PLM companies 20-30 years ago were building flexible PLM data models to represent engineering documents, these days the scope of the data is expanding – a product knowledge graph is one way to implement it.
What is my conclusion?
Here is a possible future of PLM as I can predict it today. The value of PLM isn’t just in better workflows or modern interfaces. The main purpose of PLM in the future is to create a product memory – not just a memory of part numbers or file versions, but memory as a first-class citizen of enterprise infrastructure.
If you’re building a PLM strategy today, start with the end in mind. Ask yourself:
- Will our system remember what happened and why?
- Can it track context, not just document numbers and version history?
- Can it grow with our organization and help AI systems reason, not just retrieve?
Because the next generation of PLM won’t be judged by how well it checks in files. It will be judged by how well it remembers your product’s story—and how intelligently it helps you write the next chapter.
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools, including PDM, PLM, and ERP capabilities. With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinions can be unintentionally biased.