Something has shifted in my conversations with prospects and customers. It used to come up occasionally. Now it’s hard to ignore. Somewhere in the middle of a discussion about requirements, integrations, or system complexity, someone says it:
“We’re looking at how to build PLM using AI. Can you help us?”
Sometimes it comes in stronger: “We’re already building our PLM with AI, we’ll come back when we know what components we actually need.” These aren’t fringe opinions anymore. And here’s one more data point that’s hard to dismiss: API requests from teams building on top of OpenBOM are growing. Quietly, but steadily.
Something real is happening here.
AI, Product Memory, and the Future of PLM Architecture
The idea of Product Memory came to me last year, and it immediately triggered many questions about PLM architecture and related topics. The shift toward structured Product Memory is not just conceptual. It raises fundamental questions about the direction in which engineering data management tools will develop.
That shift made me think more seriously about how AI will impact PLM development, for two main reasons: I found some interesting historical parallels, and I see how fast AI is making progress in software development. Both point at something real.
This isn’t just frustration with vendors, although there’s plenty of that. It reflects a genuine shift in how people think about software. For a long time, enterprise systems like PLM were things you bought, configured, and then bent your processes around. Now teams are starting to see software as something they can shape directly — closer to their intent, their terminology, their actual workflows. The gap between “I need this” and “I built this” has narrowed dramatically.
So the question has stopped being hypothetical: can we vibe code the next PLM?
My answer is yes and no. But the interesting part isn’t the answer. It’s where the line sits — and what that tells us about where PLM is actually headed.
Why AI-Driven “Vibe Coding” for PLM Suddenly Feels Real
The idea that customers might “just build what they need” isn’t new. What’s new is that it doesn’t sound crazy.
AI has fundamentally changed the relationship between human intent and working software. Writing code used to mean learning syntax, frameworks, and toolchains. Now it increasingly starts with a sentence:
“Take this BOM, validate part numbers, check supplier availability, flag duplicates, and generate a report.”
And something working appears. Not always production-ready. Not always elegant. But good enough to test, refine, and put in front of a real workflow in hours rather than weeks.
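To make it tangible, here is a minimal sketch of the kind of utility that comes out of a prompt like that. The file name, column names, and part-number pattern are illustrative assumptions, not a real schema, and a real version would also call a supplier API for the availability check:

```python
# A sketch of an AI-generated BOM validator. All names here are assumptions
# for illustration; a real BOM export will have different columns.
import csv
import re
from collections import Counter

PART_NUMBER_PATTERN = re.compile(r"^PN-\d{5}$")  # hypothetical numbering scheme

def validate_bom(path: str) -> None:
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    bad_numbers = [r["part_number"] for r in rows
                   if not PART_NUMBER_PATTERN.match(r["part_number"])]
    counts = Counter(r["part_number"] for r in rows)
    duplicates = [pn for pn, n in counts.items() if n > 1]
    missing_supplier = [r["part_number"] for r in rows if not r.get("supplier")]

    # Generate a simple text report
    print(f"Rows checked:         {len(rows)}")
    print(f"Invalid part numbers: {bad_numbers or 'none'}")
    print(f"Duplicate lines:      {duplicates or 'none'}")
    print(f"Missing supplier:     {missing_supplier or 'none'}")

validate_bom("bom.csv")
```

Thirty lines, no framework, no deployment. That is the point: the distance from intent to a usable tool has collapsed.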
I’ve seen this happening around OpenBOM – increased demand for integrations, exports, APIs, and other ways to extend the system. Teams are using AI to build small validators, import utilities, custom connectors, and workflow assistants – things that would have required dedicated development resources or a vendor service engagement not long ago. Now someone with domain knowledge and some patience can prototype them.
The psychological shift matters as much as the technical one. Customers used to ask: What features does your system have? Now they’re starting to ask: What can I build on top of this, and how fast?
That second question is new. And it changes things.
From AutoLISP to AI: A History of Democratizing Engineering Software Development
This isn’t the first time the barrier between domain expertise and software creation has collapsed. I’ve watched it happen twice in my career.
Early on, AutoLISP in AutoCAD was a revelation. Engineers and designers, not professional programmers, could suddenly automate repetitive tasks (e.g., dimensioning), build custom commands, and encode their own engineering logic directly into their tools. It didn’t require a software team. It required understanding your problem and just enough logic to express it. And people did. There was an explosion of small, practical, messy, useful applications. Not all of them were elegant. Many were local and undocumented. But they solved real problems immediately.
Then Visual Basic did something similar in the business world. It lowered the barrier for building forms, workflows, and integrations. The debates about whether it was “serious” enough for enterprise work were loud and largely beside the point. Visual Basic didn’t replace professional software engineering — it expanded who got to participate in building software and what kinds of problems could get solved quickly.
Both things were true at the same time: real value was created, and so were real fragmentation and technical debt. I’m sure you can bring your own examples and experience.
Vibe coding is the next iteration of this pattern. The abstraction is even higher — it’s not scripting or visual programming, it’s natural language. But the dynamic is the same: more people building more things, faster, with more direct expression of what they actually need.
What AI and Vibe Coding Can Actually Build in PLM and BOM Workflows
Once you see it this way, the idea of “building PLM with AI” gets a lot more concrete.
There are real parts of PLM-adjacent work that are well-suited to this approach. Not entire systems — focused applications, built close to specific problems.
BOM validation and checking is a strong candidate. Detecting inconsistencies, missing data, duplicate parts, non-standard naming — these tasks benefit from context and rules but don’t require reinventing the data model.
Custom dashboards and reports are another natural fit. Engineering managers and operations teams often need very specific views of product data that don’t justify a full development cycle. AI can generate these quickly, tailored to the exact question being asked.
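A generated report can be as small as a cost rollup. Here’s a hedged sketch; the column names are assumptions about a typical BOM export:

```python
# A sketch of a generated report: total extended cost per supplier.
# Column names ("supplier", "quantity", "unit_cost") are assumptions.
import csv
from collections import defaultdict

def cost_by_supplier(path: str) -> dict[str, float]:
    totals: dict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["supplier"]] += float(row["quantity"]) * float(row["unit_cost"])
    return dict(totals)

for supplier, total in sorted(cost_by_supplier("bom.csv").items()):
    print(f"{supplier:<20} ${total:,.2f}")
```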
Integration utilities – mapping data between systems, transforming formats, synchronizing subsets of information – are traditionally painful and tedious. AI compresses that work significantly.
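A minimal sketch of such a mapping utility might look like this; the field names on both sides are hypothetical:

```python
# A sketch of a typical AI-generated mapping utility: rename columns from one
# system's export to another system's import schema. The mapping is a
# hypothetical example; real field names depend on the systems involved.
import csv

FIELD_MAP = {
    "Part No.": "part_number",
    "Qty": "quantity",
    "Vendor": "supplier",
}

def transform(src_path: str, dst_path: str) -> None:
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=list(FIELD_MAP.values()))
        writer.writeheader()
        for row in reader:
            writer.writerow({dst_f: row[src_f] for src_f, dst_f in FIELD_MAP.items()})

transform("erp_export.csv", "plm_import.csv")
```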
Workflow assistants for things like ECO preparation, release readiness checks, or supplier communication sit around the core system rather than replacing it. They add real value without touching the foundations.
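As a sketch, a release readiness assistant can start as little more than a completeness check. The required fields and status values below are assumptions; a real check would be driven by your actual release process:

```python
# A sketch of a workflow assistant: flag items that block a release.
# Required fields and status values are illustrative assumptions.
REQUIRED_FIELDS = ("revision", "status", "datasheet_url")

def release_blockers(items: list[dict]) -> list[str]:
    blockers = []
    for item in items:
        for field in REQUIRED_FIELDS:
            if not item.get(field):
                blockers.append(f"{item['part_number']}: missing {field}")
        if item.get("status") not in ("approved", "released"):
            blockers.append(f"{item['part_number']}: status is {item.get('status')!r}")
    return blockers

items = [
    {"part_number": "PN-10001", "revision": "B", "status": "approved",
     "datasheet_url": "https://example.com/ds/PN-10001"},
    {"part_number": "PN-10002", "revision": "", "status": "in-review",
     "datasheet_url": ""},
]
for blocker in release_blockers(items):
    print(blocker)
```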
These are all what I’d call the “last mile” of PLM – the layer closest to the user, the workflow, and the immediate problem. This is exactly where vibe coding delivers. It turns intent into working tools fast.
Where AI-Generated PLM Applications Start to Break Down
But push further. Ask whether you can vibe code an entire PLM platform or architecture, and the cracks show up quickly. Everyone who contacted me, one way or another, was eventually storing data somewhere – a CAD system, a Notion database, online spreadsheets, etc.
PLM architecture isn’t just a collection of screens and workflows. At its core, it’s about maintaining consistent, structured, and persistent product knowledge over time. That’s a fundamentally different problem than generating an application.
Product structures aren’t just lists; they’re relationships: hierarchies, configurations, variants, and dependencies that evolve through revisions, change processes, and lifecycle states.
Change management isn’t just a workflow. At its core is the propagation of decisions across multiple structures and systems, with traceability and accountability built in.
Configuration logic isn’t just rules. It encodes the engineering intent, constraints, and allowable combinations accumulated over years.
Traceability isn’t just linking records or spreadsheets. You should be able to reconstruct why something changed, what it affected, and how a decision was made, months or years later.
And all of this has to persist. Not for a single session or a single utility. Across time, teams, and systems.
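To see why, it helps to sketch even a toy version of this structure. The model below is illustrative, not a real PLM schema, but it already shows what a throwaway script never has to carry: identity, relationships, and the “why” behind every change:

```python
# A deliberately minimal sketch of durable product structure.
# All names are illustrative, not a real PLM data model.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ItemRevision:
    part_number: str
    revision: str                     # an immutable snapshot of an item in time

@dataclass
class UsageLink:
    parent: ItemRevision
    child: ItemRevision
    quantity: float                   # structure is relationships, not lists

@dataclass
class ChangeOrder:
    change_id: str
    reason: str                       # the "why" that traceability must preserve
    replaces: ItemRevision
    replaced_by: ItemRevision
    affected_parents: list[ItemRevision] = field(default_factory=list)

# A throwaway script can ignore all of this. A system of record has to keep it
# consistent across every revision, every change, and every year of history.
```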
This is where the simplicity of vibe coding hits a wall. You can generate an application. You can generate a workflow. You can even generate a small system.
But generating durable structure – a foundation that remains consistent, interpretable, and reliable over years – is a different problem entirely.
Generating an app is easy. Preserving product truth over time is hard. That’s always been the hard part of PLM, whether the system was built by a vendor or assembled from prompts.
PLM as Product Memory: Why Structured Product Knowledge Still Matters
Here’s the thing that I think gets lost in these conversations.
When people talk about “building PLM with AI,” they’re usually imagining recreating the application layer: the UI, the workflows, the features. But the real value of PLM has never been the application itself. It’s what the application holds together.
The best PLM implementations are attempts to capture and organize product knowledge across the entire lifecycle. Not just the what, but the how and the why.
How components relate to each other. Why a design decision was made. What changed, when, and for what reason. How engineering connects to manufacturing, procurement, and service over time.
This doesn’t emerge from generated code. It requires structure, consistency, and continuity.
This is why I keep coming back to the concept of Product Memory, not as a feature or a product name, but as a foundational idea. Product Memory is the accumulation of product knowledge: data, relationships, decisions, and history, organized in a way that can be understood, reused, and reasoned about over time.
PLM, at its best, is a continuous process to create and maintain Product Memory. And AI-generated applications need something to stand on. Product Memory is that foundation.
The Architecture of AI-Powered PLM: Product Memory, Data Models, and Integration Services
So vibe coding is real and useful, but limited. What does the architecture actually look like?
I don’t think AI rebuilds monolithic PLM from scratch. I also don’t think nothing changes. The realistic picture is something in between, and it’s more interesting than either extreme.
AI increasingly generates small, focused applications: context-specific tools and integrations that evolve quickly, get replaced often, and are tailored to specific workflows. These aren’t one-off experiments. They become part of how teams actually work.
But they can’t exist in isolation. They need to sit on top of a more stable layer – one that provides a consistent product data model, structured relationships between items, BOMs, documents, and processes, lifecycle and revision control, integration across systems, and access to historical context and decisions. In other words: Product Memory, and the services around it.
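Sketched as code, the contract might look something like this. None of these method names come from a real product API; they illustrate the shape of the stable layer that generated applications would code against:

```python
# A hypothetical interface over Product Memory. Method names are assumptions
# chosen to illustrate the contract, not a real product API.
from typing import Protocol

class ProductMemory(Protocol):
    def get_item(self, part_number: str, revision: str | None = None) -> dict: ...
    def get_bom(self, part_number: str, revision: str | None = None) -> list[dict]: ...
    def get_history(self, part_number: str) -> list[dict]: ...
    def where_used(self, part_number: str) -> list[dict]: ...

# A generated app codes against the contract, not the implementation.
def duplicate_check(memory: ProductMemory, assembly: str) -> list[str]:
    seen: set[str] = set()
    dupes: list[str] = []
    for line in memory.get_bom(assembly):
        pn = line["part_number"]
        if pn in seen:
            dupes.append(pn)
        seen.add(pn)
    return dupes
```

The apps above this line can churn; the structure beneath it stays deliberate.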
The architectural picture that emerges isn’t one large PLM system. It’s a family of generated or semi-generated applications, all operating on top of shared Product Memory and integration services. AI becomes the mechanism for creating and adapting the edges. The center – the memory, the structure, the continuity – remains deliberate.
And arguably becomes more important, not less. Because without that center, AI-generated applications risk becoming the next generation of spreadsheets: useful individually, disconnected collectively.
A Practical Roadmap: Where to Use AI in PLM vs. Where Structure Is Required
For customers, this isn’t theoretical. The question is where to use AI and vibe coding, and where to rely on stable systems and services. The way I think about it is in layers.
Start with AI around existing data. Explore search, summarization, anomaly detection, reporting. High value, low risk. These improve visibility without changing structure.
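For example, a low-risk anomaly check might flag BOM lines whose unit cost deviates sharply from that part’s history. The data shapes here are assumptions for illustration:

```python
# A sketch of AI-around-existing-data: flag unit costs that deviate sharply
# from a part's historical costs. Data shapes are illustrative assumptions.
import statistics

def cost_anomalies(current: dict[str, float],
                   history: dict[str, list[float]],
                   threshold: float = 3.0) -> list[str]:
    flagged = []
    for pn, cost in current.items():
        past = history.get(pn, [])
        if len(past) < 3:
            continue  # not enough history to judge
        mean, stdev = statistics.mean(past), statistics.stdev(past)
        if stdev > 0 and abs(cost - mean) / stdev > threshold:
            flagged.append(f"{pn}: {cost:.2f} vs historical mean {mean:.2f}")
    return flagged

print(cost_anomalies({"PN-10001": 42.0}, {"PN-10001": [3.1, 2.9, 3.0, 3.2]}))
```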
Then move to AI-generated utilities: import tools, validators, data transformation scripts. These are often the most painful and least differentiated parts of PLM work, and they benefit significantly from automation.
From there, you can build AI-assisted process applications: BOM review assistants, ECO preparation tools, release readiness checks, role-specific workspaces. These are closer to core workflows but still operate around the central data rather than replacing it.
As more applications get generated, shared structure becomes critical. They need to connect through the same product definitions, the same relationships, and the same lifecycle information. This is where a consistent product data layer becomes essential, not optional.
The goal isn’t to choose between “build everything” and “buy everything.” It’s to understand what should be generated and what should be stable.
What is my conclusion?
Let’s get back to my original question: can we vibe code the next PLM? What can AI replace, what can’t it replace, and why does Product Memory win?
Yes, absolutely. We will vibe code significant parts of the next PLM. Applications, workflows, assistants, integrations that would have been too expensive or too slow to build before. There will be more experimentation, more direct expression of domain knowledge in software, more customization at every level.
But no, we won’t build the entire thing from prompts. Because PLM isn’t just software. It’s structure, continuity, and accumulated product knowledge over time.
The real shift isn’t that AI replaces PLM. The real shift is that AI changes how PLM is built and extended.
The easiest part of the next PLM will be generating the applications. The hard part will still be maintaining the integrity of the product knowledge behind them.
The companies that move fastest won’t be the ones generating the most code. They’ll be the ones that combine AI-generated applications with durable product memory, stable data services, and strong integration foundations.
Because prompts can generate workflows. But they still need something real to stand on.
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights.
FAQ
Can AI replace PLM systems?
AI can generate applications and workflows, but it cannot replace the structured product data, lifecycle management, and traceability that PLM systems provide.
What is vibe coding in PLM?
Vibe coding refers to using AI to generate applications and workflows directly from intent, such as BOM validation tools, dashboards, or integration scripts.
Where is AI most useful in PLM today?
AI is most effective in validation, reporting, integrations, workflow assistants, and data analysis around existing product data.
What is Product Memory in PLM?
Product Memory is the structured accumulation of product data, relationships, decisions, and history across the lifecycle, enabling traceability and reasoning.
Why can’t PLM be fully generated by AI?
Because PLM requires durable structure—data models, lifecycle continuity, and integration across systems—which cannot be reliably created from prompts alone.
