A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

Are We Losing Product Knowledge Outside PLM?

Oleg
19 December, 2025 | 10 min for reading

In a recent LinkedIn post, Adam Keating made a statement that immediately caught my attention. He wrote that in the coming years, knowledge data will become the most valuable engineering asset. 

Knowledge data becomes the most valuable engineering asset. For decades, teams relied on institutional knowledge and single engineering subject matter experts. In 2026, that changes. AI will finally unlock an organization’s design rules, lessons learned, standards, and patterns – and apply them in real time. Every engineer gets the intuition of the 20-year veteran.

According to Adam, this view comes from hundreds of conversations with engineering leaders. Judging by the reactions, I can see how much the community agrees with those statements. Honestly, I do too.

My attention was specifically caught by another question that has kept coming up in many recent discussions and conversations – if product knowledge is becoming the most valuable asset, why does it feel like so little of it is actually captured anywhere today? Working with many companies, I can see how data and knowledge are spread across tons of spreadsheets, cloud drives, local folders, databases, specs, etc. And even when we think it is captured (e.g., in PLM software), why does it feel so fragile, so incomplete, and so hard to access?

After more than two decades of working in the PLM industry, I see this as a long-standing problem and an accepted status quo. Still, it feels more visible now, not less. Products are more complex. Supply chains are more volatile. Teams are more distributed. Decisions carry more risk. Under those conditions, losing knowledge hurts more than it used to.

And yet, the same pattern repeats itself. The most important decisions still seem to happen outside the systems that are supposed to manage product data – systems of record (e.g., PLM) and systems of execution (MES, MRP, ERP).

In my blog today, I want to discuss this conflict, the status quo, and what we can do about it.

Please note, this is not an article about why PLM is broken. It is not an argument for replacing Excel, and it is not a pitch for swapping old PLM tools or spreadsheets for new ones. I want to slow down and look more closely at how product knowledge is actually created, where it really lives, and why it keeps escaping the systems we expect to contain it.

Product Knowledge Is Not Just Design

When we talk about product knowledge, I notice how quickly the conversation drifts toward things that are easy to point at. Geometry. Drawings. CAD models. Specifications. These artifacts are tangible. They are measurable. They fit easily into existing 3D CAD files, revisions, and lifecycle states.

But at the same time, those design artifacts feel incomplete. Is product knowledge really just about what a product is “as designed” – a bunch of CAD parts, assemblies, PCB files, and (maybe) software these days? Or is it more about why it ended up that way?

In practice, product knowledge includes cost tradeoffs that forced uncomfortable compromises. It includes supplier selections driven by delivery risk rather than price. It includes manufacturability issues that required design concessions. It includes quality risks that were consciously accepted because the schedule could not move. It includes those moments when a team collectively decided that something was “good enough for now,” fully aware that it was not perfect.

Where does that knowledge live?

It rarely shows up cleanly in CAD. Very little of it fits naturally into a specification. And yet, when a product is reused, modified, or audited later, those decisions often matter more than the geometry itself.

I can see how over time that product knowledge is created primarily in moments of friction. When constraints collide. When disciplines disagree. When there is no perfect answer. Those moments are uncomfortable, but they are also where real understanding is formed. Those are “decision moments”. 

Product knowledge is created in tradeoffs, exceptions, and negotiations—not in finished objects.

But the conclusion I keep reaching is that we are notoriously bad at capturing those moments. The knowledge ends up in an email thread sent to a contractor, in design revisions triggered by a conversation during a design review, or in memories of a supply chain crisis that happened six months ago. If that is true, should we really be surprised that our systems struggle to capture it?

What PLM Is Actually Good At

PLM systems were never designed to capture ambiguity and the history of communication. They were designed to enforce control and preserve historical states – aka revisions.

At their core, PLM systems excel at recording decisions once they are made. They capture released bills of material, approved revisions, closed change orders, qualified suppliers, and frozen attributes. They provide traceability, auditability, and a single place to understand the official state of a product.

That is not a flaw. It is a strength.

Without this structure, large-scale product development would be nearly impossible. Manufacturing depends on it. Procurement depends on it. Compliance depends on it. I have seen enough organizations without that backbone to know how quickly things fall apart.

But there is a distinction we often gloss over. PLM captures decision snapshots. It tells us what was approved at a specific moment in time. It tells us which version is current. It tells us which supplier is valid. What it usually does not tell us is why alternative paths were rejected, which constraints were most painful, or which risks were accepted reluctantly.

By the time information enters PLM, the uncertainty has already been resolved—or at least hidden. The debate is over. The system reflects a cleaned-up version of reality, stripped of most of the messy context that led there.

What PLM captures is the outcome of decisions at a moment in time, but not the reasoning that produced them.

I often ask teams a simple question: when you look at an old BOM in PLM, can you tell which decisions were easy and which ones almost derailed the project? Most of the time, the answer is no.

Why Excel Always Appears – The Sandbox

So what is Excel’s role in the process of developing and capturing decisions? If PLM keeps the revisions, Excel’s role becomes much easier to understand.

Excel keeps showing up not because teams love chaos, but because they need a place for unfinished thinking. They need somewhere flexible enough to explore options without committing to them. Somewhere that tolerates uncertainty instead of forcing premature closure.

Before cost targets are finalized, spreadsheets appear. Before suppliers are locked, spreadsheets appear. When procurement proposes substitutions, spreadsheets appear. When manufacturing pushes back on a design assumption, spreadsheets appear. When teams explore what-if scenarios that span engineering, sourcing, and operations, spreadsheets appear again.

Why? Because Excel does not demand that reality be frozen before it is understood. It allows partial information. It allows comparison. It allows quick iteration without workflow overhead.

In practice, Excel becomes the place where cross-disciplinary decisions are worked through precisely because no system was designed to hold them while they are still fluid.

And it is not just Excel. The same role is played by email threads, chat messages, screenshots, slide decks, and meeting notes. These are all temporary spaces where people think together before anything becomes official.

The problem is not that these spaces exist. The problem is that they are transient by design. They are not meant to preserve knowledge over time.

Knowledge Exists Only While People Remember It

As long as the same people stay involved, this informal system works surprisingly well.

Engineers remember why tolerance was relaxed. Buyers remember why a supplier was approved despite higher cost. Manufacturing remembers which parts are fragile. Quality remembers which issues were consciously accepted.

The knowledge lives in people’s heads, reinforced by shared experience and repeated conversations.

But organizations change. And when they do, this model starts to break down.

People move on. New engineers join. Suppliers change ownership. Products are reused in new contexts. Cost pressure returns in the next program. Audits surface questions no one has asked in years.

That is when the fragility becomes obvious. The spreadsheet that once explained the tradeoff is outdated or gone. The email thread cannot be found. The chat conversation is buried under years of noise.

What remains is the PLM record. It looks authoritative. It looks complete. But it does not explain why things are the way they are.

I have seen teams redo analyses that were already done, simply because they could not trust or understand past decisions. I have seen engineers question designs not because they were wrong, but because the rationale was missing. I have seen organizations become slower and more conservative over time, not because they lacked tools, but because they lacked confidence in their own history.

This is how product knowledge erodes quietly.

This Is Not an Engineering Problem

While PLM was always sold as a “knowledge preservation” tool, I don’t think it really plays that role. The usual explanations frame this as an engineering documentation problem, a PLM configuration problem, or a training problem. But the more I look at it, the less convincing those explanations feel. Product knowledge does not belong to engineering alone. It emerges between disciplines.

Engineering understands what is technically possible (and engineers from different disciplines often find themselves in conflict with each other). Manufacturing understands what is practical at scale. Procurement understands availability and leverage. Supply chain understands risk and lead times. Quality understands compliance and failure modes. Finance understands margins and tradeoffs.

The most consequential product decisions are negotiated between these perspectives. No single discipline owns them, and no single system captures them while they are happening.

This is why simply adding more attributes, more workflows, or more mandatory fields to PLM rarely solves the problem. You can force people to document outcomes, but you cannot force shared understanding after the fact. Product knowledge is created between disciplines, not inside them.

Once we accept that, the persistence of Excel and informal tools starts to look less like bad behavior and more like a signal.

Conclusion: Does AI solve the problem? 

There is a lot of excitement right now around AI, digital threads, and intelligent automation. I share that excitement. But I also worry that we are skipping a necessary step. AI cannot unlock knowledge that was never captured. A digital thread cannot connect reasoning that was never recorded. Automation cannot preserve context that never existed in a durable form. AI is like ChatGPT in this respect: it helps me when I give it the right prompt; without that input, it is useless.

So before we talk about smarter systems, I think we need to ask a simpler question. Where do teams actually work through product decisions together today? Not where decisions are approved and not where data is released – those are PDM, PLM, and ERP systems. But where cost, supply, engineering, manufacturing, and quality tradeoffs are debated while they are still unresolved.

If the honest answer is “mostly in spreadsheets, meetings, and side conversations,” then perhaps Excel is not the problem. Perhaps the real problem is that we have never given product teams a durable, shared space to work through decisions together.

Until that space exists, PLM will continue to store decision snapshots. Excel will continue to absorb chaos. And product knowledge will continue to escape, quietly and consistently.

That gap is not a tooling detail. It feels more like a structural blind spot.

And once you start seeing it that way, it becomes very hard to ignore.

Just my thoughts… 

Best, Oleg 

Disclaimer: I’m the co-founder and CEO of OpenBOM, an AI-native Collaborative Digital Thread platform connecting engineers and manufacturing teams.
