A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

Context Graphs: PLM Beyond Systems of Record

Oleg
15 January, 2026 | 10 min read

We are moving fast into the new year, and I want to continue sharing some of the thoughts and ideas I captured during my holiday reading time. The new wave of AI innovation and news is here. An article by Jaya Gupta and Ashu Garg from Foundation Capital caught my attention during Xmas, but it took me some time to absorb it and make sense of it. Here is the article: AI’s trillion-dollar opportunity: Context graphs. It resonated with my recent post Why PLM Leaks to Excel, which speaks about the limitations of PLM systems and the demand for collaboration and for capturing activities outside of formal lifecycle processes.

For most of its history, Product Lifecycle Management has been built around a very clear and reasonable goal: become the system of record for product data. First CAD files, then structured data records, then revisions, lifecycle states, and formal change processes. Over time, PLM systems learned how to manage parts, BOMs, configurations, and approvals with impressive rigor.

But something important never made it into the core architecture: an understanding of why things change. The rigid structure of PLM also made it unfriendly for users in organizations. The system is often used as a system of record (SOR), but users leak data into Excel for tasks where they are looking for a friendlier environment and more efficient communication.

PLM systems became very good at remembering what changed and when it changed, but they never learned how to remember why decisions were made. The reasoning, the debate, the alternatives that were considered and rejected, the tradeoffs that felt acceptable at the time, the constraints that forced a compromise — all of that usually lives outside the system. It lives in Excel spreadsheets, email threads, meetings, chats, and phone calls. When the decision is finally made, the outcome is imported back into PLM, recorded, versioned, and approved. The thinking that led there is gone.

In my blog today I want to talk about that missing layer. I want to discuss how and why AI exposes this limitation so clearly, why it is not a data quality problem, and why the emerging idea of context graphs points to a practical way to extend PLM beyond systems of record and toward systems of understanding.

PLM Has Always Been About Lifecycle State

PLM was born as a system of record. At the beginning, that meant managing CAD files — checking them in and out, controlling access, tracking revisions. Over time, PLM expanded to include structured objects: parts, BOMs, documents, lifecycle states, and formal change records. Engineering Change Orders became the backbone of how decisions were formalized, approved, and preserved.

In that sense, PLM succeeded. It gave organizations a shared place to store product definitions and enforce discipline around changes. It created traceability between CAD, BOMs, revisions, and releases. It allowed companies to answer questions like “Which revision is released?” and “Who approved this change?”

But even mature PLM systems struggle when the question changes slightly.

Why was this design choice made?
Why was this supplier approved this time but rejected before?
Why did we accept this deviation instead of redesigning the part?

PLM usually cannot answer these questions, not because the data is missing, but because the reasoning was never captured. PLM remembers outcomes — records — not thinking. Engineers and managers recognize this gap immediately, because they live with it every day. When someone new joins a project or revisits an old decision, they often have to reconstruct the story from fragments: an ECO description, a revision note, maybe a comment or two. Most of the real context is gone.

AI Pressures PLM to Become Intelligent and Exposes Its Limits

For years, this limitation was tolerated. Humans are remarkably good at navigating ambiguity. Engineers handle exceptions, incomplete data, and conflicting constraints instinctively. They remember why something “felt right” at the time, even if the formal record is thin.

AI agents do not have that luxury. When organizations start asking AI to assist with engineering decisions — proposing changes, validating BOMs, resolving conflicts between engineering and manufacturing — the cracks become visible very quickly. AI asks questions that PLM cannot answer.

Which rule mattered more in this situation?
What precedent applies?
What was known at the time the decision was made?

This is often framed as a data problem, but it is not. The issue is not that PLM lacks parts, BOMs, or revisions. The issue is that PLM lacks the information that was never captured in the first place.

In practice, decision-making routinely escapes PLM. Engineers export EBOMs to Excel to collaborate. Manufacturing reshapes data into MBOMs offline. Suppliers and contractors exchange files and emails. Discussions happen in meetings and chats. Only after alignment is reached does the final decision return to PLM as a revised BOM, an updated CAD model, or an approved ECO.

From PLM’s perspective, the decision appears as a clean state transition. From a human perspective, it was anything but clean. AI doesn’t fail because PLM lacks data. It fails because PLM lacks the context that explains how the data came to be.

Context Graphs: A Missing Layer Between Data and Decisions

This is where the concept of context graphs becomes relevant — not as hype, but as a logical response to a structural gap.

A context graph is a living record of decisions and the reasoning around them. It captures inputs, constraints, policies, exceptions, alternatives, and outcomes, and connects them across objects, people, and time. It is not about defining another “correct” structure. It is about preserving the decision process that led to a structure being accepted as good enough to act on.
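To make the idea concrete, here is a minimal sketch of what such a record could look like in code. This is my own illustration, not a schema from the Foundation Capital article or from any PLM product; all class and field names are hypothetical assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative sketch of a context-graph record. Names are assumptions,
# not an established schema.

@dataclass
class Alternative:
    description: str
    rejected_because: str          # the reasoning PLM normally loses

@dataclass
class Decision:
    id: str
    question: str                  # e.g. "Replace supplier for part P-100?"
    constraints: list[str]         # inputs and policies known at the time
    alternatives: list[Alternative]
    outcome: str                   # what was finally done
    rationale: str                 # why the outcome was acceptable
    affects: list[str]             # PLM object ids: parts, BOMs, ECOs
    participants: list[str]
    decided_at: datetime = field(default_factory=datetime.now)
    precedents: list[str] = field(default_factory=list)  # earlier Decision ids

class ContextGraph:
    """Connects decisions to PLM objects and to each other across time."""

    def __init__(self) -> None:
        self.decisions: dict[str, Decision] = {}

    def record(self, d: Decision) -> None:
        self.decisions[d.id] = d

    def why(self, plm_object_id: str) -> list[Decision]:
        """Answer 'why does this object look the way it does?' by
        returning every decision that touched it, in time order."""
        return sorted(
            (d for d in self.decisions.values() if plm_object_id in d.affects),
            key=lambda d: d.decided_at,
        )
```

The point of the sketch is the `why` query: a revision history can tell you what changed, while this structure keeps the rejected alternatives and the rationale attached to the object they explain.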

It is important to be precise about what context graphs are not. They are not knowledge graphs in the traditional sense, cataloging facts. They are not digital threads that simply link artifacts. They are not another integration layer moving data between systems.

Context graphs are about why something was considered true enough to release, manufacture, or service.

In PLM terms, this matters because engineering decisions are collaborative and rarely binary. ECOs are not just approvals. Every ECO encodes tradeoffs, risk acceptance, and local reasoning. Alternatives were considered and rejected. Constraints changed mid-process. Supplier feedback altered assumptions. Manufacturing realities forced compromises.

Today, that context lives in emails, meetings, Slack threads, and people’s heads. PLM captures the result, but not the reasoning. A context graph captures the reasoning as a first-class artifact, connected to the objects it explains.

One Product, Many Data Surfaces — and No Rich Shared Context

The limitation becomes especially visible when looking at how product data flows across systems.

Engineering starts with CAD design and translates it into an EBOM expressing engineering intent. Manufacturing restructures that EBOM into an MBOM that reflects production reality and sequencing. Supply chain introduces substitutions and alternates, sometimes reflected in MBOMs, sometimes in ERP or MRP systems. Procurement generates orders. Sales and service teams work with SBOMs for maintenance and support.

Each of these data objects is correct. None of them is wrong. What is missing is the shared context that explains why these structures look the way they do.

Why was this specific EBOM structure produced from CAD, and what alternatives were considered along the way? How did supplier feedback translate into changes to the EBOM, and why were some suggestions accepted while others were rejected? Why was the MBOM reorganized, and why was a particular operation sequence chosen? Why was a component replaced during maintenance, and why was an alternate acceptable in one context but not another?

Traditional PLM focuses on data records, revision history, ECO records, and traceability links between CAD, BOMs, and revisions. Context graphs focus on the decision reasoning that connects those structures.

The real product memory — and the real digital thread — is not only the link between CAD and BOMs. It is the reasoning that led a company to create them, transform them, and release them across different stages of the lifecycle.

Extending PLM as the Natural Home for Context Graphs and Decision Support

If context graphs matter this much, the obvious question is where they should live.

PLM is a natural anchor because it already sits at the intersection of engineering, manufacturing, supply chain, and quality. PLM objects are decision surfaces: part approvals, BOM releases, supplier substitutions, deviation acceptances. These are exactly the moments where reasoning matters most.

Context graphs do not replace PLM objects. They extend them. They connect the object, the change, the rationale, the alternatives considered, and the human activity surrounding the decision. For managers and executives, this matters because it turns PLM into a system that explains why decisions happened, not just that they happened.

This is not about adding more fields or longer descriptions. It is about preserving product and organizational memory — the kind of memory that explains why a supplier was replaced or why a design compromise was accepted under pressure.

From PLM Workflow Engines to Collaborative Reasoning Workspaces

Architecturally, this shift changes how PLM is used. Traditional PLM workflows are approval-driven and focused on lifecycle state transitions. They work well for enforcing discipline, but poorly for capturing understanding. Context-driven PLM is different: it is non-linear, iterative, collaborative, and exploratory.

Future AI agents will need a better environment for capturing decisions; they will not simply run workflows. They will navigate context. That requires PLM to evolve from workflow engines into human-oriented collaborative reasoning workspaces. Workflows become collaborative tasks rather than controllers. States expand to include historical traces of actions and discussions. Decisions and activity histories become first-class entities.

Future PLM systems will not just orchestrate approvals. They will accumulate understanding of how decisions were made.

Product Memory: When Context Outlives the ECO and BOM Revision

A traditional PLM scenario records a revision approved by an ECO workflow. But the entire story behind it, the user activities, comments, tasks, and reasoning, is left outside and never captured. Engineers turn to Excel, email, Slack, and phone calls to work it out, and that context is lost in inboxes and meeting notes.

Context graphs are different. Over time, context graphs accumulate across projects, systems, and teams. They become organizational precedent and institutional memory. They form a reasoning dataset that humans and AI can learn from.

This enables faster onboarding, safer automation, better AI suggestions, and fewer repeated mistakes. New engineers can understand not just what the product is, but why it evolved the way it did. AI can suggest actions based on precedent rather than guessing in isolation.
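The precedent idea can be sketched as a simple retrieval step over accumulated decisions. This is a hypothetical illustration, assuming decisions were recorded with explicit constraint tags; a real system would likely match on semantics (e.g. embeddings) rather than exact string overlap.

```python
# Minimal sketch of precedent lookup over accumulated decisions.
# Decisions are plain dicts here; field names are illustrative assumptions.

past_decisions = [
    {"id": "D-101",
     "constraints": {"lead-time < 6 weeks", "no re-qualification budget"},
     "outcome": "accepted alternate supplier"},
    {"id": "D-102",
     "constraints": {"safety-critical part", "full re-qualification required"},
     "outcome": "rejected alternate supplier"},
]

def find_precedents(current: set[str], history: list[dict], top_n: int = 3) -> list[dict]:
    """Rank past decisions by constraint overlap (Jaccard similarity),
    so an AI assistant can propose actions grounded in precedent."""
    def score(d: dict) -> float:
        c = d["constraints"]
        return len(c & current) / len(c | current)
    return sorted(history, key=score, reverse=True)[:top_n]

# A new situation arrives with partially overlapping constraints:
matches = find_precedents({"lead-time < 6 weeks", "single-source risk"}, past_decisions)
print(matches[0]["id"])  # the closest precedent
```

Even this toy version shows the difference from a revision-history lookup: the system answers "what did we do last time under similar constraints?" instead of "what is the released revision?".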

Product memory is not a pile of documents. It is about remembering decisions.

What is my conclusion? 

What do context graphs mean for PLM architecture? Data records (CAD files, item and BOM revisions) remain necessary, but they are no longer sufficient. Context graphs require temporal awareness, semantic meaning, and human-machine co-creation. AI should sit inside collaborative workspaces, alongside decisions, not just on top of systems of record.

PLM started as a system of record for CAD files. It expanded to structured data and multiple BOMs. It controls formal change outcomes. But it still fails to capture the human interaction that happens outside the system.

Expanding PLM beyond systems of record — by combining context graph data models with collaborative workspaces — allows organizations to capture decision history, preserve knowledge, and finally turn PLM into a place where real work happens.

That is not a replacement for PLM. It is its next evolution.

Just my thoughts… 

Best, Oleg 

Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights.
