A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

How to Make AI Work for PLM: It’s All About Structured BOM Data

Oleg
3 August, 2025 | 8 min for reading

It doesn’t matter what you do these days: your social media and news feeds are overwhelmed with AI. Artificial Intelligence is making its way into engineering, product development, and manufacturing. Every day you can see new products and companies promising to solve problems using AI.

I can tell you even more: every day, I’m contacted by customers and prospects asking when we will ship the “AI” that solves all their problems – from organizing information and finding better solutions to configuring software and automating manual tasks. That’s okay – we are at the top of the hype cycle for AI. From automated document summarization to co-pilots embedded in design tools, the momentum is real. But as AI tools become more accessible, one uncomfortable truth is surfacing across the industry:

AI doesn’t actually do much, unless your data is ready for it.

And in the context of Product Lifecycle Management (PLM), I see it as the ability to figure out how to deal with Bill of Materials (BOM) data. What does “ready” mean for product (BOM) data? It means structured. Contextual. Tokenized. Machine-understandable.

Product data is fragmented across Excel sheets, buried in PDFs, or trapped inside disconnected PDM vaults and ERP systems. That’s not a technology limitation – it’s a data architecture problem. The data is siloed and disconnected. And that’s why, no matter what you plan to build with your AI project, the results are unlikely to satisfy anyone and very often produce something unpredictable that underperforms or fails completely in PLM environments.

In my blog today, I want to share a few ideas about how to think about an AI solution from a data perspective. This is where I currently see a big gap in the understanding of how to work with product structures (BOMs) and related information.

AI Loves Documents But Struggles with Spreadsheets

Large Language Models (LLMs) like GPT and Claude excel with text because natural language has structure. Sentences carry syntax, meaning, and context. Language is inherently tokenizable—words and phrases can be broken down into units that models can embed, interpret, and respond to intelligently.

So when you feed an LLM a requirements document or a product spec written in plain English, it performs well. It can summarize it. Ask questions about it. Even suggest edits or extract key concepts.

But product data, especially BOMs, is rarely in that form.

More often, BOMs live in spreadsheets and/or enterprise systems. They’re either flat (with implicit semantics) or they follow proprietary data models (e.g., a PLM system) with part numbers, descriptions, quantities, links, and version IDs. They’re not semantic. They don’t carry relationships. And they certainly don’t tell a story an AI can easily follow.

So, when you feed that spreadsheet to an AI tool, it doesn’t understand what it’s looking at. It doesn’t know which parts belong to which assemblies, what alternates are allowed, or which versions are active. It can’t differentiate between a part and an assembly, or between a preferred vendor and a legacy supplier. The result? Confusion. Or worse—confident-sounding but incorrect answers.

Tokenization: The Gateway to Intelligent Automation

To make AI effective, product data must be tokenized, not in the linguistic sense, but structurally. This means representing each product element (part, assembly, revision, supplier, configuration) as an object with clearly defined attributes and relationships.

For example:

  • A motor isn’t just a row in a table – it’s an object with mass, dimensions, cost, revision history, and sourcing rules.
  • A subassembly isn’t just a group of rows – it’s a node with child components, attached drawings, and downstream dependencies.
  • A change request isn’t a document – it’s a structured object (or event) with links to affected items, lifecycle transitions, and review workflows.

When you tokenize BOMs and product data in this way, you allow AI systems to interpret, traverse, and reason over them. You give meaning to the data and that’s what makes intelligent processing possible.
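To make the idea concrete, here is a minimal sketch (in Python, with hypothetical part numbers and attributes) of what “tokenizing” a BOM structurally can look like: each row becomes a typed object, and parent-child relationships are explicit instead of being implied by table layout.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: a BOM row becomes a typed object with
# explicit attributes, instead of an untyped row in a spreadsheet.

@dataclass
class Part:
    part_number: str
    description: str
    mass_kg: float
    cost: float
    revision: str

@dataclass
class Assembly:
    part_number: str
    description: str
    # The relationship is explicit: (child object, quantity)
    children: list = field(default_factory=list)

    def add_child(self, item, qty):
        self.children.append((item, qty))

    def total_cost(self):
        # Traverse relationships instead of summing a flat column
        total = 0.0
        for item, qty in self.children:
            if isinstance(item, Assembly):
                total += qty * item.total_cost()
            else:
                total += qty * item.cost
        return total

motor = Part("P-100", "DC Motor", mass_kg=1.2, cost=45.0, revision="B")
bracket = Part("P-200", "Mounting bracket", mass_kg=0.3, cost=4.5, revision="A")
drive = Assembly("A-001", "Drive unit")
drive.add_child(motor, 1)
drive.add_child(bracket, 2)

print(drive.total_cost())  # 45.0 + 2 * 4.5 = 54.0
```

Once objects and relationships are explicit like this, an AI system (or any program) can traverse and reason over the structure rather than guess at the meaning of columns.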

Why Graphs Are Essential for Multi-View BOM Architecture

Here’s the brutal truth that most engineers managing product information across multiple enterprise systems and many Excel files tend to avoid: spreadsheets can’t model relationships. Tables can’t handle perspectives.

When you’re managing multiple BOM views – engineering, manufacturing, service, procurement – you’re not just tracking parts and quantities. You’re navigating a complex, evolving web of relationships, dependencies, and contexts. These views don’t exist in isolation – they overlap, diverge, and update on different schedules.

That’s why the future of multi-view BOM management lies in the adoption of graph-based architecture.

In a graph data model, each item is a node, and its relationships – assemblies, revisions, alternates, sourcing options, operations – are edges. This creates a flexible, semantic network that can:

  • Represent nested hierarchies between assemblies and components
  • Capture design and manufacturing variations (e.g., EBOM vs. MBOM differences)
  • Track dependencies and propagate changes through related views
  • Model alternates, substitutes, and configuration-specific behaviors
  • Represent bill of process (BOP) workflows and effectivity windows

With a graph, you’re no longer stuck flattening context into static rows or duplicating data to maintain multiple BOM types. Instead, each node carries meaning. Each relationship is preserved. And the system becomes traversable, queryable, and explainable.

This is the missing layer that makes multi BOM types not only manageable, but scalable and AI-ready.
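A tiny sketch of the idea (Python, stdlib only; part names, views, and relation types are illustrative assumptions, not a real product model): items are nodes, and typed edges carry both the relationship and the BOM view it belongs to, which makes change propagation a graph traversal rather than a spreadsheet reconciliation exercise.

```python
# Hypothetical BOM graph: each edge carries the relation type and the
# BOM view (EBOM, MBOM) it belongs to. A real system would use a graph
# database; a list of typed edges is enough to show the principle.

edges = [
    # (parent, child, relation, view, qty)
    ("Drive-Unit", "Motor-100",   "contains",  "EBOM", 1),
    ("Drive-Unit", "Bracket-200", "contains",  "EBOM", 2),
    ("Drive-Unit", "Motor-100",   "contains",  "MBOM", 1),
    ("Drive-Unit", "Glue-300",    "contains",  "MBOM", 1),  # process material, MBOM only
    ("Motor-100",  "Motor-100B",  "alternate", "MBOM", 1),
]

def children(node, view):
    """All components of `node` in a given BOM view."""
    return [(c, q) for p, c, rel, v, q in edges
            if p == node and rel == "contains" and v == view]

def impacted(node):
    """Walk upward: every parent (in any view) affected by a change to `node`."""
    parents = {p for p, c, *_ in edges if c == node}
    for p in set(parents):
        parents |= impacted(p)
    return parents

print(children("Drive-Unit", "EBOM"))  # [('Motor-100', 1), ('Bracket-200', 2)]
print(impacted("Motor-100"))           # {'Drive-Unit'}
```

Note how the EBOM and MBOM share the same nodes but diverge through view-tagged edges – no duplication, and a change to `Motor-100` is traced to every affected parent with one traversal.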

One Product, Many BOMs – One Graph Behind It All

Imagine a single product represented in multiple BOM Types:

  • The EBOM captures design intent and engineering structure.
  • The MBOM defines supply chain and manufacturing planning.
  • The Production BOM aligns with MES and work instructions.
  • The Service BOM reflects delivered configurations and serialized components.

All these BOM types might contain the same part number – but each has a different context: where it appears, how it’s used, what constraints apply, which effectivity window governs it, and who owns the data.

A graph model allows you to represent this divergence without data duplication, while maintaining traceability and consistency across all views. It enables system-level queries like:

  • “What EBOM change will affect the MBOM used in Plant B?”
  • “Which components appear in all three views: EBOM, MBOM, and Service BOM?”
  • “Where is Part-1234 used as a substitute in service only, but not in production?”
  • “What downstream objects will be affected if Component-X is deprecated?”

Try asking those questions of an Excel BOM. You won’t get far. Try running these queries across multiple enterprise systems – you will be stuck for a long time synchronizing and aligning multiple proprietary models.
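Two of the queries above can be sketched in a few lines once every view is derived from a shared, structured model. This is illustrative only (hypothetical parts and roles, plain Python sets standing in for a real graph query language):

```python
# Illustrative only: each BOM view is a set of (part, role) pairs
# extracted from a shared graph; set operations answer cross-view queries.

views = {
    "EBOM":    {("Motor-100", "component"), ("Bracket-200", "component")},
    "MBOM":    {("Motor-100", "component"), ("Glue-300", "component")},
    "Service": {("Motor-100", "component"), ("Part-1234", "substitute")},
}

def parts(view):
    return {p for p, role in views[view]}

# "Which components appear in all three views: EBOM, MBOM, and Service BOM?"
in_all = parts("EBOM") & parts("MBOM") & parts("Service")
print(in_all)  # {'Motor-100'}

# "Where is Part-1234 used as a substitute in service only, but not in production?"
service_subs = {p for p, role in views["Service"] if role == "substitute"}
prod_parts = parts("EBOM") | parts("MBOM")
print(service_subs - prod_parts)  # {'Part-1234'}
```

In a production system these would be graph queries (e.g., in a graph database), but the point stands: when views share one structured model, cross-view questions become one-line intersections instead of multi-system synchronization projects.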

Product Knowledge Graphs Power AI Reasoning

AI thrives on relationships and context and that’s exactly what graphs provide. When product data is structured as a graph:

  • AI agents can traverse product structures and understand lineage
  • Queries become semantic, not just keyword or ID-based
  • Embeddings and vector searches can incorporate contextual signals
  • Simulations and validations can model real-world dependencies

So, let me summarize it: a graph transforms product data from something AI can read into something AI can reason over.

It’s the difference between having an index of parts and having a living, navigable map of the entire product ecosystem.

Structure Before Intelligence

There’s a lot of excitement right now about vector databases, semantic search, and RAG (retrieval-augmented generation). These are powerful tools. But they rely on one key assumption: the data they retrieve is structured enough to be meaningful when embedded and queried.

This is where many organizations need to acknowledge that work remains to be done. You should not skip structuring your product data before jumping to ideas about AI agents. Plugging an Excel BOM export into a language model and expecting design feedback or cost optimization insights is premature – you will get vague or wrong answers, hallucinations, or meaningless summaries.

Why? Because you can’t vectorize what you haven’t tokenized with correct semantics. And you can’t tokenize what you haven’t structured. If you want AI to deliver real value – system-level reasoning about your products – you need to start at the data layer.
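A small sketch of the difference (hypothetical part data; the serialization function is an illustrative assumption, not a prescribed format): a raw spreadsheet row carries almost no context for an embedding model, while a structured node can be serialized into text that carries its relationships with it.

```python
# Hypothetical sketch: the same part as a raw spreadsheet row vs. a
# structured node serialized with its relationships. Only the second
# form gives an embedding model the context discussed above.

raw_row = "P-100,DC Motor,1.2,45.00,B"  # what does 1.2 mean? where is it used?

node = {
    "type": "Part",
    "part_number": "P-100",
    "description": "DC Motor",
    "revision": "B",
    "parent_assemblies": ["A-001 Drive unit"],
    "views": ["EBOM", "MBOM"],
    "alternates": ["P-100B"],
}

def to_embedding_text(n):
    """Serialize a structured node into context-rich text before vectorizing."""
    return (f"{n['type']} {n['part_number']} ({n['description']}), "
            f"revision {n['revision']}, used in {', '.join(n['parent_assemblies'])}, "
            f"appears in views {', '.join(n['views'])}, "
            f"alternates: {', '.join(n['alternates'])}")

print(to_embedding_text(node))
```

The structured serialization is what a RAG pipeline would actually embed – the tokenization and structuring come first, and only then does vectorization become meaningful.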

What is my conclusion? 

Data comes first; intelligence follows. AI is not a shortcut. It works very powerfully on data that is semantically meaningful – text documents. To make the same thing happen with product data, organizations need to figure out how to tokenize it.

If your BOM is just an Excel file, it won’t scale. If your product structure is just a file folder hierarchy, it won’t adapt. If your data is spread across 5 different systems, AI won’t magically connect them unless you figure out how to tokenize the data and embed the semantics of their relationships.

But if your data is structured, connected, and semantic? Then AI becomes what it’s meant to be—a tool for insight, automation, and intelligent action.

Because in the end, you can’t automate what you can’t model.  And you can’t model what you haven’t structured.

Just my thoughts… 

Best, Oleg 

Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools, including PDM, PLM, and ERP capabilities. With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinions may be unintentionally biased.

Want to explore how OpenBOM is enabling intelligent, multi-view BOMs and AI-enhanced workflows? Visit www.openbom.com or dive into the OpenBOM blog for real-world examples.
