A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

PLM as the Next Operational System: Goodbye Docs, Hello Collaborative Executable Artifacts

Oleg
7 September, 2025 | 13 min read

For decades, engineering and manufacturing work has revolved around documents. CAD files, specs, BOM spreadsheets, ECO PDFs, slide decks for design reviews – these artifacts carried the knowledge, but they also carried friction.

They were useful in a world of handoffs and email threads. But they don’t create a continuous space. They don’t check themselves. They don’t connect to the live system of product data.

Every handoff introduced risk, every copy drifted from the truth, and every decision lived inside a silo. PLM developed as a mostly engineering environment, not exposed to the rest of the company, and became very engineering/CAD-centric.

So-called “PDM” (and later PLM) databases were the place where the magic was supposed to happen – the data became connected. Although they created some sort of order, the disconnect continued. Even the best PLM platforms were monolithic sets of tools that required a lot of work and offered no easy way to connect disciplines, teams, and companies.

The shift underway is simple to state and big in impact: the unit of work is moving from documents to collaborative executable artifacts – living surfaces that can run queries, validate rules, trigger workflows, and coordinate people and machines in real time. This is more than a convenience; from my perspective, it’s a foundational change in how PLM will evolve into the next true operational system.

This is where PLM must go next: from a file-centric database record system to the operational system of product work, powered by executable artifacts on top of a shared, graph-based open product data platform.

Let’s talk about it in more detail…

From documents to executable artifacts

I recently came across a talk called Goodbye Docs, Hello Executable Artifacts. The core idea is that non-coders can now create interactive artifacts that collapse the chain from idea → spec → tool → result. Instead of writing a plan about what should happen, you produce a surface that actually does the work—runs the calculation, syncs the data, generates the view, sends the request. (YouTube)

The message hit home. Instead of writing a plan, you build an artifact that runs the calculation, checks the rules, and triggers the workflow. It’s not just describing the work—it’s executing it.

This resonates so much with PLM. Think about the handoffs we all know too well: engineering exports an EBOM to Excel, procurement massages it into ERP, manufacturing reviews it in yet another tool. By the time everyone looks at it, the information is already wrong. Executable artifacts solve this by connecting directly to live data and letting people collaborate in real time.

Proof points: the stack is reorganizing around “live work”

Here are a few examples that made me think about the trend:

  • AI work surfaces are becoming mainstream. Anthropic’s Artifacts turned chat outputs into editable, runnable workspaces—mini apps, docs, and tools you can iterate on side-by-side with the model. It reframes AI from a “chat box” into a co-building environment. (Anthropic)
  • OpenAI’s Canvas pushes in the same direction: a shared window where you and the model write, refactor, test, and ship work together. This isn’t a doc editor—it’s a collaboration surface that can contain code, data, and logic. (OpenAI)
  • The browser itself is becoming the operations console. Atlassian’s acquisition of The Browser Company (makers of Arc and the AI-centric Dia browser) is a loud signal: the front end of knowledge work is moving to an AI-native, workflow-aware browser that reaches into all your SaaS tools. That’s not a “nice browser.” That’s the shell where work executes. (Atlassian, The Verge)
  • Even the OS is shifting. Microsoft’s “Windows 2030” vision openly describes an AI-first user experience with agentic behavior embedded below the app layer. The platform is converging toward instruments that perceive context, act on intent, and coordinate tasks. (Windows Central)

If you connect these dots, the pattern is clear: work migrates from static artifacts to live, executable instruments that sit on top of trusted, connected data.

What is a “collaborative executable artifact” (for PLM people)?

For me, this is where things get exciting. Imagine a BOM page or an ECO form, not as a static document or screen, but as a living instrument. It connects directly to the product knowledge graph. It knows relationships, revisions, suppliers, alternates. It has rules built in – cost rollups, sourcing checks, compliance validations. And it supports collaboration right there: comments, approvals, assignments, all tied to the data.

This is the leap PLM needs to make. From being a system of record to being an operational system of work. Think of it as a living surface that combines:

  • Data views bound to live product graph nodes (items, revisions, alternates, suppliers, routings).
  • Logic (rules, scripts, constraints, calculations).
  • Workflows (approvals, notifications, procure/build actions).
  • Collaboration (comments, assignments, traceable conversations).
  • APIs/agents to pull/push from Excels, CAD, PDM, ERP, MES, procurement portals, and supplier data.

It looks like a BOM page, ECO screen, or supplier quote sheet—but it executes: it validates BOM consistency, explodes configurations, checks alternates against AVL, queries inventory, estimates lead time, and opens a change request with the right approvers pre-assigned.
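To make this idea tangible, here is a minimal sketch in Python of how such a surface might compose live data bindings, rules, and actions in one unit. The names and data are hypothetical – this is an illustration of the pattern, not any vendor’s API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Artifact:
    """A live surface: data bindings + rules + actions, in one unit."""
    name: str
    query: Callable[[], list[dict]]              # bound to live graph nodes
    rules: list[Callable[[dict], str | None]] = field(default_factory=list)
    actions: dict[str, Callable[[dict], None]] = field(default_factory=dict)

    def run(self) -> list[dict]:
        rows = self.query()                      # pull current data, not a copy
        for row in rows:
            row["issues"] = [msg for rule in self.rules
                             if (msg := rule(row)) is not None]
        return rows

# Hypothetical example: a BOM readiness surface with one AVL rule.
def fetch_bom():                                 # stand-in for a live graph query
    return [{"item": "C123", "supplier": "ACME", "approved_suppliers": ["ACME"]},
            {"item": "R77", "supplier": "NoName", "approved_suppliers": ["ACME"]}]

avl_rule = lambda r: (None if r["supplier"] in r["approved_suppliers"]
                      else f"{r['item']}: supplier {r['supplier']} not on AVL")

bom_surface = Artifact("MBOM readiness", query=fetch_bom, rules=[avl_rule])
for row in bom_surface.run():
    print(row["item"], row["issues"])
```

The point of the sketch: the rule fires every time the surface runs, against current data, instead of living in someone’s head during a document review.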

Why static “docs”, “spreadsheets”, and “files” fail modern product work

  1. Handoffs multiply errors. A spreadsheet exported from CAD, pasted into email, and copied to ERP is instantly out of date.
  2. Context dies at the boundary. A PDF ECO can’t “see” the MBOM or supplier lead times; it can’t enforce anything.
  3. Work is not testable. You can review a document; you can’t run it.
  4. No memory. Docs don’t accumulate operational knowledge; they spread it thinner.

This is a place where supporters of proven and tested PLM technologies come and say: “PLM software can do it today, but people (!) are not ready and need to change.” There’s something in this that I agree with—managing change is indeed important. Helena Gutierrez from Share PLM has been speaking about it for more than five years.

At the same time, all mature PLM technologies are two generations behind the change:

  1. They are built on 1990s database technologies (SQL), monolithic in nature, fragile, and unable to scale beyond a single organization. Check my article Rethinking PLM Monolithic Architecture to learn more.
  2. They face a widening gap with recent AI developments, which grows larger every day. Simply applying AI on top of outdated PLM won’t work—check my article for more details.

In my article today, I want to share how I see the growing vision and technology of executable artifacts fixing the limitations of existing PLM technologies and document-driven approaches. Executable artifacts create live, collaborative workspaces connected to real systems, adding a new intelligent layer that supports decision-making and enforces product development constraints right at the point of action.

PLM as the operational system: what it must do next

To earn the “operational system” role, PLM needs to supply three foundations and expose them as executable surfaces:

Product Knowledge Graph (I like to call it the xBOM data backbone)

  • Reference–instance structures across EBOM/MBOM/SBOM/Service BOM, variants, and effectivity—all addressable as graph objects.
  • Rich object references (vendors, alternates, documents, files, specs, compliance).
  • Time/version semantics and lineage so that every surface is traceable by design.
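Here is a small sketch of what “addressable as graph objects” could look like. The types are hypothetical, not any vendor’s schema; the point is that every reference carries identity, revision, and typed edges, so lineage is traceable by construction:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class NodeRef:
    item_id: str          # stable identity across revisions
    revision: str         # time/version semantics: every reference is pinned

@dataclass
class GraphNode:
    ref: NodeRef
    kind: str                                    # "item", "document", "supplier", ...
    edges: dict[str, list[NodeRef]] = field(default_factory=dict)
    # typed edges: "uses", "alternate_of", "supplied_by", "documented_by", ...

    def link(self, relation: str, target: NodeRef) -> None:
        self.edges.setdefault(relation, []).append(target)

# An EBOM line and its MBOM counterpart point at the same item identity,
# but each pins its own revision -- lineage stays traceable by design.
cap = NodeRef("C123", "B")
board = GraphNode(NodeRef("PCB-100", "3"), kind="item")
board.link("uses", cap)
board.link("supplied_by", NodeRef("ACME", "1"))
print(board.edges)
```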

Policy & computation close to the data

  • Rules engines for part numbering, change gates, sourcing constraints, quality checks.
  • Domain calculators: cost rollups, yield/throughput, configuration explosion, carbon footprint estimators.
  • Eventing: “when ECO moves to ‘Approved’, notify sourcing and open PO suggestions.”
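The eventing bullet above could be as simple as handlers subscribed to state transitions, so policy runs next to the data instead of in someone’s inbox. A sketch, with hypothetical event names:

```python
# A minimal event bus: handlers subscribe to state transitions.
from collections import defaultdict

handlers = defaultdict(list)

def on(event: str):
    def register(fn):
        handlers[event].append(fn)
        return fn
    return register

def emit(event: str, payload: dict) -> None:
    for fn in handlers[event]:
        fn(payload)

@on("eco.approved")                    # hypothetical event name
def notify_sourcing(eco: dict) -> None:
    print(f"notify sourcing: ECO {eco['id']} affects {eco['items']}")

@on("eco.approved")
def open_po_suggestions(eco: dict) -> None:
    print(f"draft PO suggestions for {eco['items']}")

emit("eco.approved", {"id": "ECO-042", "items": ["C123", "R77"]})
```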

Agent & API

  • First-class API endpoints and tool adapters (CAD, ERP, MES, MRP, SCM, QMS).
  • Agents that read/write the graph, run playbooks (“Where-Used for safety-critical alternates”), and negotiate with external systems (supplier portals, inventory services).
  • Security, audit, and observability built in.
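To give a flavor of agents calling well-typed, observable operations, here is a hedged sketch of a tool registry with a built-in audit trail. The tool name and payloads are illustrative assumptions, not a real agent framework:

```python
import json
from datetime import datetime, timezone

TOOLS = {}

def tool(name: str):
    """Register a typed operation that an agent is allowed to call."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("inventory.check")                 # hypothetical playbook step
def inventory_check(part: str) -> dict:
    # Stand-in for a call to an external inventory service.
    stock = {"C123": 42}
    return {"part": part, "on_hand": stock.get(part, 0)}

def call(name: str, **kwargs):
    """Every agent call is observable: tool, args, result, and timestamp."""
    result = TOOLS[name](**kwargs)
    print(json.dumps({"tool": name, "args": kwargs, "result": result,
                      "at": datetime.now(timezone.utc).isoformat()}))
    return result

call("inventory.check", part="C123")
```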

If you recognize some of my previous writing here – How to make AI work for PLM – you’re not wrong: multi-tenant cloud, graph-linked xBOM, file understanding, and agentic workflows are exactly the ingredients you need to make artifacts executable rather than “view-only” documents.

Concrete PLM scenarios (from docs → executable artifacts)

Take “Where-Used.” Traditionally, it’s just a query (often delivered as a report) used for impact analysis in processes like ECOs. Someone generates it, saves it, and emails it around; by the time it’s read, it’s out of date. Giving everyone live access to this information can make a real difference. The report approach might work in so-called “holistic PLM” implementations, but their adoption in SMB/SME companies is minimal, and even in large enterprises the actual level of “holistic” data support remains questionable. In the new model, “Where-Used” is a live artifact. You can ask a question like “Where is capacitor C123 used in assemblies shipping next month?” and it responds in context, applies classifications, proposes alternates, and sets up actions.
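As a sketch of how such a question could decompose against a product graph – toy data, hypothetical field names – the filter runs against live structures instead of an exported report:

```python
from datetime import date

# Toy data standing in for the product graph and the shipment schedule.
USES = {"PCB-100": ["C123", "R77"], "PSU-7": ["C123"]}
SHIPS = {"PCB-100": date(2025, 10, 3), "PSU-7": date(2026, 1, 15)}

def where_used(part: str, ship_before: date) -> list[str]:
    """'Where is C123 used in assemblies shipping next month?' as a live query:
    traverse uses-edges, then filter by the shipment window."""
    return [asm for asm, parts in USES.items()
            if part in parts and SHIPS.get(asm, date.max) <= ship_before]

print(where_used("C123", ship_before=date(2025, 10, 31)))   # -> ['PCB-100']
```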

Or think about ECOs. Instead of a PDF routed by email, the ECO is a surface that actually executes the change. It validates the impact, updates affected structures, opens tasks, syncs notifications, and captures the entire discussion.

Even sourcing can be turned into a cockpit. Imagine a workspace that binds supplier data, AVL rules, and inventory signals. It runs replenishment logic, flags risks, and drafts purchase orders for review. That’s not a document. That’s the work itself.

Here are a few specific examples:

  • MBOM readiness check.
    A live surface that ingests EBOM deltas, validates routings and substitutes, flags non-approved suppliers, and proposes MPN alternates with cost/lead-time tradeoffs—then kicks off a targeted change package.
  • Where-Used as a service.
    Not a report—an instrument: type a natural question (“Where is capacitor C123 used in safety-critical assemblies shipping next month?”). The artifact expands context, applies safety classifications, proposes mitigation (alternate + requalification), and schedules a meeting with the right owners.
  • Programmable ECO.
    The ECO isn’t a PDF form or a specific user interface in ALM tools, PLM tools, or a mix of both. It’s a collaborative real-time shared surface that executes the change: applies constraints, updates affected structures, opens quality tasks, syncs supplier notifications, and posts back confirmations. The comments and decisions are entangled with the data they impact.
  • Sourcing cockpit.
    A purchasing artifact binds AVL rules, live prices, and inventory signals. It runs replenishment logic, tests resilience (what-if lead time spike), and proposes POs with approvals.
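A sketch of the replenishment logic behind the sourcing cockpit above, including the what-if lead-time stress test. The thresholds and data are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    part: str
    on_hand: int
    reorder_point: int
    lead_time_days: int
    approved: bool            # AVL status of the preferred supplier

def replenish(signals: list[Signal], lead_time_spike: float = 1.0) -> list[dict]:
    """Run replenishment logic and a what-if lead-time stress in one pass."""
    proposals = []
    for s in signals:
        stressed = int(s.lead_time_days * lead_time_spike)
        if not s.approved:
            proposals.append({"part": s.part, "action": "flag",
                              "reason": "supplier not on AVL"})
        elif s.on_hand < s.reorder_point:
            proposals.append({"part": s.part, "action": "draft_po",
                              "qty": s.reorder_point - s.on_hand,
                              "lead_time_days": stressed})
    return proposals

signals = [Signal("C123", on_hand=40, reorder_point=100, lead_time_days=21, approved=True),
           Signal("R77", on_hand=500, reorder_point=200, lead_time_days=14, approved=False)]
print(replenish(signals, lead_time_spike=1.5))   # what-if: 50% lead-time spike
```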

Each of these replaces a document bundle with a shared, traceable, and testable surface that actually does the job.

Why the browser story matters to PLM

This is why Atlassian’s acquisition really caught my eye (Atlassian, The Verge). Atlassian buying The Browser Company (for Dia, the AI browser) is more than a headline. It suggests that the work shell—the place where people live all day—will integrate AI, context, and multi-app workflows right in the browser.

For PLM, that means the executable artifact can be delivered in the user’s native surface without forcing them through five portals. It’s a new distribution channel for operational PLM: put the artifact where the work lives.

If the browser becomes the hub for AI-native, executable surfaces, then PLM doesn’t need to force users into yet another portal. These artifacts can live right inside the environment where people already spend their day. In my view, this is an opportunity for PLM to finally break free from “yet another system” and become part of the natural flow of work.

From AI “assistants” to instruments of work

Over the last two years, we have seen an ocean of “co-pilots” and “AI assistants”. They are a very good first step. Early AI assistants wrote drafts, helped to compose emails, and checked responses. They are useful and improve productivity, but they are still doc-centric and end with “copy/paste”. Artifacts and Canvas evolve that into instruments: the model helps you build a tool that stays in place, bound to the data and the team. The artifact endures; it’s part of the operational memory. (Anthropic)

Microsoft’s OS vision reinforces the same end-state at the platform layer: agentic behaviors living beneath apps, orchestrating tasks and policies continuously. PLM should meet that wave by exposing well-typed, observable product operations that agents can call safely. (Windows Central)

If PLM exposes its data and workflows properly—through open APIs, graphs, and agent-ready surfaces—it can become the backbone of these instruments. That’s how PLM can graduate from a system-of-record to the actual operational system of the product lifecycle.

What this means for PLM leaders and builders (a short checklist)

So, what can we do today? Here is a strategy for PLM software developers and for companies choosing and implementing their next PLM software platform:

  1. Design for artifacts, not pages. Every “screen” should be a programmable instrument: data + logic + workflow.
  2. Treat the xBOM as code. Version it, test it, diff it, enforce policies as code (see the sketch after this list).
  3. Make agents first-class citizens. Provide tools to compose, simulate, and observe agentic playbooks (procurement, quality, service).
  4. Push collaboration into the surface. Comments tied to objects, approvals as state changes, tasks that change data—not side discussions.
  5. Ship via the browser. Meet users in AI-native shells where work already aggregates (Dia, Canvas, Artifacts, etc.). (Atlassian, Anthropic, OpenAI)
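To illustrate “policies as code” from item 2: a policy is just a testable function you can version, diff, and run in CI like any other code. A minimal sketch, assuming a simple list-of-dicts BOM representation:

```python
# Policies as code: version them, diff them, run them like tests.
def policy_unique_find_numbers(bom_lines: list[dict]) -> list[str]:
    seen, violations = set(), []
    for line in bom_lines:
        if line["find"] in seen:
            violations.append(f"duplicate find number {line['find']}")
        seen.add(line["find"])
    return violations

def policy_approved_suppliers(bom_lines: list[dict]) -> list[str]:
    return [f"{l['item']}: {l['supplier']} not approved"
            for l in bom_lines if not l.get("approved", False)]

POLICIES = [policy_unique_find_numbers, policy_approved_suppliers]

bom = [{"find": 10, "item": "C123", "supplier": "ACME", "approved": True},
       {"find": 10, "item": "R77", "supplier": "NoName", "approved": False}]

for policy in POLICIES:
    for violation in policy(bom):
        print(violation)
```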

At first it might sound scary, but you need to think about connected data, separating data from applications, and a native browser experience.

A practical near-term roadmap

No one can wait five years these days to develop a next-gen roadmap in your company. That pace is so ’95; you need to start here and now.

  • Begin by capturing multi-disciplinary engineering data in a cohesive way – data first!
  • Start with “Where-Used” and BOM readiness as executable artifacts. They touch engineering, manufacturing, and sourcing – perfect proving grounds.
  • Automate ECO routines as policy-driven instruments.
  • Expose procurement playbooks (AVL checks, alternates, lead-time risk) as callable agent tasks.
  • Instrument everything (events, metrics, lineage) so artifacts are observable and auditable.

Tie all of this to a multi-tenant, graph-based product model so the artifacts are truly collaborative, not per-user, customer-specific macros.

What is my conclusion? 

The industry has tried to fix “document chaos” by generating better documents and storing them in PLM databases. These databases eventually became a bottleneck, preventing teams and companies from working faster and making the right decisions. The real leap is to move beyond documents. Executable artifacts are the work, not the description of work. With AI-native surfaces, a product data (xBOM) backbone, and agentic orchestration, PLM can finally graduate from system-of-record to system-of-work—the operational system for the product lifecycle.

Don’t let PLM miss the current platform shift. PLM software vendors largely missed the move to the cloud, but they still have a chance to win the AI game.

Signals from the broader ecosystem—Claude Artifacts, OpenAI Canvas, Microsoft’s OS vision, and Atlassian’s AI browser bet—aren’t side stories. They’re the market telling us where work is going. PLM should lead there, not follow.

Just my thoughts… 

Best, Oleg 

Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools including PDM, PLM, and ERP capabilities. With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinion can be unintentionally biased.
