A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

From Data Control to Goal Achievement: AI’s Impact on PLM Workflows

Oleg
30 November, 2025 | 8 min read

I spent some time yesterday watching The Thinking Game, a documentary exploring DeepMind’s work and the broader trajectory of artificial intelligence. One idea in the film resonated strongly with the discussions happening in engineering software today. Rather than focusing on whether AI can “think,” the documentary emphasizes a different, more grounded question: what AI should solve. This distinction is simple but meaningful. It shifts the conversation away from speculation and toward the practical implications of applying AI to real technical and operational problems.

That framing connects directly with the theme of my blog post from yesterday, where I wrote about the growing misuse and dilution of the phrase "AI-native PLM." The industry has started to deploy the term so broadly that it risks losing any connection to what is technically required for AI to support engineering work. This is why I introduced a capability model built on three pillars: semantic data modeling, openness, and workflow intelligence. These capabilities form the groundwork for what AI can realistically achieve inside PLM systems.

Today, I want to build on that foundation and explore what these capabilities enable. When they are combined, they point to a much larger structural shift taking place in PLM: a movement from data-control architectures toward systems oriented around achieving engineering, manufacturing, and supply-chain goals. This shift is becoming more visible as AI tools begin to interact with engineering data and engineering workflows in new ways.

We are still early in this transformation. Most PLM vendors, with very few exceptions, are in the exploratory or prototyping stages of applying AI at the architectural level. Customers are also early in their adoption and experimentation. Many of the ideas discussed here represent ongoing research and discovery—not final answers. Still, the directional trends are becoming clearer, and it is worth articulating them in a structured way.

Revisiting the Three Capability Pillars

Before moving into the broader shift, it is useful to recap the three pillars, because they frame the discussion.

The first pillar is semantic data modeling. Without semantic, object-based, multi-view, and relationship-rich data models, AI has nothing meaningful to work with. It cannot reason about design intent, product structure, configurations, engineering logic, or dependencies. Traditional PLM systems built around files, document vaults, and hierarchical folders do not provide the data structures required for AI reasoning.
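To make the contrast concrete, here is a minimal sketch (all class and field names are illustrative, not any vendor's schema) of what an object-based, relationship-rich model looks like compared to a file vault. A vault stores opaque blobs under paths; an object model exposes typed parts and relationships that software, including an AI agent, can traverse and query:

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    number: str
    description: str
    attributes: dict = field(default_factory=dict)  # e.g. material, mass, cost

@dataclass
class BOMLine:
    parent: Part
    child: Part
    quantity: int
    relationship: str = "consists-of"  # a typed edge, not a folder hierarchy

def where_used(part: Part, bom: list[BOMLine]) -> list[Part]:
    """Answer a question a file vault cannot: which assemblies use this part?"""
    return [line.parent for line in bom if line.child.number == part.number]

bolt = Part("P-100", "M6 bolt", {"material": "steel"})
bracket = Part("A-200", "Mounting bracket")
bom = [BOMLine(bracket, bolt, quantity=4)]
print([p.number for p in where_used(bolt, bom)])  # ['A-200']
```

The point of the sketch is not the data structure itself but the queryability: "where used," "what changed," and "what depends on this" become cheap questions to ask, which is exactly what reasoning over design intent requires.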

The second pillar is openness and integration. AI must be able to observe and act. Read/write APIs, event streams, webhooks, and integrated connections to CAD, PLM, PDM, ERP, and procurement systems are essential. In an environment where data is locked behind proprietary interfaces or cannot be accessed in real time, AI becomes limited to surface-level tasks such as interface assistance or passive analysis.
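As an illustration of "observe and act," here is a minimal publish/subscribe sketch. The event names and payload fields are hypothetical, not any specific PLM API; in practice the same pattern shows up as webhooks or a message bus that downstream agents subscribe to:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal pub/sub: an open system publishes change events that
    other systems and AI agents can observe in real time."""
    def __init__(self):
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
seen = []
# A hypothetical AI sourcing agent watches ECO approvals to re-check supply.
bus.subscribe("eco.approved", lambda e: seen.append(e["eco_id"]))
bus.publish("eco.approved", {"eco_id": "ECO-042", "affected_parts": ["P-100"]})
print(seen)  # ['ECO-042']
```

Without this kind of outbound signal, an AI agent can only poll or scrape, which is why closed, batch-oriented interfaces cap AI at surface-level assistance.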

The third pillar is workflow intelligence. This is where AI begins to show practical value—participating in ECO reviews, validating BOMs, analyzing manufacturability, recommending alternates, validating compliance constraints, or coordinating sourcing activities. These workflows are where engineering work happens. This is also where AI will have the most visible impact in the near term.
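One of those workflow steps, BOM validation before release, can be sketched as a small rule set. The field names and rules below are illustrative assumptions, not a real PLM schema; in an AI-enabled workflow the rules would be complemented by reasoning over context, but the shape of the step is the same:

```python
def validate_bom(lines: list[dict]) -> list[str]:
    """Return human-readable issues; an empty list means the BOM can release."""
    issues = []
    for line in lines:
        if line.get("quantity", 0) <= 0:
            issues.append(f"{line['part']}: quantity must be positive")
        if not line.get("approved_supplier"):
            issues.append(f"{line['part']}: no approved supplier")
        if line.get("lifecycle") == "obsolete":
            issues.append(f"{line['part']}: obsolete part, suggest alternate")
    return issues

bom = [
    {"part": "P-100", "quantity": 4, "approved_supplier": "Acme", "lifecycle": "active"},
    {"part": "P-200", "quantity": 0, "approved_supplier": None, "lifecycle": "obsolete"},
]
for issue in validate_bom(bom):
    print(issue)
```

A run over this sample BOM flags P-200 three times (quantity, supplier, lifecycle) and passes P-100 cleanly, which is exactly the kind of pre-release gate that today requires a manual review pass.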


These pillars describe the capabilities required for AI to deliver meaningful results. But they also lead to a broader transformation that is already underway in PLM.

From Capabilities to Consequences

Once the three pillars are in place, a fundamental shift begins to take shape. The center of gravity in PLM moves away from data control—single sources of truth, vaults, permissions, state transitions—toward systems designed to help teams achieve operational goals.

Engineering teams do not seek PLM systems for their own sake. They seek them because they face challenges that need to be solved: meeting cost targets, preparing customer quotes, completing design reviews, validating BOMs before release, processing ECOs, finding alternates during supply constraints, supporting compliance, ensuring manufacturability, or coordinating procurement. These are the day-to-day tasks that define engineering execution. AI makes these tasks more directly accessible and automatable.

This leads to the concept of goal achievement. In an AI-enabled PLM environment, the system becomes oriented around supporting and accelerating the work teams are trying to complete. Instead of enforcing control over data, the system becomes an operational tool designed to improve outcomes. Goal achievement becomes the unit of value—not data consistency or repository governance.

Workflows are where this shift becomes most visible.

How Workflows Evolve as AI Enters PLM

Traditional PLM workflows were intentionally designed to replicate structured document processes. They moved information from state to state, enforced permissions, and automated predefined sequences. They functioned well in environments where systems lacked intelligence and where human oversight was required to avoid errors.

AI-enabled workflows behave differently. They do not simply enforce rules or move data across preconfigured paths. Instead, they use reasoning to support the goals teams are trying to achieve. They rely on continuous flows of data from multiple systems, not a static centralized repository. They adapt based on constraints, exceptions, and new information. They provide guidance rather than transactional routing.
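The contrast can be sketched in a few lines. The states and actions below are hypothetical simplifications: a traditional workflow walks a preconfigured path regardless of context, while a goal-directed step chooses the next action from what the data currently says:

```python
# Traditional: a fixed route, enforced regardless of context.
FIXED_PATH = ["draft", "review", "approve", "release"]

def next_fixed(state: str) -> str:
    return FIXED_PATH[FIXED_PATH.index(state) + 1]

# Goal-directed: the next step is derived from current constraints and data.
def next_goal_directed(state: dict) -> str:
    if state["open_issues"]:
        return "resolve_issues"
    if not state["supplier_confirmed"]:
        return "check_supplier_availability"
    return "release"

print(next_fixed("review"))  # 'approve'
print(next_goal_directed({"open_issues": [], "supplier_confirmed": False}))
# 'check_supplier_availability'
```

Real AI-enabled workflows are obviously richer than a few `if` branches, but the structural difference holds: the route is computed from goals and constraints rather than configured in advance.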

This is why workflows are becoming the practical location where AI will deliver the earliest value in PLM. They become active systems rather than mechanical sequences. They support engineering decisions instead of simply automating transitions. In this sense, PLM begins to shift from a control-based architecture toward a dynamic orchestration layer that helps teams achieve results.

To understand this transformation more clearly, it helps to examine how each of the three capability pillars powers the shift.

How the Three Pillars Power the Shift Toward Goal Achievement

I’ve been asking myself what truly enables AI to move from assisting tasks to actually achieving goals in PLM. The more I explore it, the clearer it becomes that everything hinges on three fundamental pillars—how we model data, how we connect systems, and how we shape the workflows where real work happens.

Pillar 1: The End of Data Fortresses

Semantic data replaces document-centric vaults and fragmented repositories. When information is structured as objects with relationships, attributes, configurations, and constraints, AI can begin to interpret the data. This eliminates the dependency on hierarchical, file-driven storage and shifts PLM away from rigid data-control architectures.

Without rich, connected data, AI simply cannot reason. Semantic modeling is a prerequisite for any meaningful transformation. It breaks down the traditional fortress mentality and replaces it with a flexible, interpretable data fabric. This is not a cosmetic change; it is a foundational restructuring of what PLM data needs to be in an AI-driven environment.

Pillar 2: Continuous Flow Replaces the “Single Source of Truth”

AI does not operate on the idea of a static single source of truth. It relies on signals, event streams, and context from multiple systems. Engineering data, manufacturing updates, supplier availability, quality alerts, customer requirements, and regulatory information all play a role in decision-making. For AI, truth becomes a computed state, assembled dynamically from multiple sources rather than stored permanently in a single repository.

This is a significant departure from traditional PLM thinking. The SSOT model worked well when systems were isolated, but it becomes restrictive in an environment where AI must synthesize information continuously. Continuous flow becomes the operational model. Under this paradigm, PLM shifts from storing truth to computing it.
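A minimal sketch of "computing truth," under the assumption (mine, for illustration) that each system emits timestamped field updates: the current answer is assembled per field from whichever signal is most recent, rather than read from one stored record:

```python
from datetime import datetime

def compute_part_status(signals: list[dict]) -> dict:
    """Merge per-field signals, keeping the most recent value of each field."""
    latest: dict[str, tuple[datetime, object]] = {}
    for s in signals:
        ts = s["timestamp"]
        for field_name, value in s["fields"].items():
            if field_name not in latest or ts > latest[field_name][0]:
                latest[field_name] = (ts, value)
    return {k: v for k, (_, v) in latest.items()}

signals = [
    {"source": "PLM", "timestamp": datetime(2025, 10, 5),
     "fields": {"revision": "A", "lifecycle": "in_work"}},
    {"source": "PLM", "timestamp": datetime(2025, 11, 1),
     "fields": {"revision": "B", "lifecycle": "released"}},
    {"source": "ERP", "timestamp": datetime(2025, 11, 20),
     "fields": {"on_hand": 120}},
]
print(compute_part_status(signals))
```

Here the newer PLM signal supersedes the older one and the ERP signal contributes a field the PLM never owned, so the "truth" about the part exists only as this computed merge, which is the core of the continuous-flow model.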

Pillar 3: Workflows Become the Product

Once AI can interpret data and act across systems, workflows naturally become the main product. They evolve from support mechanisms into the primary interface for achieving goals. With AI reasoning embedded in workflows, PLM systems stop functioning as data warehouses and begin functioning as orchestration platforms.

Workflows become the place where semantic data and openness converge. They provide context, intelligence, and direction. This is not a marketing adjustment or a user interface improvement. It is the result of architectural changes that shift the role of PLM from management to orchestration. In effect, workflows become the operational product that organizations interact with, while the underlying data and integration layers support the reasoning required to achieve outcomes.

The Architectural Challenge for Legacy PLM Systems

Many legacy PLM systems were architected around assumptions that are difficult to reconcile with AI-driven workflows. The SSOT model creates silos when enforced through proprietary repositories. Data locking and check-in/check-out processes restrict the real-time access AI agents require. And relational database architectures optimized for transactional integrity do not lend themselves to real-time reasoning or continuous data flow.

These issues are structural. They cannot be addressed by layering AI features onto existing platforms. They require a rethinking of PLM architecture from the ground up, beginning with semantic modeling and extending through openness and workflow intelligence. Some vendors are beginning to explore these changes, but the industry is still very early in this transition.

What is my conclusion? 

The insight from The Thinking Game—that the most important question is what AI should solve—offers a strong lens for evaluating PLM’s direction. PLM will increasingly be judged by how effectively it supports intelligent, adaptive, workflow-driven goal achievement across engineering, manufacturing, and supply chain processes. This evolution will require systems designed for data interpretation, continuous flow, and orchestration rather than data storage and control.

There is still uncertainty about how fast this transformation will occur and what forms it will take. The industry is researching, experimenting, and learning. AI capabilities are advancing quickly, but organizational adoption and system redesign will take time. Workflows are emerging as one of the clearest areas where AI can deliver near-term value, and I expect this space to evolve rapidly.

This shift—from data control to goal achievement, from SSOT to continuous flow, from document workflow to reasoning workflow—marks a substantial change in how PLM will be designed and evaluated in the coming years. It is worth following closely and discussing openly.

Just my thoughts…

Best, Oleg 

Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaboration and integration services across engineering tools, with PDM, PLM, and ERP capabilities. Interested in the OpenBOM AI Beta? Reach out to me to discuss the future of Agentic Engineering Workflows.

With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinion can be unintentionally biased.
