Every once in a while a technology story reveals something deeper than the headlines suggest.
Recently I followed the public conflict between Anthropic and OpenAI around defense contracts and the deployment of AI models inside government systems. Most of the public discussion revolved around politics, ethics, or the competitive dynamics between the companies. You can read more about Anthropic's position and OpenAI's contract perspective.
But the part that caught my attention had nothing to do with which model was smarter or who secured the larger contract.
What mattered was the realization that the AI systems in question had already become embedded in operational workflows. They were no longer experimental tools or optional analytics services. They had become part of the infrastructure used to support real operational decisions.
Here is the evidence: The Wall Street Journal reported that AI systems were already embedded in operational intelligence workflows during the U.S.–Israeli strikes on Iranian leadership facilities. The Claude model was used for intelligence analysis, target identification, and simulation scenarios. Even after a presidential directive ordered agencies to stop using Anthropic technology, the system reportedly remained in use because it had become deeply integrated into operational workflows. Once a digital system reaches that level of dependency, it stops being just a tool and effectively becomes part of the organization’s operational infrastructure.
Once a digital system reaches that point, the relationship between the organization and the technology changes. Replacing the tool is no longer a simple procurement decision. Teams build workflows around it. Processes adapt to it. Knowledge accumulates inside it. Removing it becomes disruptive not because of licensing or contracts, but because the organization has already internalized it as part of how work gets done.
Here is another perspective on the level of intelligence now interconnected with the data we operate on every day. An Israeli software developer, Yonatan Back, recently built a tool called StrikeRadar — a real-time dashboard estimating the probability of U.S. strikes on Iran. You can read more about it here. It is a small but telling example of the capability envelope we are now entering.
When I watched that story unfold, I found myself thinking about a very different domain: product development systems. The same dynamic has been quietly shaping engineering organizations for decades.
In product development, certain digital systems have already crossed that boundary between tool and infrastructure. CAD systems did it first. PLM systems attempted to extend it, but largely failed. And now a new architectural layer may be emerging that could define the next stage of enterprise engineering platforms.
To understand what that layer might look like, it helps to step back and revisit the earlier technological battles that shaped the industry.
The CAD Wars and the Ownership of the Digital Model
The first major platform battle in modern engineering software revolved around a deceptively simple question: who owns the digital model of the product?
During the transition from drawings to 3D design, systems such as CATIA, NX, Pro/ENGINEER, and later SolidWorks and others became central to how engineering organizations defined their products. The digital mock-up (DMU) and simulations replaced physical prototypes as the primary reference point for design collaboration. In aerospace and automotive programs, the digital model of the airplane or vehicle became the central artifact around which design, simulation, and manufacturing planning were organized.
Once an organization standardized on a particular CAD platform, the dependency became enormous. The geometry lived there. Much of the design intent lived there as well. Downstream tools for analysis, manufacturing preparation, and supplier collaboration were built around the same environment. Training programs, engineering methods, and supplier ecosystems all adapted to that model.
At that point CAD was no longer simply software used to create geometry. It had become the digital representation of the product itself, which made these systems so sticky that organizations could take a decade to move from one to another (Mercedes-Benz's migration from CATIA to NX is a well-known example).
But the CAD revolution also revealed a limitation. While CAD systems captured the geometry of the product, they did not capture the broader context of the product lifecycle. They did not explain why certain requirements existed, how engineering decisions were made, why suppliers were selected, or how manufacturing constraints influenced design choices.
The digital model was essential, but it was not sufficient.
The PLM Promise and the Lifecycle Expansion
Product Lifecycle Management emerged as an attempt to expand the digital scope beyond geometry. The idea was straightforward: if CAD captured the design, PLM would capture everything surrounding it.
Requirements, engineering structures, configurations, change processes, manufacturing planning, supplier collaboration, quality records, and service documentation could all be connected through a common lifecycle system. PLM promised to create the single source of truth for product information across the enterprise.
In many ways, PLM delivered real value. It introduced discipline to engineering change processes. It connected CAD models to part records and bill of materials structures. It helped companies structure product releases and maintain traceability across design revisions.
Yet over time another pattern became visible. Many PLM implementations evolved into systems that stored information and managed workflows, but they rarely created a coherent understanding of the product across the enterprise. And despite all the PLM strategies and marketing, those systems remain mostly grounded in engineering processes and stuck in PDM.
Organizations ended up with a landscape of specialized systems of record. Each system managed a particular domain of the lifecycle, but none of them captured the full product context. Over the last decade we've seen multiple efforts to build walled-garden PLM platforms, but reports about the success of these efforts are mixed. My immediate conclusion is that while PLM extended the reach of digital systems, it did not fully solve the problem of enterprise product understanding.
The Enterprise Filing Cabinet Problem
If we look at how product development actually operates today, the digital landscape resembles a collection of filing cabinets.
Mechanical design lives in CAD systems such as CATIA, SolidWorks, Creo, NX, or Fusion. Electronics design is handled by ECAD tools such as Altium, Cadence, or Mentor. Product structures and revisions may live in PDM or PLM systems. Requirements are often stored in tools like DOORS, Jama, or Polarion, or sometimes in spreadsheets and documents.
Manufacturing planning resides in ERP systems, MES platforms, or specialized process planning tools. Supplier interactions are managed through procurement systems, email threads, and vendor portals. Service history appears in field maintenance platforms or support systems. Quality records are captured in QMS systems.

Beyond these structured tools lies a rich context layer outside the systems of record – another layer of information that rarely fits neatly into any system. Design rationale often lives in design review meetings, internal discussions, and archived chat conversations. Key decisions may be captured in presentation decks or informal documents. In many cases the most important context lives in the experience of engineers who remember why certain choices were made.
Each system holds a piece of the puzzle. None of them holds the whole picture. PLM vendors came up with the idea of the digital thread, which makes sense and holds value. But a digital thread that relies on traditional product structures struggles with the real complexity of the product lifecycle.
The result is that synthesis still depends heavily on people. Experienced engineers and product leaders act as the connective tissue between systems. They remember the relationships between requirements and design decisions. They know which supplier substitutions were temporary and which became permanent. They understand which manufacturing constraints drove certain design compromises. The key is to build a digital thread connecting lifecycle information, including the rich PLM context that is lost in so many situations today.
When those people leave the organization, the data remains but the understanding often disappears.
From Data Storage to Product Memory
For many years the PLM conversation focused on building a single source of truth. The assumption was that if all product data could be stored in a unified system, the enterprise would gain full visibility into the product lifecycle.
But the experience of the past two decades suggests that the challenge is not simply storing data. Most organizations already have vast amounts of product data stored across systems.
What is missing is a persistent, evolving understanding of how that data fits together.
This is where the idea of product memory is emerging as a new PLM strategy. Product memory is not just a repository of files, records, and workflows. It is the accumulated knowledge of how the product evolved over time. The foundation of such a system is a product knowledge graph, an architectural element that is emerging in modern PLM platforms.
Product memory captures the rationale behind engineering decisions, the dependencies between subsystems, the trade-offs between suppliers, and the constraints introduced by manufacturing and service environments. It preserves historical context while continuously updating what is relevant to the current state of the product.
In other words, product memory represents a stateful enterprise-wide product context.

That context does not belong to any single system. It emerges from the relationships between many systems (e.g. engineering, manufacturing, and service BOM structures) and the knowledge accumulated through years of development work.
Building such a context is a very different challenge from building a database.
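To make this distinction a bit more concrete, here is a minimal sketch of how a product knowledge graph could carry rationale and effectivity alongside the links between records. The entity kinds, relationship names, and part numbers below are invented for illustration and are not taken from any vendor's data model.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """An entity in the product knowledge graph: a part, requirement, decision, supplier, etc."""
    id: str
    kind: str                 # e.g. "part", "requirement", "decision", "supplier"
    attrs: dict = field(default_factory=dict)

@dataclass
class Edge:
    """A typed, dated relationship that carries rationale - the 'why', not just the link."""
    src: str
    dst: str
    relation: str             # e.g. "satisfies", "substituted_by", "decided_in"
    rationale: str = ""
    effective_from: str = ""  # ISO date; memory is stateful, so relationships carry time

class ProductMemory:
    def __init__(self):
        self.nodes: dict[str, Node] = {}
        self.edges: list[Edge] = []

    def add(self, node: Node) -> None:
        self.nodes[node.id] = node

    def link(self, src, dst, relation, rationale="", effective_from="") -> None:
        self.edges.append(Edge(src, dst, relation, rationale, effective_from))

    def why(self, node_id: str) -> list[str]:
        """Collect the rationale attached to every relationship touching a node."""
        return [e.rationale for e in self.edges
                if node_id in (e.src, e.dst) and e.rationale]

# Example: a substituted part keeps the reason it was substituted
memory = ProductMemory()
memory.add(Node("P-100", "part", {"name": "Bracket, aluminum"}))
memory.add(Node("P-100-ALT", "part", {"name": "Bracket, steel"}))
memory.link("P-100", "P-100-ALT", "substituted_by",
            rationale="Supplier shortage during pilot build",
            effective_from="2023-06-01")
print(memory.why("P-100-ALT"))  # -> ['Supplier shortage during pilot build']
```

The point is not the data structure itself. The point is that every relationship carries the "why" and the "when" alongside the "what", which is exactly what a database of records does not preserve.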
Intelligence: Understanding Product Context
If a PLM Brain is to exist, it must begin with the ability to interpret product context correctly.
For years, the traditional approach was to build enterprise integrations. But those integrations technically synchronize data between systems rather than building a reliable contextual understanding; they are only a first step in the evolution of PLM digital thread architectures.
Engineering systems contain subtle distinctions that matter deeply in practice. A component might appear similar to another but represent a different lifecycle state or configuration. A substitute part might be approved only for certain plants or customers. A design revision might be released for manufacturing but not yet reflected in service documentation.
Without the ability to interpret these distinctions, any attempt to reason about product data quickly becomes unreliable.
Intelligence in this context does not simply mean artificial intelligence in the popular sense. It means the ability of a system to correctly interpret the meaning of engineering information and its relationships to other elements of the product lifecycle.
A PLM Brain must be capable of distinguishing between superficially similar information and truly relevant context.
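As a small, hedged illustration of what this interpretation involves, consider how even a trivial rule must combine several of these distinctions before two records can be treated as equivalent. The field names below (lifecycle_state, approved_plants) are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartContext:
    """Hypothetical attributes that distinguish superficially similar part records."""
    part_number: str
    revision: str
    lifecycle_state: str          # e.g. "in_work", "released", "obsolete"
    approved_plants: frozenset    # a substitute may be approved only for certain plants

def is_valid_substitute(candidate: PartContext, plant: str) -> bool:
    """Two records with the same part number are not interchangeable unless the
    revision is released AND the substitution is approved for this plant."""
    return (candidate.lifecycle_state == "released"
            and plant in candidate.approved_plants)

# Same part number, different context - only one is usable in the Austin plant
rev_b = PartContext("P-100", "B", "released", frozenset({"austin"}))
rev_c = PartContext("P-100", "C", "in_work", frozenset({"austin", "berlin"}))

print(is_valid_substitute(rev_b, "austin"))  # True
print(is_valid_substitute(rev_c, "austin"))  # False - not yet released
```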
Memory, Retrieval, and Execution
Beyond interpretation, a PLM Brain requires three additional capabilities.
First is durable memory. Product knowledge must persist over time, even as teams reorganize and products evolve. Decisions made during early design phases may have consequences years later during manufacturing or service operations. Preserving that knowledge requires a system capable of maintaining historical context while tracking what remains valid in the present.
Second is retrieval. Real engineering questions rarely involve simple data lookups. Instead they involve tracing relationships across time and across multiple enterprise systems.
An engineer might ask why a certain component was substituted during production. The answer could involve a supplier shortage, a reliability issue discovered in testing, a manufacturing constraint identified during pilot production, and a later engineering change that updated the design. Finding the relevant context across these events requires more than search; it requires understanding causal relationships.
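A rough sketch of what such retrieval could look like: instead of a keyword lookup, the system walks a chain of causally linked events until it reaches the root cause. The event identifiers and links below are invented for illustration only.

```python
# Causal chain behind an engineering change (all identifiers are illustrative)
events = {
    "ECO-2041":    {"caused_by": "TEST-0198",   "summary": "Design change updating bracket material"},
    "TEST-0198":   {"caused_by": "PILOT-0007",  "summary": "Reliability issue found in vibration testing"},
    "PILOT-0007":  {"caused_by": "SHORTAGE-55", "summary": "Manufacturing constraint identified in pilot run"},
    "SHORTAGE-55": {"caused_by": None,          "summary": "Supplier shortage of aluminum stock"},
}

def explain(event_id: str) -> list[str]:
    """Walk the 'caused_by' chain to reconstruct why a change happened."""
    chain = []
    while event_id is not None:
        chain.append(f"{event_id}: {events[event_id]['summary']}")
        event_id = events[event_id]["caused_by"]
    return chain

# "Why was this component substituted?" becomes a traversal, not a lookup:
for step in explain("ECO-2041"):
    print(step)
```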
Third is execution. Knowledge alone is not sufficient. A PLM Brain must support real operational actions. It must assist in evaluating engineering change impacts, guiding procurement toward the correct BOM revision, identifying affected serial numbers during service events, and validating whether manufacturing plans align with the latest design state.
Execution introduces another requirement: trust. If the system cannot be relied upon to interpret product context accurately, organizations will revert to manual verification and the benefits of the system disappear.
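To show what an execution-side check of that kind might look like, here is a small sketch that compares a manufacturing plan against the latest released design revisions. The record shapes and part numbers are assumptions made for this example.

```python
# Latest released revision per part, as the design side sees it (illustrative data)
latest_released = {"P-100": "C", "P-200": "A"}

# What the manufacturing plan currently references
manufacturing_plan = [
    {"part": "P-100", "revision": "B"},   # stale - design has moved on to rev C
    {"part": "P-200", "revision": "A"},
]

def validate_plan(plan: list[dict], released: dict) -> list[dict]:
    """Return the plan lines that do not match the latest released revision."""
    return [line for line in plan
            if released.get(line["part"]) != line["revision"]]

for line in validate_plan(manufacturing_plan, latest_released):
    print(f"Plan references {line['part']} rev {line['revision']}, "
          f"but the latest released revision is {latest_released[line['part']]}")
```

If a check like this cannot be trusted to reflect the actual design state, people will keep verifying manually, which is exactly the trust problem described above.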
Here is an idea of a PLM Brain architecture.

What is my conclusion? The Next Platform War Is About Owning Product Understanding
The concept of the digital thread has gained significant attention in recent years as a way to connect systems across the product lifecycle. The digital thread provides traceability between requirements, design artifacts, manufacturing data, and service information.
This connectivity is important. Connecting systems allows organizations to follow links between records, but it does not automatically produce the enterprise-level understanding needed to make complex product decisions.
What remains missing is the synthesis layer that can interpret these connections, preserve product memory, retrieve relevant context, and support trusted actions. Traceability alone does not create synthesis. I can see a strategic shift from workflows to product memory.
This synthesis layer is what I think of as the PLM Brain.
If such a system emerges, the competitive landscape of engineering software may change significantly. The next platform battle may not revolve around who owns the CAD file, the BOM structure, or the workflow engine. Instead it may revolve around who owns the enterprise-wide understanding of the product itself.
A PLM Brain would not replace existing systems overnight. CAD, PLM, ERP, MES, and other tools will continue to play essential roles. But a new architectural layer could emerge that synthesizes the knowledge across these systems and turns fragmented product information into coherent product understanding.
When that happens, the long-standing PLM holy grail may finally come into focus. It was never simply a single source of truth or a unified database.
It was the creation of a stateful, enterprise-wide product context capable of supporting how organizations actually design, build, and sustain complex products.
And that context is the foundation required to build a true PLM Brain.
If you’re interested in how product memory evolves into an AI-native engineering environment, I recommend reading my recent article about product memory and AI-native PLM architectures.
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights.
