The AI harness is an interesting topic. Over the past year, most discussions about AI have focused on models. Which model is better? GPT, Claude, Gemini, or something else? Benchmarks, reasoning scores, and token costs dominate the conversation.
But if you look a bit deeper, you will notice something interesting: the model itself is only the brain. What actually determines how useful AI becomes in real work is the environment around the model. This environment includes the systems it can access, the data it can see, the tools it can operate, and the memory it can accumulate over time. This environment is what many people are now calling the AI harness.
A recent example that created quite a bit of buzz is the open-source project OpenClaw. Many headlines framed it as a new AI system, but technically OpenClaw didn’t introduce a new model at all. Instead, it demonstrated something else: a structured environment where existing models could interact with tools, workflows, and data. In other words, OpenClaw was primarily a harness.
An AI harness defines how intelligence connects to real-world tasks. It determines whether AI simply answers questions or actually participates in workflows. In some systems, the harness allows AI to read files and documents. In others, it can operate tools, run processes, or analyze structured data. The harness controls context, permissions, and connections between systems, shaping how AI interacts with the surrounding environment. In practice, this means the harness often matters more than the model itself.
What Is an AI Harness?
The same model can produce dramatically different results depending on the harness where it runs. And as AI becomes more embedded in engineering and manufacturing systems, understanding the harness architecture may become one of the most important design decisions for future PLM platforms.
I came across an interesting video by Nate Jones discussing AI harnesses. In his experiments, he compared AI models: GPT versus Claude, benchmark against benchmark, looking at token costs, reasoning quality, and coding performance. He demonstrated that the model itself is only the brain. The real system where the work happens is the harness.
The harness is everything around the model. It defines where the AI runs, what tools it can use, what information it remembers, and how it interacts with real work. The harness determines whether the AI lives inside your environment or in a sandbox somewhere else. It determines whether it accumulates context over time or starts from scratch every session. It determines whether it can access tools, run workflows, or coordinate multiple tasks.
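To make the idea concrete, here is a minimal sketch of what a harness does around a model: it assembles context, exposes tools under its own control, and accumulates memory across sessions. All names here (`Harness`, `stub_model`, the part lookup) are illustrative assumptions, not any real product's API; the "model" is a stub so the loop structure stays visible.

```python
# Minimal sketch of an AI harness: the loop around a model that supplies
# tools, context, and persistent memory. Names are illustrative only.

class Harness:
    def __init__(self, tools, memory=None):
        self.tools = tools          # callable capabilities the model may invoke
        self.memory = memory or []  # context that persists across sessions

    def run(self, model, request):
        # 1. Assemble context: the harness decides what the model gets to see.
        context = {"request": request,
                   "memory": list(self.memory),
                   "available_tools": list(self.tools)}
        # 2. Ask the model what to do (stubbed here).
        action = model(context)
        # 3. Execute the chosen tool under harness control
        #    (this is where permissions and boundaries live).
        result = self.tools[action["tool"]](*action["args"])
        # 4. Accumulate memory so the next session does not start from scratch.
        self.memory.append((request, result))
        return result

# Stub "model": always chooses the part-lookup tool.
def stub_model(context):
    return {"tool": "lookup_part", "args": (context["request"],)}

parts = {"P-100": "Bracket, aluminum"}
harness = Harness(tools={"lookup_part": lambda pid: parts.get(pid, "unknown")})
print(harness.run(stub_model, "P-100"))  # -> Bracket, aluminum
print(len(harness.memory))               # -> 1
```

Swapping the stub for a real model changes nothing structurally: the harness still decides what the model sees, what it can touch, and what it remembers.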
The most striking example in Nate's video is a benchmark presented earlier this year. The same model, identical weights, identical training, produced dramatically different results depending on the harness where it ran. In one environment it scored around 78%, while in another it dropped to 42%.
This example immediately resonated with me because it highlights something we are beginning to see across the AI ecosystem. Models are improving quickly and converging in capability. But the systems built around them are diverging. Those systems are what ultimately determine whether AI becomes useful inside real workflows.
When I watched Nate's video, I immediately started thinking about engineering and PLM systems. Because if harness design already matters in coding environments, it matters even more in engineering environments, where context is far more complex and decisions unfold across long product lifecycles.
And that leads to a much more interesting question. Instead of engaging in excessive marketing of "AI-native PLM" and arguing about future PLM intelligence, we need to ask something else entirely:
Which AI harness should engineering build around to support AI adoption in engineering and manufacturing organizations?
Why AI Harness Architecture Matters for Engineering
In engineering, intelligence without context is rarely useful.
A language model can summarize a document, analyze a drawing, or answer a question about a specification. Those capabilities are helpful, but they do not automatically translate into engineering productivity. Real engineering work requires understanding structures, dependencies, revisions, and lifecycle relationships that extend far beyond any individual document.
Products are not static objects. They evolve over time. Components change. Suppliers shift. Requirements evolve. Manufacturing constraints appear. Service teams discover problems in the field. Engineering decisions propagate across the system in ways that are rarely obvious.
The harness determines whether AI can see that complexity or not.
If the harness only exposes files, the AI will read files. If the harness only exposes a single tool, the AI will optimize that tool. But if the harness exposes the relationships that define the product itself, AI can begin to reason about the product rather than simply talk about it.
That distinction is important because it defines the difference between AI assistants and AI-powered engineering intelligence.
Models alone do not provide that intelligence. The harness does.
Tool-Centric AI Harness in PLM Systems
If we look at how AI is currently being introduced into engineering software, most vendors have started with the easiest possible approach.
They embedded AI inside existing tools.
CAD systems now include design assistants. PLM systems offer chatbots that help search data or summarize documents. ERP systems provide AI suggestions inside procurement or planning modules. Every application is becoming “AI-enabled.”
This approach represents what we can call the tool-centric harness.
In this architecture, AI lives inside the boundaries of a single application. The assistant sees the information available in that application and helps users operate it more efficiently.
There is nothing wrong with this approach. In fact, it delivers quick wins. Engineers can ask questions about designs, find information faster, and automate small tasks that previously required manual effort.
But the limitation becomes obvious when we look at the broader engineering landscape.
Products do not exist inside one tool.
A product spans multiple systems: CAD environments, BOM management platforms, supplier databases, procurement systems, compliance tools, manufacturing planning environments, and service documentation. Each system captures a piece of the product’s lifecycle.
When AI lives inside a single tool, it only sees a fragment of that lifecycle. The CAD assistant knows about geometry. The PLM assistant knows about documents. The ERP assistant knows about suppliers. None of them understand the product as a whole.
The result is what I would call AI fragmentation. Every tool becomes smarter, but the intelligence remains local.
This is why the tool-centric harness is a useful starting point but not a complete solution for engineering AI.
File-Centric AI Harness for Engineering Data
Another emerging harness model is what we can describe as the file-centric approach.
Think about the last 30 years of PDM evolution: we are still largely using files. In this architecture, AI operates primarily on files. It analyzes CAD exports, drawings, spreadsheets, specifications, PDFs, and other artifacts that engineers produce every day. Many AI tools today are built around this concept. They ingest large document collections and allow users to query them conversationally.
The file-centric harness is attractive because it matches how many engineering organizations already operate. Engineering knowledge often lives in files stored across shared drives, document repositories, and cloud storage systems. AI tools can quickly add value by helping engineers search, summarize, and analyze this information.
This is why file-centric AI is often the easiest path to adoption. Organizations do not need to change how they work. They simply connect AI to existing data sources and begin exploring what it can do.
However, files are not the same thing as product knowledge.
Files capture snapshots of information. They represent outputs of engineering work rather than the relationships that define the product. A drawing may show a component. A spreadsheet may list a bill of materials. A document may describe a requirement. But the deeper connections between these artifacts often remain implicit.
When AI operates only on files, it must reconstruct those relationships every time it answers a question. Sometimes it succeeds. Sometimes it guesses. The underlying structure of the product remains hidden.
File-centric harnesses therefore provide powerful assistance for reading and interpreting engineering artifacts, but they rarely capture the full context required for lifecycle decision-making.
Workflow-Centric AI Harness for Manufacturing Processes
A more sophisticated harness model emerges when AI is connected to engineering workflows.
In a workflow-centric harness, AI participates in the processes that govern engineering activities. Instead of simply reading documents, AI helps manage tasks such as engineering change requests, release approvals, compliance checks, procurement planning, and impact analysis.
Engineering organizations rely heavily on these processes to coordinate work across teams. Changes must be reviewed. Designs must be validated. Manufacturing implications must be considered. Suppliers must be involved. Workflow systems provide the structure that ensures these activities happen in a controlled and traceable way.
Connecting AI to workflows allows the system to observe and assist with these processes. AI can help identify dependencies between changes, highlight missing approvals, or analyze potential impacts across departments.
This harness model is stronger than file-centric approaches because it connects intelligence to the operational structure of engineering work.
However, workflow-centric systems still rely heavily on the quality of the underlying data. If the product structure is fragmented across systems or poorly connected, workflows simply move incomplete information through the process.
In other words, workflows improve coordination, but they do not necessarily solve the deeper challenge of product understanding.
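One simple way to picture a workflow-centric harness is AI checking the state of a process object, such as an engineering change request, against the rules that govern it. The ECR structure, role names, and required-approval set below are invented for illustration; they do not correspond to any specific PLM system's API.

```python
# Hedged sketch: a workflow-centric harness exposing an engineering change
# request (ECR) to an AI assistant, which can flag missing sign-offs.
# Roles and data shape are illustrative assumptions.

REQUIRED_APPROVALS = {"engineering", "manufacturing", "procurement"}

def missing_approvals(change_request):
    """Return the required sign-offs this ECR still lacks, sorted by role."""
    granted = {a["role"] for a in change_request["approvals"] if a["approved"]}
    return sorted(REQUIRED_APPROVALS - granted)

ecr = {
    "id": "ECR-042",
    "approvals": [
        {"role": "engineering", "approved": True},
        {"role": "manufacturing", "approved": False},
    ],
}

print(missing_approvals(ecr))  # -> ['manufacturing', 'procurement']
```

The check itself is trivial; the point is that the harness gives AI visibility into the process state. What it cannot fix is incomplete product data flowing through that process, which is exactly the limitation described above.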
Product Memory Harness: The Future of Engineering AI
The most promising harness model for engineering is what I previously described as the product memory-centric approach.
In this architecture, AI operates on a persistent representation of the product and its lifecycle relationships. Instead of working on isolated files or individual workflows, AI interacts with a connected model of the product that captures structures, revisions, dependencies, and decisions over time.
Product memory includes elements such as product structures, multi-view bills of materials, revision histories, supplier relationships, requirements, documents, manufacturing views, and lifecycle states. These elements form a connected network describing what the product is and how it evolves.
When AI operates on this type of structure, something important changes.
It no longer needs to infer relationships between artifacts because those relationships already exist. The system understands how components connect, which revisions belong to which assemblies, how suppliers relate to parts, and how changes propagate through the system.
This allows AI to support much deeper forms of reasoning.
Instead of answering isolated questions, it can help engineers understand consequences.
What components will be affected by a design change?
Which suppliers are exposed to a revision update?
How will a modification impact manufacturing configurations?
Which requirements depend on a particular subsystem?
These types of questions require understanding the product as a system. Product memory provides the harness that makes this possible.
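A minimal sketch of what "product memory" means in practice: relationships between parts, assemblies, suppliers, and requirements stored as an explicit graph, so impact questions become traversals rather than guesses reconstructed from files. The item IDs and edge kinds below are invented for illustration; a real product memory would carry far richer semantics (revisions, effectivity, lifecycle states).

```python
# Hedged sketch of a product-memory harness: product relationships held as
# an explicit dependency graph. All item names are illustrative.

from collections import defaultdict, deque

class ProductMemory:
    def __init__(self):
        # item -> list of (dependent item, relationship kind)
        self.edges = defaultdict(list)

    def relate(self, item, dependent, kind):
        self.edges[item].append((dependent, kind))

    def impact_of(self, item):
        """Everything downstream of a change to `item` (breadth-first traversal)."""
        seen, queue = set(), deque([item])
        while queue:
            node = queue.popleft()
            for dep, _kind in self.edges[node]:
                if dep not in seen:
                    seen.add(dep)
                    queue.append(dep)
        return seen

pm = ProductMemory()
pm.relate("bolt-M6", "bracket-A12", "used_in")
pm.relate("bracket-A12", "assembly-7", "used_in")
pm.relate("bracket-A12", "supplier-Acme", "sourced_by")
pm.relate("assembly-7", "REQ-118", "satisfies")

# Which items, suppliers, and requirements does a change to bolt-M6 touch?
print(sorted(pm.impact_of("bolt-M6")))
# -> ['REQ-118', 'assembly-7', 'bracket-A12', 'supplier-Acme']
```

Because the relationships already exist in the graph, the answer to "what will a design change affect?" is a traversal, not an inference the model has to reconstruct from documents each time.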
Choosing the Right AI Harness for PLM
If we step back and look at the progression of these harness models, a pattern begins to emerge.
The file-centric harness is the easiest to adopt because it aligns with how many engineering teams already store information today. AI can immediately help interpret documents and drawings without requiring major organizational changes.
The workflow-centric harness provides stronger value because it connects AI to the processes that govern engineering work. This allows organizations to improve coordination and visibility across lifecycle activities.
The product memory-centric harness, however, offers the most transformative potential. By grounding AI in a persistent representation of the product and its lifecycle relationships, organizations create an environment where AI can truly understand the system engineers are building.
This is where AI moves beyond being a conversational interface and becomes an operational layer supporting engineering decisions.
What is my conclusion?
What is the future role of AI harnesses in engineering and PLM? Think about the last 30-40 years of PLM development: the underlying technology (databases) evolved and enabled many PLM capabilities. But once everyone was aligned on SQL, what differentiated PDM/PLM systems was the "harness" around the database. Then cloud technology arrived, introducing a new type of innovation, yet many existing PLM applications and platforms use the cloud simply as "someone else's computer," hosting what existed for 30+ years inside a new "harness."
The CAD and PLM industry is still at the beginning of this AI journey. While I can hear many questions about AI in engineering and PLM, I found the question about AI harness largely missing. At the same time, I believe it is one of the most important (and not the question about how to build AI-native PLM).
Most organizations experimenting with AI today are naturally starting with a tool-centric harness. We can see it across PLM products today: most have added an AI chatbot and some elements of AI. While they position themselves as "PLM intelligence," those harnesses are siloed by the very nature of PLM tools.
File-centric is an interesting one. This model is still easy to deploy and can potentially provide quick wins. There are many files in engineering environments these days, and it can be low-hanging fruit for vendors. One important element of the file-centric harness is that it allows teams to experiment with AI capabilities without redesigning their entire engineering infrastructure.
The natural progression is a workflow-centric harness. Still aligned with current engineering and manufacturing workflows, these types of harnesses can be a winning approach for optimizing collaboration.
But over time, the limitations of these approaches become more visible. As organizations attempt to scale AI across engineering, manufacturing, and supply chain processes, the need for deeper context becomes unavoidable.
That context lives in product memory. The organizations that build their AI harness around product memory will be able to connect intelligence directly to the product lifecycle. AI will not only analyze documents or assist in tools; it will help engineers navigate complex product relationships and lifecycle decisions.
And in the long run, that difference will matter far more than which model happened to perform slightly better in the latest benchmark.
Because in engineering, intelligence is not just about answering questions.
It is about understanding the product.
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights.
