The Leaked Code That Revealed the Next Lock-In Era
A few weeks ago, a packaging error pushed half a million lines of Anthropic’s internal code to a public registry. The code was taken down quickly, but not before developers had already pulled it apart. Most of the attention went to Claude Code itself — the security implications, the architectural choices, the modified forks spreading across the internet. But buried in that source code was something more interesting: an unannounced, always-on agent environment called Conway.
Conway is not a chat window. It is a persistent agent environment with its own extension format, browser control, connections to external tools, and the ability to be triggered by outside events. It watches how you work. It learns what you prioritize. Over time, it builds a model of how you operate — which messages matter, which decisions you make, which patterns repeat across your work. And it does this continuously, across every session, not just when you ask it something.
What struck me about Conway was not the technical architecture. It was the lock-in model underneath it.
Every previous generation of enterprise software locked customers in through something visible: a proprietary file format, a database schema, a workflow engine, a vault. You could point to it. You could measure the switching cost. You could, in principle, migrate away from it if the pain was high enough. Conway represents something different. The switching cost is not your data. It is the accumulated model of how you work — the behavioral context, the learned patterns, the organizational memory the agent has built by watching thousands of interactions over months. That does not export. There is no migration path for six months of compounding context. There is no CSV of how a person thinks.
That observation points at something much larger than Anthropic’s product strategy. It points at where enterprise AI lock-in is going — and why the industries with the most complex, long-lived, fragmented product knowledge are the most exposed. Industries like engineering and manufacturing. Industries built on PLM.
This is not a story about Conway. Conway is just the clearest early signal of a shift that has been building for a while. The real story is about what lock-in means when the asset being controlled is no longer a file or a schema, but the memory of how an organization thinks, decides, and builds products.
The CAD and PLM Lock-In Model We Grew Up With
If you have spent any time in the CAD and PLM industry, you know this story by heart. Enterprise software vendors have always understood that the real product is not the software itself. It is the switching cost.
The first generation of lock-in was about file formats. CAD vendors built proprietary geometry kernels and file formats that made interoperability painful by design. A CATIA file was not a SolidWorks file. A Pro/E model was not an NX model. You could attempt translation, and you would lose associativity, features, design history, and sometimes geometry itself. The file format was not just a technical choice. It was a business strategy. Once your engineering team had years of models in one format, the cost of leaving was measured not in license fees but in design data.
PDM systems deepened that lock by moving up the stack. It was no longer just the file. It was the vault, the revision structure, the metadata schema, the access control model, the check-in and check-out workflows that teams built their daily habits around. Migrating out of a PDM system meant not only converting files but reconstructing years of revision history, re-mapping relationships, and rebuilding integrations that had quietly become load-bearing infrastructure.
PLM took it further still. The switching cost moved from files and vaults to process itself. Lifecycle states, workflow definitions, change management logic, BOM structures, supplier relationships, configuration rules — all of it encoded inside a single vendor’s data model. The longer a company ran on a PLM platform, the more its operational logic became inseparable from that platform’s specific way of representing products and processes. At that point, migration was not a project. It was a multi-year program with serious business risk attached.
ERP completed the picture. Once procurement, finance, manufacturing, and supply chain were running through a single transaction model, the integration surface became so wide that changing any major system meant touching all the others. The switching cost was no longer about data or files at all. It was about organizational risk.
This is the world PLM professionals have navigated for three decades. It is a world where lock-in is real, visible, and well understood. You can point to it in a contract negotiation. You can quantify it in a migration estimate. You can build an architecture review around it.
What is coming next is harder to see, harder to measure, and potentially far more durable. Because the next lock-in is not about what the system stores. It is about what the system learns.
What CAD, PDM, and PLM Systems Never Captured: Engineering Judgment
For all the depth of lock-in that CAD, PDM, PLM, and ERP created, there was always a layer those systems could not reach. They captured what engineers produced. They never captured how engineers thought.
Consider what actually happens when an experienced engineer designs a product. The geometry that ends up in the CAD file is the final answer to a long sequence of decisions that are almost entirely invisible to the system. Why was this wall thickness chosen and not a thinner one? Because someone remembered a field failure from eight years ago on a similar design. Why did the team reject the more elegant configuration that showed up in the concept review? Because one engineer knew from experience that this particular supplier could not hold that tolerance consistently under production conditions. Why does this assembly have an unusual amount of clearance built into it? Because the person who designed it had lived through a thermal expansion problem on a previous program and was not going to make that mistake twice.
None of that is in the CAD file. None of it is in the PDM vault. Some of it might survive in a change request comment or a buried email thread, but in practice most of it lives in one place: the memory of the people who were in the room.
This is the knowledge that traditional enterprise systems were never designed to capture, and in many ways were never even trying to capture. Their job was to store the output of engineering work in a structured, retrievable, auditable form. That is genuinely valuable. But it is a different thing from preserving the reasoning behind the output.
The consequences of this gap show up most clearly when people leave. A senior engineer who retires or moves to a competitor does not just take their login credentials with them. They take years of accumulated judgment — the pattern recognition built from hundreds of decisions, the intuition about where risk hides, the memory of what was tried and abandoned and why. The CAD files stay behind. The engineering knowledge that made those files trustworthy often does not.
Companies have tried to address this in various ways. Documentation requirements, design rationale fields, lessons learned databases, knowledge management initiatives. Most of these efforts produce archives that nobody reads, because the knowledge they try to capture is contextual and relational in ways that structured fields cannot hold. You cannot write a database entry for the reason a bracket was overbuilt because the engineer who designed it had seen three similar designs fail in the field and did not trust the simulation to catch everything. That knowledge lives in a person, connected to a memory, shaped by an experience.
For thirty years, this was simply accepted as the nature of engineering work. The most valuable knowledge was personal. It walked in the door every morning and it could walk out the door permanently at any time. Software captured the artifacts. People carried the judgment.
That boundary is now beginning to move.
From CAD Format Lock-In to AI Decision Lock-In: The Intelligence Layer
The boundary between what systems capture and what people carry is shifting because of a new kind of software layer that did not exist in the PLM world until recently. Not a better database. Not a smarter search. Something structurally different: a persistent AI memory layer that accumulates context around work as it happens, and uses that context to build an increasingly detailed model of how decisions get made.
This is what Conway represents at the consumer level. But the same pattern is emerging across enterprise AI platforms more broadly. The systems being built today are not just storing records of what happened. They are watching how work unfolds — what alternatives were considered, what was rejected, what constraints shaped the outcome, how people responded when something went wrong — and they are retaining that context across time in ways that traditional systems never attempted.
In engineering and manufacturing, this matters more than in almost any other domain. Product decisions are not isolated events. They are part of long chains of reasoning that connect design intent to manufacturing constraints to supplier capabilities to service requirements to lessons learned from the field. An experienced engineer navigating a complex design problem is drawing on all of that simultaneously, mostly without being aware of it. The judgment looks intuitive because it has been internalized over years of exposure to exactly these kinds of tradeoffs.
A product memory layer does not replicate that judgment. But it can begin to capture the traces of it. It can observe that this team consistently adds margin when working with this supplier. It can register that designs involving this type of geometry tend to generate change requests at a specific stage of development. It can retain the context of why a particular configuration was rejected two years ago when a similar configuration comes up for consideration today. It can surface the pattern without the engineer having to remember it explicitly.
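To make the mechanism concrete, here is a deliberately simplified sketch of the core loop such a memory layer might run: record decision events together with the context that shaped them, then surface prior events whose context overlaps with a new situation. Every name here (`DecisionEvent`, `MemoryLayer`, the tag-overlap scoring) is a hypothetical illustration for this article, not a description of Conway or of any real product.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionEvent:
    """One observed decision, with the context that shaped it."""
    summary: str       # what was decided
    rationale: str     # why -- the part traditional systems lose
    tags: frozenset    # context: supplier, geometry type, program, ...

@dataclass
class MemoryLayer:
    """Accumulates decision events and surfaces overlapping prior context."""
    events: list = field(default_factory=list)

    def observe(self, event: DecisionEvent) -> None:
        self.events.append(event)

    def recall(self, context: frozenset, min_overlap: int = 2) -> list:
        """Return past decisions whose context shares tags with the new one,
        most relevant first."""
        scored = [(len(e.tags & context), e) for e in self.events]
        return [e for score, e in sorted(scored, key=lambda p: -p[0])
                if score >= min_overlap]

# Usage: two years later, a similar configuration comes up for review.
memory = MemoryLayer()
memory.observe(DecisionEvent(
    summary="Rejected snap-fit housing variant B",
    rationale="Supplier X could not hold the tolerance in production",
    tags=frozenset({"supplier-x", "snap-fit", "tight-tolerance"}),
))
hits = memory.recall(frozenset({"supplier-x", "snap-fit", "injection-molded"}))
print(hits[0].rationale)  # → "Supplier X could not hold the tolerance in production"
```

A real system would use learned embeddings rather than literal tag overlap, but the shape of the value is the same: the rationale resurfaces without anyone having to remember it explicitly, and the recall quality compounds with every event observed.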
This is a different kind of value from anything that CAD, PDM, or PLM has offered before. And it creates a different kind of enterprise AI lock-in.
The old lock-in was about controlling the artifact. A proprietary file format means your geometry is hard to move. A proprietary data model means your product structure is hard to migrate. These are real switching costs, but they are fundamentally about data portability. Given enough time, money, and effort, the data can be moved. Translation tools exist. Migration specialists exist. The problem is painful but tractable.
The new lock-in is about controlling the accumulated intelligence around the artifact. You can move your CAD files to a new platform. You can migrate your BOM structure and your change history. But you cannot easily move six years of observed decision patterns, learned tradeoffs, and contextual memory that a persistent AI layer has built by watching how your engineering organization actually works. That intelligence is not stored in a format that can be exported and imported somewhere else. It is embedded in the system that generated it, and it degrades significantly the moment you leave.
CAD lock-in made it hard to move the design. AI memory lock-in may make it hard to move the intelligence around the design.
That distinction is worth sitting with, because it changes the nature of the strategic question companies need to be asking. For thirty years the question was: who owns my data? The question now is: who owns the learned model of how my organization makes decisions? Those are very different questions, and most enterprise software contracts were written to answer only the first one.
The Two-Sided Lock: When Engineers Become Dependent on the Memory Layer
There is a dimension to this shift that goes beyond vendor strategy and enterprise architecture. It is more personal than that, and in some ways more uncomfortable.
For most of the history of engineering work, experience was fully portable. An engineer who spent ten years at one company and then moved to another took everything with them. The skills, the intuition, the pattern recognition built across hundreds of design decisions — all of it traveled in their head. The company lost something real when that person left. But the person lost nothing. Their capability was their own.
The AI memory layer changes that balance in a way we have not seen before.
Imagine an engineer who has worked inside a memory-rich environment for a decade. The system has been watching how they work since the beginning. It knows which manufacturing constraints they always check first. It remembers the supplier qualification issues they navigated three years ago and surfaces that context automatically when a similar situation arises. It retains the reasoning behind design choices that the engineer themselves might only half-remember. It connects current problems to historical outcomes across programs that the engineer worked on years apart. It has, in a meaningful sense, become an externalized layer of that engineer’s professional memory.
Working inside that environment, the engineer is not just competent. They are operating at a level that would be very difficult to reach without the memory substrate beneath them. The system is not doing their thinking. But it is continuously giving back accumulated context that makes their thinking faster, better informed, and more connected to the full history of the organization’s product experience. The person trains the environment, and the environment strengthens the person. That feedback loop compounds over time.
Now consider what happens when that engineer decides to leave.
They take their talent with them. They take their skills, their intuition, their years of hard-won judgment. All of that is genuinely theirs and genuinely portable. What they leave behind is the memory substrate that has been amplifying that talent inside this specific environment. At the new company, they start without it. Not because they have forgotten anything, but because the context that made them exceptionally effective here does not exist there. They are, in a real sense, more capable inside this environment than they will be anywhere else — at least until a new memory layer has had years to build around their work at the next place.
This is a new kind of lock-in, and it operates at the individual level rather than the organizational level. It does not coerce anyone. Nobody is prevented from leaving. But the relative value of staying increases over time in a way that has no real precedent in the history of enterprise software. Previous generations of platform lock-in created switching costs for companies. This one may also create switching costs for people.
The implications are worth thinking through carefully. For companies, a memory layer that captures decision intelligence represents genuine protection against knowledge loss. The problem of the retiring senior engineer taking thirty years of judgment out the door becomes at least partially addressable if the traces of that judgment have been accumulating in a persistent system over time. That is a real benefit and a reasonable thing to want.
But the same dynamic that protects the company also changes the nature of the employment relationship in ways that are not yet well understood. If the most productive version of an engineer exists only inside one company’s memory environment, what does that mean for how people think about mobility, career development, and the ownership of their own professional experience? These are not hypothetical questions. They are the natural consequence of building systems that intertwine individual effectiveness with institutional memory.
The question of who owns the captured traces of a person’s professional judgment — the company that provided the environment, the vendor that built the platform, or the individual whose decisions generated that knowledge — does not have a clear answer yet. But it is exactly the kind of question that tends to become very important once the technology is already widely deployed and the patterns are already locked in.
Why Engineering and Manufacturing Face the Greatest AI Lock-In Risk
Every industry will feel the effects of memory-based lock-in to some degree. But engineering and manufacturing face this shift with a particular vulnerability, for reasons that are structural rather than accidental.
The core problem is that product knowledge in these industries has always been extraordinarily fragmented. A single product — even a moderately complex one — generates knowledge that is distributed across CAD files, BOMs, requirements documents, change orders, supplier qualification records, quality event logs, manufacturing process sheets, service bulletins, ERP transactions, email threads, meeting notes, and the heads of the people who were involved in decisions that never made it into any system at all. Each of those repositories was built for a different purpose, owned by a different team, and structured around a different logic. None of them was designed to talk to the others in any deep sense.
PLM was supposed to address this. The promise of PLM, at its most ambitious, was a single connected thread of product knowledge spanning the entire lifecycle — from concept through design, manufacturing, service, and end of life. The Digital Thread. The Single Source of Truth. These were genuine aspirations, and PLM has delivered real value in moving toward them. But after decades of effort, the fragmentation has not been solved. It has been managed. The knowledge still lives in silos. The connections between silos are still largely manual, brittle, or incomplete.
This is exactly the landscape where an AI product memory layer becomes most powerful — and most strategically significant.
A persistent AI layer operating across engineering, procurement, manufacturing, and quality does not need the fragmentation problem to be solved before it can add value. It can work with the fragmentation as it exists, observing patterns across systems that were never formally integrated, building context from the intersections that no structured database was designed to capture. It sees that designs with a certain geometric characteristic tend to generate supplier non-conformances at a specific stage. It notices that change requests originating from one product line follow a different approval pattern than those from another, and that the difference correlates with outcomes. It retains the history of decisions made at the boundary between engineering and manufacturing where formal systems have always been weakest.
Over time, that accumulated cross-domain context becomes the most complete representation of how the organization actually builds products — more complete than any individual system, more connected than any integration layer, and more reflective of the real decision-making process than any formal workflow. It becomes, in effect, the organizational memory that PLM always aspired to be but could never quite reach, because the knowledge it needed to capture was too contextual, too relational, and too human to fit into structured fields.
That is a profound shift. And it means that the vendor or platform that owns the product memory layer in a manufacturing enterprise may eventually hold more strategic leverage than the CAD vendor, the PLM vendor, and the ERP vendor combined. Not because their software is better in any conventional sense, but because they own the accumulated intelligence that makes all the other software interpretable and useful.
There is an additional factor that makes engineering and manufacturing uniquely exposed. Product lifecycles in these industries are long. A commercial aircraft program runs for decades. An industrial equipment platform may be in production for thirty years and in service for fifty. The decisions made in the early stages of a program cast shadows across the entire lifecycle, and the reasoning behind those decisions becomes more valuable, not less, as time passes and the people who made them retire or move on. A product memory layer that has been accumulating context across a program of that duration is not just a useful tool. It is an irreplaceable institutional asset — and an irreplaceable source of leverage for whoever controls it.
This is why the question of who owns the product memory layer matters more in engineering and manufacturing than almost anywhere else. The stakes are higher, the timescales are longer, the knowledge is more fragmented to begin with, and the consequences of losing accumulated decision context are more severe. The industry has spent thirty years trying to solve the knowledge fragmentation problem through better data models and tighter integrations. The memory layer may be the first approach that actually reaches the layer where the most valuable knowledge lives. But reaching that layer also means that whoever controls it controls something genuinely new: not just a record of what was built, but a living model of how an organization learned to build it.
Conclusion: Who Owns the AI Memory Layer?
This is the most strategic question facing CAD and PLM over the next decade.
For thirty years, the strategic question in enterprise software was about data. Who stores it, who controls access to it, who can export it, and what happens to it when you change vendors. That question produced decades of contract negotiations, data portability clauses, migration projects, and architecture reviews. It is still a real question and still worth asking. But it is no longer the most important one.
The more important question for the next decade is about memory. Who owns the accumulated model of how your organization thinks, decides, and builds products? Who controls the layer that captures the traces of engineering judgment, the patterns of decision-making, the cross-domain context that forms across years of real product work? And what are the terms under which that memory can be accessed, exported, or taken somewhere else?
These questions do not have good answers yet, because the category is new enough that most enterprise software contracts were written before anyone understood what needed to be negotiated. The data portability frameworks that exist were designed for structured records — files, transactions, metadata. They were not designed for the derived intelligence that a persistent AI layer builds by watching how an organization operates over time. That intelligence is not a file. It is not a database table. It does not have an obvious export format, and there is no established standard for what portability even means in this context.
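The asymmetry between records and derived intelligence can be stated in a few lines of code. In this toy sketch (all names are hypothetical, chosen only for illustration), the raw event records round-trip cleanly through a standard format, while the learned associations the system builds by observing those events exist only as internal state, with no established interchange format for moving them to a competing platform.

```python
import json
from collections import Counter
from itertools import combinations

class ProductMemory:
    """Toy system: stores raw records and learns tag co-occurrence over time."""
    def __init__(self):
        self.records = []              # exportable: structured facts
        self.cooccurrence = Counter()  # derived: learned associations

    def observe(self, record: dict) -> None:
        self.records.append(record)
        for a, b in combinations(sorted(record["tags"]), 2):
            self.cooccurrence[(a, b)] += 1  # the "memory" compounds here

    def export_records(self) -> str:
        """Classic data portability: the stored facts round-trip as JSON."""
        return json.dumps(self.records)

mem = ProductMemory()
mem.observe({"id": 1, "tags": ["supplier-x", "tight-tolerance"]})
mem.observe({"id": 2, "tags": ["supplier-x", "tight-tolerance", "rework"]})

portable = mem.export_records()  # the data moves; a migration tool can read this
learned = mem.cooccurrence       # the association strength does not: no standard
                                 # defines what exporting it would even mean
print(learned[("supplier-x", "tight-tolerance")])  # → 2
```

In a production system the derived state would be far more opaque than a co-occurrence table, closer to model weights than to a data structure, which is exactly why the existing portability frameworks built for records and transactions do not reach it.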
This will need to change. Companies deploying AI-native platforms across their engineering and manufacturing operations should be asking their vendors specific questions before deployment, not after. What exactly is being retained by the memory layer? In what form does that memory exist? Under what conditions can it be exported? What happens to it if the contract ends? Who owns the derived intelligence that the system generates by observing how your engineers work? These are not edge case concerns for the legal team to worry about later. They are the central strategic questions of deploying this class of technology.
For the CAD and PLM industry specifically, the arrival of the AI memory layer raises a question about identity and positioning that the major vendors will need to answer. The traditional PLM value proposition was built around being the system of record — the authoritative source of product data across the lifecycle. If a memory layer operated by a different vendor, possibly an AI-native one with no heritage in product development, becomes the place where the most valuable product intelligence accumulates, then the system of record becomes a less central asset than it once was. The data is still there. But the intelligence that makes the data interpretable, contextual, and connected to real decision history lives somewhere else.
The CAD vendors, the PLM vendors, and the emerging AI platform vendors are all aware of this dynamic, even if most of them are not yet talking about it publicly in these terms. The race to own the memory layer in engineering and manufacturing has already started. It is just not yet visible in the way that the format wars of the 1990s or the cloud migration battles of the 2010s were visible, because the asset being competed for is harder to see and harder to measure than a file format or a data center.
What is clear is that the companies on the receiving end of this shift — the manufacturers, the engineering organizations, the product companies that will be running these systems — need to think about the AI memory layer with the same strategic seriousness they once brought to ERP selection or PLM consolidation. The consequences of getting it wrong are at least as significant, and potentially more durable, because what is being locked in this time is not a workflow or a data model. It is the organizational intelligence that took decades to build.
CAD locked your geometry. PLM locked your process. The AI memory layer may lock something closer to your organizational mind.
That is worth paying attention to now, while the architecture decisions are still being made and the contracts are still being written. Because if the history of enterprise software has taught us anything, it is that the lock-in you did not see coming is always the one that costs the most to undo.
