The last two years have reshaped the language of software development as AI entered our daily work. ChatGPT created the acceleration moment, triggering a surge of new AI technologies and companies.
It is coming to PLM too. Almost every major vendor now markets AI capabilities and features, and almost every startup claims its product is “AI-native” or “AI-enabled”. Dassault Systèmes highlights virtual companions and generative assistants. Siemens promotes copilots that navigate BOMs and product structures. PTC introduces agentic AI across the digital thread. Aras connects its PLM platform to Azure OpenAI and builds conversational search into Innovator. It is virtually impossible to ignore the “AI race”. These announcements from PLM vendors recognize the growing importance of intelligence and automation in engineering and manufacturing workflows. But who will deliver it? What will change the current PLM landscape, and perhaps challenge the decades-old dominance of established PLM vendors?
In yesterday’s article I discussed the critical elements of building PLM AI agents. Check it here – Building PLM Agents: Why Everyone Is Announcing AI and Why Almost Everyone Is Missing the Point.
I spent this weekend reviewing what the four dominant vendors are doing: what they have announced as available AI capabilities, the demos they show, their visionary articles, and what value customers can realistically expect. I checked their articles, videos, and the broader architecture behind each offering. While there are many articles and announcements, these are not applications you can simply “register and try”. Here is the conclusion I came to. The current wave of AI in PLM claims to improve usability, shorten the time needed to find answers and accelerate certain tasks inside each vendor’s platform. These improvements do matter. They deliver incremental productivity gains. They most probably remove friction in places where users previously struggled – complexity and search.
However, none of these systems address the deeper industry problem that has existed for decades. Product development is a distributed activity conducted across multiple systems, tools and companies. Data moves across PLM, ERP, MES, ALM, QMS, supplier portals and countless spreadsheets and documents. Decisions depend on information spread across these environments. The real bottleneck is the fragmentation of product knowledge across systems that do not understand each other. AI in its current form stays inside single PLM platforms and inherits their architectural boundaries. It solves the problem of accessing and interpreting data inside one system but does not engage with the complexity of distributed product information. As a result, the value remains incremental rather than transformative.
This article examines each vendor’s AI strategy, identifies what has changed compared to pre-AI PLM, and shows the measurable outcomes customers can expect. It also describes what I learned while comparing these approaches and why the biggest opportunity remains unaddressed.
Dassault Systèmes: AI-Driven Assistance Inside the Virtual Twin
Dassault Systèmes introduced AURA and a series of intelligent features embedded in 3DEXPERIENCE. AURA serves as a virtual companion that synthesizes information from PLM data, documents and community content. It can summarize discussions in 3DSwym, interpret documentation and provide guidance during design tasks. Dassault also added AI functionality to automate drawing generation, recognize fasteners in assemblies and generate intelligent annotations.
These capabilities help users work faster in environments that contain large amounts of historical content. Many Dassault customers have decades of CAD models, discussion threads, requirements documents and simulation artifacts. AURA brings this information into a conversational layer that understands context. It can reduce the time required to search across repositories and interpret content. The CAD-focused AI features also improve repetitive modeling tasks. These improvements are welcome for engineers who spend long hours performing detailed work.
Before AI, Dassault already offered comprehensive search and knowledge management capabilities, but they relied on traditional faceted search rather than natural language queries. AURA introduces a more intuitive interface and automates parts of the interpretation step. The underlying data remains the same, but the path to insights becomes faster.
The measurable outcomes are clear. Engineers save time, reduce rework caused by overlooked information and improve consistency in documentation. Regulated industries gain earlier visibility into compliance constraints. These results might justify AI pricing models tied to design productivity and knowledge assistance. However, the same “productivity gains” were advertised by Dassault Systèmes and other PLM vendors before. Now they are simply delivered in a more familiar, ChatGPT-like style.
While studying Dassault’s approach, I learned that its strengths come from the scale and integration of its platform. AURA will work well because it has access to a vast and structured data environment inside 3DEXPERIENCE. The limitation is inherent in this same architecture. The intelligence remains confined to Dassault’s ecosystem. It might not reach data stored in ERP systems, supplier systems, legacy repositories or external engineering tools. AURA improves the quality of work inside one platform but does not contribute to solving the cross-system fragmentation that dominates real-world product development.
Siemens PLM: Copilots that Simplify Access to Complex Product Structures
Siemens introduced Teamcenter BOM Copilot to address a persistent challenge in PLM. Engineers often struggle to navigate complex product structures, configuration models and large collections of documents. Teamcenter Copilot provides conversational access to this information. It can retrieve specifications, summarize documents, traverse BOM relationships and create working contexts based on natural language instructions. Siemens also introduced Design Copilot for NX, which helps users discover relevant features and best practices. Combined with image-to-part search capabilities, Siemens positions AI as a tool that enables easier discovery and reuse.
These capabilities address real pain points in large organizations. Teamcenter has always contained a rich representation of products, but usability often depended on expert knowledge. The new copilot layer reduces this dependency. It helps less experienced users access data, understand relationships and prepare product views for downstream tasks.
Compared to earlier Teamcenter experiences (e.g. Active Workspace), this might be a significant usability improvement. Before AI, engineers used search queries and navigated intricate trees of product relationships. Now they can ask natural questions and receive coherent summaries. This shift improves time-to-information and reduces onboarding effort.
The measurable outcomes include faster retrieval of data, increased reuse of existing parts, reduced dependency on PLM specialists and improved consistency in the preparation of manufacturing and service contexts. Siemens also differentiates itself by offering multi-cloud deployment options and on-premise model hosting, which is important for industries with strict security requirements.
What I learned from Siemens’ approach is that it is focused, practical and grounded in helping users navigate the data Siemens already manages well. The copilots do not attempt to extend beyond Teamcenter. They do not ingest data from other PLM tools or external systems that influence product decisions. They do not address the challenges created when engineering, manufacturing and supply chain systems struggle to exchange information. The improvements remain confined to the boundaries of one platform.
PTC: Agentic AI Across the Digital Thread
PTC presents the most ambitious AI narrative among the major PLM vendors. It positions its AI strategy around agentic intelligence that spans multiple systems in the product lifecycle. PTC describes a progression from guidance to assistance and, eventually, autonomous execution of workflows. It aims to combine data from Windchill, Creo, Codebeamer, and ServiceMax into a unified product data foundation that supports intelligent behavior.
The first concrete example is the Document Vault AI agent in Windchill. It extracts information from specifications, quality documents, test reports and related data. For companies that depend heavily on documentation, this feature reduces the time required to interpret complex content. PTC also highlights emerging agent use cases in ALM and service management, where AI can assist with requirement alignment or service diagnostics.
I can see why PTC is focusing on this story. While Creo and Windchill have been well connected for decades, Codebeamer and ServiceMax are newly acquired products, each with a separate data platform. PTC aims to let agents interpret unstructured content in multiple systems and then analyze, advise and execute orchestrated workflows. This creates potential for higher levels of automation in engineering change management, compliance processes and service operations. But most of this is still in the planning stage. Today, what you can actually see is the Windchill Document Vault copilot helping with document discovery.
The measurable outcomes include reduced manual review cycles, improved quality consistency and shorter turnaround times for change requests. If PTC succeeds in developing autonomous workflow capabilities, it could reduce operational friction across the lifecycle.
After examining PTC’s approach, I learned that its vision is stronger than its current capabilities. The idea of agents acting across domains is compelling, but the agents still depend on PTC’s internal data structures. Semantic connectivity refers to relationships inside PTC tools. The AI does not interpret data held in ERP, MES, supplier portals or competitor PLM systems. The result remains a closed loop within the PTC ecosystem. It provides value where PTC has strong presence but cannot address the broader fragmentation of data across the supply chain.
Aras: Azure-Native Intelligence for a Flexible PLM Model
Aras positions its AI features as an extension of its flexible model-based architecture. AI-Assisted Search and the Aras Intelligent Assistant allow users to interact with Innovator using natural language. These features rely on Azure OpenAI and integrate into the Microsoft ecosystem. Aras emphasizes extensibility through Copilot Studio and Microsoft Fabric.
The value proposition is centered on accessibility. Innovator gives companies the flexibility to design their own data models and workflows, but this flexibility can introduce complexity for casual users. The conversational interface reduces the learning curve by interpreting user intent and retrieving relevant objects or documentation. The integration with Azure services also appeals to IT organizations that prefer standardized cloud infrastructure.
Compared to the earlier Innovator user experience, the improvement is meaningful. Users who previously struggled with the intricacies of the data model can interact more intuitively. The gains are incremental but useful for organizations that rely on Innovator as a daily operational tool.
The measurable outcomes include faster access to information, reduced support requests and better adoption among occasional users. Aras can monetize these improvements as AI modules attached to its SaaS offering.
What I learned is that Aras maintains its architectural philosophy. AI is treated as a modular service within a broader technology stack. The limitation is that the AI does not extend beyond the Innovator environment. It interacts only with the data Aras manages. Like the other vendors, it does not influence upstream or downstream systems that participate in product decisions.
Comparison Table To Summarize PLM AI
To frame the rest of the discussion, it is helpful to compare the four major vendors side by side. Each presents its AI strategy differently, yet the patterns become clear when the offerings are distilled into their core focus, underlying technology direction and the unique angle each vendor uses to differentiate itself from the others. The table below summarizes these points at a high level and sets the stage for a deeper examination of their capabilities.
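| Vendor | Core focus | Technology direction | Unique angle |
| --- | --- | --- | --- |
| Dassault Systèmes | AURA virtual companion, drawing automation, intelligent annotations | AI embedded in 3DEXPERIENCE and the virtual twin | Scale and integration of platform data |
| Siemens | Teamcenter BOM Copilot, NX Design Copilot, image-to-part search | Copilots over Teamcenter structures and NX | Multi-cloud deployment and on-premise model hosting |
| PTC | Document Vault agent in Windchill, emerging ALM and service agents | Agentic AI across Windchill, Creo, Codebeamer and ServiceMax | Digital-thread vision toward autonomous workflows |
| Aras | AI-Assisted Search, Aras Intelligent Assistant | Azure OpenAI, Copilot Studio, Microsoft Fabric | Modular AI on a flexible, model-based platform |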

What All Vendors Have in Common: AI That Stays Inside Their Walls
Across all four vendors the AI strategies share a consistent pattern. Each vendor uses AI to help users navigate the complexity of data stored in its own platform. Dassault Systèmes focuses on CAD productivity and internal knowledge extraction. Siemens optimizes access to PLM BOM structures and associated documentation. PTC provides document understanding and early workflow automation inside its digital thread. Aras offers conversational search over its flexible data model.
The improvements are real and meaningful. They can reduce friction, accelerate workflows and reveal insights that previously required significant manual effort. They also follow a similar structure. The AI interprets the data the vendor already controls. It operates within the boundaries of a single semantic environment. It does not cross into external systems that participate in product lifecycle processes.
The measurable outcomes are familiar. Faster engineering cycles, improved consistency, increased reuse, shorter onboarding time and more efficient documentation handling. These results align with value propositions PLM vendors have promoted for years. AI enhances the path to these outcomes but does not introduce fundamentally new categories of business value.
The Real Problem Remains: Data Complexity Across a Distributed Industry
The biggest challenge in product development today is the distribution of product data across multiple systems and organizations. Companies rely on assemblies of tools that store overlapping and interconnected information. PLM, ERP, MES, QMS, ALM and supplier systems each contain parts of the product truth. None of the AI capabilities presented by PLM vendors operate across this distributed environment. As a result, they cannot address the most costly and persistent sources of engineering risk.
Consider the relationship between engineering and manufacturing. Many companies maintain EBOMs in PLM, MBOMs in ERP and BOPs in MES. Engineering updates a revision in the EBOM. Manufacturing workflows continue operating with outdated MBOM / BOP structures (I wrote about it in my earlier article – From PLM to xLM). A routing step references an earlier version of a part. A supplier order uses an obsolete component. The discrepancy often surfaces only after materials arrive on the shop floor or quality inspections detect inconsistencies. An AI system confined to PLM does not have visibility into the manufacturing environment. It cannot detect that the change was not propagated or that downstream processes are misaligned.
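To make this gap concrete, here is a minimal sketch of the kind of cross-system check that would catch this divergence early. Everything in it is hypothetical – the endpoint URLs, paths and field names are invented for illustration and do not correspond to any vendor API – but the structure shows the point: the check needs read access to both PLM and ERP, which is exactly what an AI assistant confined to one platform does not have.

```python
# A minimal illustration (not a vendor API): detecting an EBOM/MBOM revision
# mismatch requires reading from BOTH the PLM and ERP systems. An AI assistant
# confined to the PLM database can only see the left-hand side of this check.
# All endpoint paths and field names below are hypothetical.

import requests

PLM_API = "https://plm.example.com/api"   # hypothetical PLM endpoint
ERP_API = "https://erp.example.com/api"   # hypothetical ERP endpoint

def released_ebom_revision(part_number: str) -> str:
    """Latest released revision of a part in the engineering BOM (PLM)."""
    r = requests.get(f"{PLM_API}/parts/{part_number}/revisions",
                     params={"state": "released"})
    r.raise_for_status()
    return r.json()["latest"]

def mbom_revision_in_use(part_number: str) -> str:
    """Revision of the same part currently referenced by the manufacturing BOM (ERP)."""
    r = requests.get(f"{ERP_API}/mbom/items/{part_number}")
    r.raise_for_status()
    return r.json()["revision"]

def check_propagation(part_number: str) -> None:
    ebom_rev = released_ebom_revision(part_number)
    mbom_rev = mbom_revision_in_use(part_number)
    if ebom_rev != mbom_rev:
        # This is the divergence that today surfaces on the shop floor or in
        # quality inspection - and that single-system AI cannot see.
        print(f"{part_number}: EBOM at rev {ebom_rev}, MBOM still at rev {mbom_rev}")

if __name__ == "__main__":
    check_propagation("PN-1042")
```

The interesting part is not the code, which is trivial, but the access it assumes: two systems, two data models, one question.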
Early-stage risk mitigation suffers from similar issues. During concept development engineers make decisions about components, materials and architectures without full visibility into supplier capabilities, manufacturing constraints or compliance requirements. If a chosen part has a long lead time in the procurement system or if a material violates regulatory constraints stored in a compliance database, the conflict emerges late and causes avoidable delays. AI inside PLM cannot detect these patterns because the information resides in other systems.
Cross-domain complexity introduces additional challenges. Products that combine mechanical, electrical and software elements produce data across MCAD, ECAD, ALM and simulation environments. A change in a PCB design may influence thermal behavior in mechanical enclosures. A firmware requirement may depend on hardware timing characteristics. Without a well-integrated, multi-disciplinary data foundation that understands relationships across these domains, inconsistencies remain hidden until integration phases. Current PLM AI features focus only on the data managed by the PLM system; if ECAD data lives in a separate vault, it will not be analyzed unless it is imported into the vendor’s PLM environment.
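As a simplified illustration of what cross-domain impact analysis could look like, here is a small sketch that represents ECAD, MCAD, simulation, ALM and firmware artifacts as nodes in one dependency graph and traces the downstream effects of a change. All item names and relationships are invented for the example; the point is structural – the traversal only works if relationships across domains are captured in one place, which is precisely what single-system PLM AI does not see.

```python
# A simplified, hypothetical sketch of cross-domain impact analysis. Once
# relationships between ECAD, MCAD, simulation, ALM and firmware artifacts are
# captured in one graph, a change in one domain can be traced to the items it
# affects in the others. All items and edges below are invented examples.

from collections import deque

# "affects" edges across domains: item -> items downstream of it
DEPENDENCIES = {
    "ECAD:pcb-main-rev-C":   ["MCAD:enclosure-top", "SIM:thermal-model-v7"],
    "MCAD:enclosure-top":    ["MCAD:assembly-housing"],
    "SIM:thermal-model-v7":  [],
    "ALM:req-timing-1138":   ["FW:boot-sequence", "ECAD:pcb-main-rev-C"],
    "FW:boot-sequence":      [],
    "MCAD:assembly-housing": [],
}

def impacted_items(changed_item: str) -> set[str]:
    """Breadth-first traversal to find everything downstream of a change."""
    seen, queue = set(), deque([changed_item])
    while queue:
        current = queue.popleft()
        for downstream in DEPENDENCIES.get(current, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen

if __name__ == "__main__":
    # A PCB change ripples into mechanical and simulation artifacts -
    # invisible to an AI that only sees the data inside one PLM vault.
    for item in sorted(impacted_items("ECAD:pcb-main-rev-C")):
        print(item)
```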
Supply chain fragmentation increases the scale of the challenge. Most manufacturers depend on networks of suppliers and contract manufacturers. Each partner uses its own tools. Engineering changes propagate through email attachments or PDF documentation. When suppliers update material specifications or manufacturing limitations, the information often remains siloed. AI in PLM does not ingest supplier data automatically. It cannot reconcile design changes with real supplier constraints.
The same structural issue emerges during new product introduction. Engineering releases an EBOM to manufacturing. The manufacturing team generates an MBOM and defines routing instructions. When engineering updates a part later, the change often fails to propagate to the MBOM or to routing definitions in MES. Production teams remain unaware of the update. AI restricted to the PLM database cannot detect this divergence because the manufacturing data is not accessible.
Risk mitigation also remains incomplete without cross-system intelligence. Potential issues such as incompatible materials, capacity bottlenecks, regulatory restrictions, quality trends and service field failures require data from many systems. None of the PLM AI features described by vendors can correlate these signals because they do not operate across systems.
These examples highlight the same underlying issue. Product development is a distributed digital ecosystem, not a single database. The information needed to make correct decisions spans many organizations. AI built inside one PLM system cannot interpret relationships that live beyond its own boundaries. It cannot reason about the larger digital thread because the thread itself is an abstraction that crosses disconnected architectures. The AI can improve interpretation of information inside its local environment but cannot address the structural fragmentation that causes delays, rework and systemic risk.
The digital thread was originally envisioned as a consistent, connected representation of product information across the lifecycle. In practice, each vendor maintains its own version of the thread. These threads do not interconnect. A true digital thread would traverse PLM, ERP, MES, QMS, ALM, ECAD, simulation tools and supplier systems. It would reconcile differences in data models and map relationships across tools. None of the current AI approaches attempt to build such a shared understanding (based on the information I have seen, with the possible exception of PTC’s ideas, which remain limited to PTC’s own systems).
For this reason, AI in PLM delivers incremental gains but not structural transformation. It improves efficiency inside the vendor’s platform, but it does not solve the complexity of distributed product information across multiple companies. Until AI can operate across systems with different data models and across organizations with different processes, the industry will continue to rely on manual reconciliation, spreadsheets, email and tribal knowledge.
What I Learned Through This Analysis
My primary learning is that AI in PLM today is not addressing the core architectural limitations of PLM. Vendors present AI as a path to intelligence, but the intelligence remains tightly coupled to their internal data structures. The improvements are meaningful but do not extend the scope of what PLM can achieve.
I also think that PLM data complexity remains the central obstacle. The models are rich and detailed, but they isolate knowledge rather than federate it. AI provides an interface that helps users manage this complexity. It does not simplify the data models or create a unified representation across tools.
I think that measurable outcomes are achievable. Companies will see productivity gains, shorter onboarding times, less rework and better interpretation of existing data. These results justify AI investment at the level of individual departments but do not change the nature of cross-company collaboration.
Finally, I think that the biggest opportunity in PLM remains unaddressed. The future requires product intelligence that spans systems, companies and domains. It requires a product memory that unifies information without forcing a single system to control it. It requires workflows that adapt dynamically to changes across the lifecycle, not only within one platform.
None of the vendors are moving in this direction yet. They enhance their platforms and enrich their tools with AI, but they do not break the architectural constraints that limit PLM. The potential for real transformation lies beyond the boundaries of individual systems.
Conclusion: AI in PLM Improves Access to Data but Does Not Solve the Hard Problem
AI has brought visible improvements to PLM platforms. It accelerates navigation, supports interpretation of documents, enhances CAD productivity and reduces friction for everyday tasks. These improvements matter. They deliver measurable value for engineering and manufacturing teams. Vendors will monetize this value through AI subscriptions, workflow accelerators and productivity copilots.
The fundamental problem, however, remains unsolved. Product development is a distributed, multi-system and multi-company process. The information required to make correct decisions is scattered across tools and organizations. AI confined to a single PLM platform cannot address this complexity. It helps users navigate internal data but does not provide a unified view of the product across the entire value chain.
The next phase of PLM evolution must focus on intelligence that spans systems, aligns semantics across data models and builds a coherent digital thread across companies. Until that shift occurs, AI in PLM will continue to refine existing workflows without unlocking the larger transformation that modern manufacturing requires.
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools including PDM, PLM, and ERP capabilities. Interested in the OpenBOM AI Beta? Reach out to me to discuss the future of Agentic Engineering Workflows.
With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinion can be unintentionally biased.
