The long July 4th weekend in the US is the perfect time to catch up on some strategic reading. AI is one of the topics that’s everywhere these days, so I took some time to explore what leading analyst firms are recommending.
Last week, I shared my thoughts about Software 3.0 and what it means for PLM development in my article PLM 2035 and Software 3.0 – What Does It Mean for Product Lifecycle Management Development? As I continue to explore new AI application architectures for PLM, I want to look at another emerging concept: Agentic Mesh. In particular, I want to focus on recent McKinsey (QuantumBlack) research – Seizing the Agentic AI Advantage. It is advertised as “A CEO playbook to solve the gen AI paradox and unlock scalable impact with AI agents”.
First, what is the gen AI paradox? If you have never heard of it, here is the kicker: nearly eight in ten companies report using gen AI, yet just as many report no significant bottom-line impact. That is the paradox. As the article states, “Gen AI is everywhere—except in company P&L”. So, let’s talk about it.
I found it interesting that the McKinsey publication specifically addresses CEOs, while Karpathy’s ideas are aimed at developers, focusing on how LLMs and so-called “Software 3.0” appeal to a developer audience.
I will compare these visions later in a separate article. For now, I want to share my reflections about the Agentic Mesh strategic vision and how it connects to PLM, digital thread, and the future of AI-enabled engineering and manufacturing.
Mesh Architectures and AI Agents: Decentralized Intelligence
In his recent commentary, David Linthicum described AI agent mesh architecture as an emerging paradigm where multiple autonomous agents are interconnected in a decentralized, tightly coordinated network. Imagine it like a mesh Wi-Fi system: each agent (or node) collaborates to share workloads and maintain continuous, adaptive performance.
In practical enterprise terms, each agent can either handle similar tasks or play a specialized role – such as supply chain forecasting, design review or validation, engineering change approval, cost analysis, or finding alternative components. This architecture promises greater scalability, resilience, and adaptability compared to traditional centralized AI deployments.
However, Linthicum also cautions that mesh architectures come with significant complexity and security challenges. They require thoughtful orchestration, robust governance frameworks, and often cost 3-4 times more than simpler centralized approaches. As always in enterprise IT, the real question is whether the business value justifies the architectural sophistication.
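To make the mesh analogy above more concrete, here is a minimal sketch in Python. All class and method names (`MeshNode`, `AgentMesh`, `route`) are hypothetical illustrations, not a real framework: each agent node tracks its current load, and tasks route to the least-loaded peer, which is a simple stand-in for how a mesh adapts as nodes become busy.

```python
# Hypothetical sketch of the "mesh Wi-Fi" analogy for an AI agent mesh:
# each node advertises its current load, and new tasks route to the
# least-loaded peer so work spreads adaptively across the network.

class MeshNode:
    def __init__(self, name: str):
        self.name = name
        self.load = 0  # tasks currently assigned to this node

    def accept(self, task: str) -> str:
        self.load += 1
        return f"{self.name} <- {task}"

class AgentMesh:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def route(self, task: str) -> str:
        # Decentralized-style routing: pick the least-loaded node.
        node = min(self.nodes, key=lambda n: n.load)
        return node.accept(task)

mesh = AgentMesh([MeshNode("forecasting"), MeshNode("design-review"), MeshNode("costing")])
assignments = [mesh.route(f"task-{i}") for i in range(6)]
loads = [n.load for n in mesh.nodes]  # six tasks spread evenly, two per node
```

A real mesh would add failure detection, security, and peer-to-peer coordination instead of a shared router, which is exactly where the complexity and cost concerns below come from.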
7 Points I Captured from McKinsey’s Agentic Mesh Vision
I moved forward to check the article by McKinsey – you can download it by creating a McKinsey website account, which I did. The article offers a CEO playbook for moving beyond isolated AI co-pilots toward scalable agent-based architectures.

Here is an interesting perspective on horizontal and vertical AI agents shared by McKinsey.

Here are the seven key points I captured:
Blending Custom and Off-the-Shelf Agents. McKinsey argues that off-the-shelf agents can streamline routine workflows, but true competitive advantage will come from custom-built agents aligned with unique company data and logic. In PLM, this implies building agents embedded deeply into design, sourcing, and production workflows rather than generic copilots.
Vendor Neutrality and Open Protocols. They emphasize vendor neutrality through open protocols like MCP and Agent2Agent (A2A) to avoid lock-in. This resonates strongly with PLM, where proprietary integrations still dominate and limit scalability.
The Seven Capabilities of the Agentic Mesh. These include agent and workflow discovery, AI asset registries, observability, authentication and authorization, evaluation systems, feedback management, and compliance/risk frameworks. For PLM, similar capabilities will be essential to govern AI agents collaborating across engineering, procurement, and manufacturing.
Shift from AI-Augmented to Agent-Native Enterprises. McKinsey highlights how Microsoft, Salesforce, and SAP are embedding agents natively into their enterprise platforms. For PLM, this implies moving from AI-enhanced features to agent-based workflows that actively plan, optimize, and execute tasks autonomously.
The Main Challenge is Human, Not Technical. Organizational readiness, trust models, and governance frameworks will be bigger barriers than technology itself. How do engineering and procurement teams collaborate with autonomous agents? This question will define future PLM deployments, and it resonated strongly with the observations I captured last month during the Share PLM Summit 2025 (check my articles What I Learned at Share PLM Summit and My 5 Principles of Building Human-Centric PLM in 2025).
Scalable Multi-agent Orchestration. Successful agentic architectures require orchestrating multiple agents efficiently at scale. In PLM, this could mean a network of agents working across design review, BOM coordination and analysis, procurement planning, compliance validation, EBOM to MBOM transformation, and field service optimization.
From Siloed AI Teams to Cross-Functional Squads. Finally, McKinsey stresses that AI initiatives must integrate business domain experts, process architects, AI engineers, and IT in transformation squads. For PLM, AI cannot remain a “digital experiment” but must embed directly into lifecycle processes.
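Two of the capabilities above, agent discovery via an AI asset registry and basic observability, can be sketched in a few lines of Python. Everything here (`PLMAgent`, `AgentRegistry`, the capability names) is an illustrative assumption of mine, not an API from the McKinsey report or any real product:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch of two "agentic mesh" capabilities: an AI asset
# registry (agents announce capabilities; workflows discover them) and a
# minimal audit log for observability. Names are illustrative only.

@dataclass
class PLMAgent:
    name: str
    capability: str                  # e.g. "bom_analysis", "compliance_check"
    handler: Callable[[dict], dict]  # the agent's task logic

class AgentRegistry:
    def __init__(self):
        self._agents: Dict[str, List[PLMAgent]] = {}
        self.audit_log: List[str] = []  # who handled what, for observability

    def register(self, agent: PLMAgent) -> None:
        self._agents.setdefault(agent.capability, []).append(agent)

    def dispatch(self, capability: str, task: dict) -> dict:
        agents = self._agents.get(capability)
        if not agents:
            raise LookupError(f"no agent registered for '{capability}'")
        agent = agents[0]  # naive routing; a real mesh would load-balance
        self.audit_log.append(f"{agent.name} handled {capability}")
        return agent.handler(task)

# Usage: an engineering-change workflow chains two specialized agents.
registry = AgentRegistry()
registry.register(PLMAgent("bom-bot", "bom_analysis",
                           lambda t: {**t, "bom_delta": ["item-42 qty 2->3"]}))
registry.register(PLMAgent("compliance-bot", "compliance_check",
                           lambda t: {**t, "compliant": "item-42" not in t.get("restricted", [])}))

task = {"eco": "ECO-1001", "restricted": []}
task = registry.dispatch("bom_analysis", task)
task = registry.dispatch("compliance_check", task)
```

The other capabilities McKinsey lists (authentication, evaluation, feedback, compliance frameworks) would wrap around this same dispatch path, which is why governance, not agent logic, tends to dominate the engineering effort.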
My Thoughts: Agentic Mesh and PLM
Here are my reflections on how the Agentic Mesh strategy connects to PLM’s emerging development:
1. Digital thread, DTaaS, and distributed processes. The concept aligns strongly with Digital Thread as a Service (DTaaS), which requires distributed, collaborative processes across engineering and manufacturing ecosystems.
2. Adding new technology to solution complexity. While promising, mesh architectures introduce another layer of technical and organizational complexity. Companies must evaluate whether current maturity justifies adopting such architectures today. PLM is already very complex, so moving it beyond current level of complexity would be risky.
3. Emergence of vertical AI co-pilots and agents. We are already seeing specialized AI agents slowly emerging for specific tasks and verticals – from design reviews to BOM analysis and procurement optimizers. This is the near-term path before decentralized agent meshes take shape.
4. DTaaS and data openness as a strong starting point. To enable any agentic strategy, data openness, standardization, and access layers are foundational. Without them, even the most sophisticated mesh architecture will fail to deliver value.
5. Agents and Mesh will come after setting up data, engineering, and enterprise workflows. While the vision is compelling, I believe practical PLM AI strategies today should focus on enabling DTaaS, vertical copilots, and data-centric infrastructure. Mesh architectures will evolve as the ecosystem matures.
What is my conclusion?
What could a vision of “agentic PLM” look like? I think McKinsey provides a framework that can be valuable for CEOs, helping them understand the future of DTaaS and the new federated PLM thinking that moves away from the existing “monolithic PLM” paradigm.
I found the concept of an agentic mesh relevant to a modern vision of PLM, especially as we envision future digital thread development, composable engineering workflows, and open PLM ecosystems. It connects to many topics I’ve discussed before – AI, data openness, agent protocols like MCP, and the evolution of PLM architectures.
However, it is still too early for adoption. DTaaS enhanced with advanced workflows (not yet agentic) and data openness remains the essential foundation upon which agentic mesh strategies can eventually be built.
I will continue exploring how these emerging AI paradigms integrate into PLM and will share further reflections as the ecosystem evolves.
Just my thoughts…
Best, Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools, including PDM, PLM, and ERP capabilities. With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinions can be unintentionally biased.