Continuing the topic I started last week – how to make AI work for PLM – today I want to talk about why you shouldn’t add an “AI” label without putting proper planning in place.
In the early 2010s, every tech conversation about cloud product lifecycle management (PLM) and product data management seemed to orbit around the phrase “true cloud.” Vendors slapped the label “true cloud” on everything from hosted servers to multi-tenant SaaS, and customers had to decode what they were actually buying.
Over time, we learned that “cloud” wasn’t a single thing – it was a set of architectural and operational characteristics. There are many “cloud” technologies, and they can be applied differently across manufacturing business processes. For some businesses, only the “true cloud” configuration – multi-tenant, browser-based, instantly upgradeable – delivered real value. Others found perfectly valid gains from hosted systems paired with new business models and vendor-managed upgrades. Some companies needed a PLM system whose product lifecycle demanded a special level of security or specific functions such as document management, service lifecycle management, and product quality management. Without those business functions, even a “true cloud” PLM system won’t do much.
And some “cloud” projects failed completely, not because the cloud was “bad,” but because the solutions didn’t solve enough meaningful business problems to justify the cost and change.
Fast-forward 15 years, and here we are again, except the buzzword has changed. Now it’s AI. The pattern is the same: vendors stamp the AI label on everything, executives rush to “do something with AI,” and the market chases the newest large language model – GPT-5, Gemini, Claude – looking for the best model or broadly applying the “AI” buzzword to any solution without providing much detail. I have already heard the term “Native AI PLM” and expect to see many more examples (I will talk about these “AI flavors” in my next article).
Another trend I see is related to customer expectations. Everyone is “looking for AI.” Whatever problem a customer has – data transformation, supply chain management, the product development process, etc. – the requests come: “can we add AI,” “when will you have AI,” or “do you have AI to solve the problem.” I was recently interviewed by a company desperately “looking for AI” to solve their BOM management problem. Their problems were real, but the request “to have an AI solve them” was far from reality.
The problem? AI success doesn’t come from just having the smartest model or creating AI agents – it comes from the right combination of technology, data, business alignment, and execution discipline.
Based on the last two decades of watching data management technologies introduced to the market (search, semantic web, cloud, big data, etc.), and how those projects succeeded and failed, I wanted to come up with 10 (plus one bonus) pitfalls that even the most advanced LLM or swarm of AI agents can’t solve for you.
The Magic Wand Myth: Data Isn’t a Fixable Afterthought
Why it matters: AI is only as good as the data it’s given. If your data is scattered, inconsistent, mislabeled, or missing key context, the AI will amplify those flaws.
Poorly structured data forces AI models to guess intent or meaning, which leads to hallucinations, irrelevant results, and wrong conclusions. Clean, well-labeled, and semantically rich data gives AI the raw material to produce useful, accurate output.
No amount of model power—or larger context windows—will “magically” fix data that was never designed to be machine-interpretable.
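To make this concrete, here is a minimal sketch of a pre-flight data audit that flags the kinds of flaws AI will otherwise amplify. The field names (part_number, qty, uom) are hypothetical examples, not a reference to any specific system:

```python
# Minimal pre-flight check for BOM data before it reaches any AI pipeline.
# Field names ("part_number", "qty", "uom") are illustrative assumptions.

def audit_bom(rows):
    """Return a list of (row_index, issue) tuples found in BOM rows."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        pn = (row.get("part_number") or "").strip()
        if not pn:
            issues.append((i, "missing part_number"))
        elif pn in seen:
            issues.append((i, f"duplicate part_number {pn}"))
        else:
            seen.add(pn)
        qty = row.get("qty")
        if not isinstance(qty, (int, float)) or qty <= 0:
            issues.append((i, "invalid qty"))
        if not row.get("uom"):
            issues.append((i, "missing unit of measure"))
    return issues

rows = [
    {"part_number": "P-100", "qty": 2, "uom": "ea"},
    {"part_number": "P-100", "qty": 1, "uom": "ea"},  # duplicate part number
    {"part_number": "", "qty": -1, "uom": None},      # several problems at once
]
problems = audit_bom(rows)
```

Running a check like this before any AI pipeline turns “garbage in” into an explicit, fixable list instead of silent hallucinations downstream.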
Not Every Task Needs the Best Model
Why it matters: Bigger models cost more, sometimes exponentially more, and can introduce unnecessary complexity or latency.
Many AI wins come from applying the simplest effective method. Parsing an Excel BOM or invoice PDFs? That’s a lightweight extraction problem, not a multi-billion-parameter reasoning challenge. Using the wrong tool inflates costs, slows performance, and complicates deployment.
The discipline is in matching the task to the right model or even to non-AI automation, so that you optimize both accuracy and ROI.
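As an illustration of the “simplest effective method,” a few lines of deterministic code can handle a structured extraction task with no model at all. The invoice line format below is a made-up example:

```python
# Lightweight, deterministic extraction: pulling line items from invoice text
# with a regular expression instead of a large language model.
# The invoice layout (SKU, quantity, price) is a hypothetical example.
import re

LINE = re.compile(r"^(?P<sku>[A-Z]{2}-\d{4})\s+(?P<qty>\d+)\s+\$(?P<price>\d+\.\d{2})$")

def parse_invoice(text):
    items = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m:
            items.append({"sku": m["sku"],
                          "qty": int(m["qty"]),
                          "price": float(m["price"])})
    return items

invoice = """
Invoice #1042
AB-1001  3  $19.99
CD-2044  1  $250.00
Thank you for your business
"""
items = parse_invoice(invoice)
```

When the input format is stable, this approach is faster, cheaper, and fully auditable; an LLM earns its cost only when layouts vary unpredictably.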
Vague or Shifting Objectives Kill AI Success
Why it matters: AI projects are inherently iterative and resource-intensive. Without a stable, measurable target, teams end up chasing moving goals or “exploring” endlessly without delivering value. Think about the specific goals of your AI project – automating a repetitive manual product development process, reducing labor-intensive tasks in supplier collaboration, or catching mistakes to deliver enhanced product quality. Solving a concrete problem delivers results.
When objectives shift mid-project, it resets learning cycles, undermines user trust, and burns budget. Clear KPIs, aligned with real business impact, provide the stability and focus needed to navigate the messy, experimental nature of AI work.
AI Strategy Must Be Business Strategy
Why it matters: Treating AI as a side experiment divorced from business priorities almost guarantees it will be marginalized or cut when budgets tighten. This is true for everything, but especially important during the hype cycle we are living in now. Aligning AI with real business process problems related to engineering data or PLM technology is key. If you have dirty product-related data, or business systems that are out of sync and require finding duplicates or automating synchronization, those are examples of real “problems.”
AI isn’t just a technology play – it’s a capability that can transform how a company operates, serves customers, and competes. Figuring out how to cut development costs, make supply chain collaboration faster, or shorten the development cycle – these are part of business strategy and goals. When AI goals are integrated into core business strategy, it becomes easier to prioritize funding, justify resource allocation, and measure returns in terms leadership actually cares about.
Beware Overreliance on Off-the-Shelf Models
Why it matters: Generic foundation models are trained for general purposes, not your specific domain or compliance needs. These models are impressive and can produce amazing results, but they might not be efficient or capable of understanding specific engineering or product lifecycle management data such as CAD parts, BOM types, effectivity, or component availability.
However smart the latest LLM is, dropping your proprietary or domain-specific data into it without adaptation often yields shallow, imprecise answers. The fix isn’t always “train your own model” (which is expensive and complex). Instead, you can wrap the base model in a well-engineered application layer: retrieval-augmented generation, prompt libraries, and domain-specific guardrails that shape outputs to your exact needs. Together, they can deliver a PLM solution or provide PLM tools to solve specific product lifecycle problems.
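A minimal sketch of what such an application layer looks like: a retriever grounds the prompt in domain documents before any model is called. The documents are illustrative, and the naive keyword retriever stands in for a real embedding-based one:

```python
# Sketch of the application layer around a base model: retrieve domain
# documents, then ground the prompt in them (retrieval-augmented generation).
# The documents and the keyword-overlap retriever are simplified placeholders.

DOCS = [
    "BOM effectivity defines the date range in which a BOM line is valid.",
    "A CAD part carries geometry; the BOM item carries sourcing data.",
    "Service lifecycle management tracks maintenance of delivered products.",
]

def retrieve(query, docs, k=2):
    """Naive keyword-overlap retriever; a real system would use embeddings."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query, docs):
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this PLM context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What is BOM effectivity?", DOCS)
# The grounded prompt is then passed to whichever base model you choose.
```

The point is that the domain knowledge lives in your curated data and guardrails, not in the base model’s weights – so the same foundation model serves very different PLM questions.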
Ignoring AI Operations and Integration Risks Disaster
Why it matters: A proof-of-concept demo doesn’t equal production readiness. Without monitoring, rollback, version control, and error handling, even a small change in the model, API, or input data can break your workflows.
AI systems evolve rapidly—vendors update models, data drifts, and user behavior changes. If you don’t have an operational framework (MLOps) to catch these shifts, you’ll be reacting to outages and errors instead of preventing them.
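One small example of what “catching these shifts” means in practice: a guardrail that compares today’s input-error rate against a recorded baseline. The field name and tolerance threshold are illustrative assumptions:

```python
# Tiny MLOps guardrail sketch: detect input drift by comparing the share of
# unparseable records today against a recorded baseline error rate.
# The "part_number" field and the 5% tolerance are illustrative assumptions.

def error_rate(records):
    """Fraction of records missing a required field."""
    if not records:
        return 0.0
    bad = sum(1 for r in records if "part_number" not in r)
    return bad / len(records)

def drift_alert(baseline_rate, current_records, tolerance=0.05):
    """True when today's error rate exceeds the baseline by the tolerance."""
    return error_rate(current_records) > baseline_rate + tolerance

baseline = 0.02  # recorded when the pipeline went live
today = [
    {"part_number": "P-1"},
    {"desc": "record arrived without a part number"},
    {"part_number": "P-2"},
]
alert = drift_alert(baseline, today)  # 33% error rate vs. 7% threshold
```

A check this simple, run on every batch, is the difference between preventing an outage and explaining one afterwards.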
Not Keeping a Human in the Loop Invites Trouble
Why it matters: AI is probabilistic, not deterministic—it will be wrong sometimes, and sometimes wrong in ways that are hard to detect automatically.
If no human checks outputs in critical workflows, you risk introducing silent errors into decisions, customer communications, or compliance processes. The cost of a single AI-driven mistake in a sensitive area (finance, legal, healthcare) can be far greater than the cost of staffing a human-in-the-loop safeguard.
Underinvesting in Change Management Kills Adoption
Why it matters: AI changes workflows, roles, and expectations. If people don’t understand how to use it, trust it, or see its value, they won’t adopt it—even if it technically works.
Lack of training and communication leads to underutilization, misuse, or even active resistance. Change management—training, support, and leadership buy-in—turns AI from a “cool demo” into a tool people actually rely on day-to-day.
Ignoring Total Cost of Ownership (TCO)
Why it matters: AI economics are different from traditional software. Inference costs scale with usage, vector databases add query fees, and prompt-engineering cycles consume developer time.
Without careful tracking, projects can turn unprofitable even if they’re delivering value. Planning for TCO means forecasting costs across compute, storage, tuning, and maintenance—and designing for efficiency from the start.
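A back-of-envelope sketch of why AI economics differ from flat-licensed software: inference is a metered line item that scales with usage. Every number below is a hypothetical assumption, not a real price:

```python
# Back-of-envelope TCO sketch: inference cost scales with usage, unlike a
# flat software license. All figures below are hypothetical assumptions.

def monthly_inference_cost(requests_per_day, tokens_per_request,
                           price_per_1k_tokens):
    """Rough monthly spend for one metered LLM workload (30-day month)."""
    return requests_per_day * 30 * tokens_per_request / 1000 * price_per_1k_tokens

cost = monthly_inference_cost(
    requests_per_day=2000,
    tokens_per_request=1500,
    price_per_1k_tokens=0.01,  # hypothetical blended input/output rate
)
# Doubling usage doubles this line item; forecast it before launch,
# alongside vector database queries, storage, and tuning time.
```

Even a crude model like this forces the conversation about whether the value per request exceeds the marginal cost per request.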
Cutting Corners on Security and Privacy
Why it matters: AI projects often handle sensitive or proprietary data, making them a prime target for leaks or breaches.
Security can’t be “bolted on” later—especially when compliance frameworks (GDPR, HIPAA, SOC 2) are in play. Designing secure data flows, access controls, and monitoring into your AI architecture from day one protects both customers and the business.
Bonus: If the CEO Doesn’t Understand AI, Good Luck
Why it matters: Executive sponsorship drives funding, resourcing, and cultural adoption. If leadership sees AI as just another IT project, it won’t get the attention or alignment it needs.
A CEO (and senior team) who understand both AI’s potential and its limitations can push through organizational barriers, set realistic expectations, and keep efforts tied to the company’s most important priorities.
What is my conclusion?
Advanced LLMs and agentic AI systems are remarkable tools, but tools alone don’t guarantee transformation. Just as the “true cloud” era taught us, technology labels are only the surface. Real success comes from solving the right problems with the right mix of tech, data, process, and people.
Here are three questions I’d recommend you ask before starting any AI project to ensure your solution will deliver lasting value:
- Is our data structured, clean, and ready?
- Is AI embedded into our business strategy, not just our tech roadmap?
- Do we have the operational discipline and cultural readiness to sustain it?
AI technology isn’t magic – it is a set of specific capabilities that can be applied to data management problems, process automation, and decision support. It is a capability, and it is developing fast. We live in an amazing time when these new technologies can deliver results that support a business strategy, solve data integrity problems, or establish PLM processes and communication we could not imagine before. Build the right foundations, and the models will work for you. Use the technologies and components to solve business problems – not the other way around.
Just my thoughts…
Best,
Oleg
Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools, including PDM, PLM, and ERP capabilities. With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinion can be unintentionally biased.