A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

PLM 2035 and Software 3.0: What Does It Mean for Product Lifecycle Management Development?

Oleg
29 June, 2025 | 11 min read

Over the last few weeks, I have been spending a lot of time planning the growing AI development at OpenBOM and thinking about how to connect the dots between the overwhelmingly fast progress of AI-driven software development and aging traditional PLM platforms. How do you develop a PLM 2035 strategy while running PLM databases developed in the 1990s and early 2000s?

Let’s talk about it today…

Software is Evolving Faster Than Ever

Former Tesla AI Director Andrej Karpathy recently shared very interesting insights on how software development is being fundamentally transformed by artificial intelligence. Addressing students and industry newcomers, Karpathy explained that we are entering a unique era where the very nature of software is evolving, introducing new programming paradigms fueled by AI — notably large language models (LLMs). With decades of experience in pioneering AI-driven technologies such as Tesla’s Autopilot, his perspective offers a profound look into the past, present, and future of coding, automation, and software design.

Here is what I captured to help you understand the changing landscape of software in the AI era and what it means for the evolution of PLM systems — from rigid toolboxes to AI-native platforms ready to understand human intent.

Software Through the Ages: From 1.0 to 3.0

We can frame the software evolution across three transformative eras. Software 1.0 is the classical programming paradigm we all grew up with, where humans write explicit instructions in languages like C++, Python, or Java to tell computers what to do. Every function, condition, and output must be coded line by line. This is the era that gave us operating systems, CAD tools, ERP, and early PLM systems developed in the 1990s (MatrixOne, Agile, Arena Solutions, Teamcenter, Solidworks PDM, SmarTeam, and many others). It was powerful but inherently limited by how much humans could program manually.

Then came Software 2.0, where neural networks began to replace handcrafted logic. In this paradigm, behavior is no longer written as explicit instructions but encoded in model weights, learned from data through training and optimization. Programmers shifted from writing every line of code to crafting datasets, defining loss functions, and tuning models to solve tasks like image recognition, speech processing, or autonomous driving. Karpathy described how platforms like Hugging Face became the GitHub for neural models, enabling developers to share, reuse, and improve trained AI systems just as they had done with traditional code libraries (it is a very interesting example for PLM developers). Nothing like this has been done (yet) by vendors in the engineering community (if you’re aware of something, please share).

Finally, we arrive at Software 3.0, the era of LLMs. Here, neural networks become general-purpose reasoning engines programmable through natural language itself. English — or any human language — effectively becomes the programming interface. Instead of writing formal code, you now “program” by crafting prompts, instructions, or conversations. This shift opens doors to unprecedented accessibility and expressivity, allowing people who never wrote code to build workflows, apps, and analyses purely by describing what they want. To trigger your imagination, imagine PLM development and implementation looking like a conversation between a PLM business analyst and a customer.
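
To make this concrete, here is a minimal sketch of what “programming in English” looks like in practice. It assumes the OpenAI Python SDK and an API key in the environment; the model name, the prompt, and the part numbers are purely illustrative, not taken from any real PLM system.

```python
# A minimal Software 3.0 sketch: the "program" is the natural-language prompt,
# not hand-written logic. Assumes the OpenAI Python SDK (openai>=1.0) and an
# API key in the environment; all identifiers below are illustrative.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You are a PLM assistant. Given the following change request, "
    "list the likely affected items and propose a revision note.\n\n"
    "Change request: replace connector C-1042 with C-1055 in assembly A-200."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": PROMPT}],
)

print(response.choices[0].message.content)
```

The point is that changing the behavior means changing the English text, not rewriting code.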

LLMs as the New Foundation: Electricity, Operating Systems, and Semiconductor Fabs

I can see what LLMs represent in today’s tech ecosystem. Think about the electricity model: electricity reached everywhere, powering the appliances in our houses. The same analogy can be applied to LLMs powering countless applications on demand. At the same time, training these massive models requires investments comparable to building electrical power stations and state-level infrastructure – concentrated capital, research, and expertise to produce them at scale. Yet LLMs are more than just passive infrastructure; they behave like complex operating systems, orchestrating memory, compute, and tool use, becoming a kind of cloud-based CPU accessible over networks.

Because of their computational demands, LLMs remain mostly centralized cloud services today, reminiscent of the time-sharing mainframes of the 1960s. However, as models become more efficient, personal LLM computing will emerge on edge devices for specialized tasks. The analogy between mainframes and the Windows/PC era is very interesting. Think about closed-source providers like OpenAI and Google Gemini, and open alternatives like Meta’s LLaMA – mirroring the old Windows versus Linux divide.

LLMs as “People Spirits”: Strengths and Limitations

Think of LLMs as simulations of people trained on vast amounts of human-generated text. These models possess superhuman memory and encyclopedic knowledge, capable of recalling facts, synthesizing insights, and answering questions across domains with remarkable breadth. At the same time, they carry fundamental limitations: hallucinations, factual errors, and a lack of long-term memory that prevents them from consolidating knowledge over time like humans do. Their intelligence is jagged and stochastic; they can perform brilliantly on one task and fail unpredictably on another. Add to this their vulnerability to prompt injections and data leakage, and it becomes clear that building AI-powered applications requires carefully navigating both their power and their flaws.

Therefore, the immediate future of software lies in partial autonomy applications that blend human oversight with AI capabilities. For example, AI-assisted coding tools like Cursor allow developers to offload routine or complex tasks to AI while maintaining final control. Features like GUI-based diffs make it easy to verify suggested code changes, and autonomy sliders let users adjust how much freedom they give to the AI, from simple autocomplete to generating entire code blocks.

I liked the example from Tesla Autopilot development. There, neural networks gradually replaced handcrafted software for perception, but human supervision remained critical as autonomy expanded incrementally. Breakthroughs that looked rapid from the outside were, in reality, the result of years of careful engineering and iteration. This analogy reminds us to temper our expectations for fully autonomous AI agents and instead focus on building robust systems where AI amplifies human productivity while remaining under human judgment.

Democratizing Software: The Rise of Vibe Coding

One of the most exciting implications of Software 3.0 is the concept of vibe coding — programming by simply describing what you want in natural language. This shift lowers the barrier to software creation, enabling non-programmers to build workflows, apps, and automations purely through conversation with AI.

Yet Karpathy cautions that while code generation is becoming trivial, the hard parts remain: deploying production-grade systems, building infrastructure, managing devops, authentication, and scaling. The gap between writing code and delivering working software is still wide and will continue to demand innovation in deployment and operational processes.

Building Software for AI Agents: Meeting LLMs Halfway

How can we build software that can be used both by humans and AI agents? Just as websites created robots.txt to guide crawlers, we might see llms.txt files emerge to instruct LLM agents how to interact with domains. I’m learning how Stripe is already reformatting their documentation into AI-consumable markdown formats, recognizing that traditional docs full of “click here” instructions are not useful for agents that need machine-executable commands. Protocols like Anthropic’s Model Context Protocol hint at future standards for direct AI-agent interactions, unlocking massive automation potential while reducing integration friction.
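
Here is a hypothetical sketch of what “meeting LLMs halfway” could look like: an agent fetches a machine-readable manifest published by a domain (assuming the llms.txt convention) and uses it as grounded context instead of scraping human-oriented pages. The domain name and task below are made up for illustration.

```python
# A hypothetical sketch: fetch a domain's llms.txt manifest (assuming the
# domain publishes one) and compose an agent prompt grounded in it.
import urllib.request

def fetch_agent_manifest(domain: str) -> str:
    """Fetch the llms.txt file that tells an LLM agent how to use this domain."""
    url = f"https://{domain}/llms.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8")

def build_agent_context(domain: str, task: str) -> str:
    """Combine the domain's own instructions with the task the agent must do."""
    manifest = fetch_agent_manifest(domain)
    return (
        f"You are an agent working with {domain}.\n"
        f"Domain instructions (from llms.txt):\n{manifest}\n\n"
        f"Task: {task}"
    )

# Example with a made-up domain:
# print(build_agent_context("plm.example.com", "Create a new item revision"))
```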

Graph Based Data Models, Digital Silos and Digital Thread

While Large Language Models (LLMs) bring powerful natural language reasoning and conversational capabilities to PLM, they still suffer from inherent limitations such as hallucinations, inconsistent factual recall, and a lack of structured memory. This can be a real issue – imagine a BOM “where used” request that returns only part of the results (not all assemblies).

This is where graph-based data models provide an essential foundation. Unlike rigid relational schemas or unstructured text repositories, graph databases capture the semantics of product data — parts, assemblies, suppliers, configurations, and workflows — in a flexible yet structured way. By organizing information as interconnected nodes and relationships, graphs eliminate data silos and create a holistic, navigable knowledge base. When LLMs are combined with graph-based data models, the AI gains reliable access to verified, structured data, reducing hallucinations and enabling precise contextual responses. In this way, graph models ground LLM capabilities within a solid digital thread architecture, ensuring AI interactions remain accurate, traceable, and deeply connected to the real product and organizational data they represent.
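
To illustrate the “where used” example above, here is a minimal sketch using networkx, a generic graph library rather than any PLM product. The traversal returns the complete set of parent assemblies, and that verified result can then be handed to an LLM as context instead of letting it guess.

```python
# A minimal sketch of a graph-grounded "where used" query.
# The BOM is a directed graph: edge parent -> child means "parent uses child".
import networkx as nx

bom = nx.DiGraph()
bom.add_edge("Bike", "Wheel")
bom.add_edge("Bike", "Frame")
bom.add_edge("Wheel", "Bolt M6")
bom.add_edge("Frame", "Bolt M6")

def where_used(graph: nx.DiGraph, part: str) -> set[str]:
    """Return every assembly that directly or indirectly uses the part."""
    return nx.ancestors(graph, part)

print(where_used(bom, "Bolt M6"))  # {'Wheel', 'Frame', 'Bike'} - nothing missing

# The complete, verified result can be injected into an LLM prompt as context,
# so the answer stays grounded in the graph rather than in the model's memory.
```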

The Technology vs. Transformation Debate in PLM

This brings us back to PLM. Many PLM practitioners and consultants argue that technology is not the biggest issue in PLM implementations. They emphasize that organizational transformation, process alignment, and cultural change are what truly determine success. And indeed, these human and business dimensions are critical. Yet to me, this remains an interesting and unresolved debate.

While I fully agree that organizational transformation is essential, we cannot ignore the reality that much of PLM technology in use today is built on architectures and data models designed 25 or more years ago. These legacy foundations impose significant limitations on what PLM can achieve in the modern era. As the rest of the software world advances rapidly toward AI-native, graph-based, and composable architectures, PLM systems still burdened with rigid monolithic designs will struggle to keep up.

Future-ready PLM requires both — the organizational readiness to embrace change and the technological leap to unlock what is possible. Without modern, scalable, and intelligent infrastructure, even the best transformation programs will remain confined by the old tools they rely on.

The Four Generations of PLM Development

This software evolution is mirrored in how PLM systems themselves have evolved over the past decades. The first generation of PLM systems were little more than heavy toolboxes. They offered frameworks and APIs, but to achieve anything meaningful, companies needed deep programming expertise to customize data models, workflows, and user interfaces. Implementation was code-driven, expensive, and often took years before end users could see real value. You might remember first-generation PLM platforms that are still around us, with slightly modified look and feel.

The second generation brought configurable object data models built on relational databases. These systems became more accessible to administrators and business analysts, who could define items, BOMs, and workflows without hardcore programming. However, they still carried the inherent rigidity of relational schemas and struggled to represent complex product semantics or multi-domain relationships naturally.

Today, we are entering the third generation: Digital Thread as a Service (DTaaS). This era is defined by polyglot data architectures that combine SQL, document, and graph databases to store different product data types optimally. Graph-based data models capture the semantics of product structures, configurations, and multi-system relationships in a natural way. AI is emerging here as a focused enabler, automating specific vertical tasks like BOM comparison, classification, and procurement suggestions while also enhancing user experiences with conversational interfaces. In this model, AI acts as a productivity amplifier rather than a replacement.
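
As an illustration of the polyglot idea, here is a hypothetical sketch of a routing layer that sends each type of product data to the store that fits it best. The in-memory dictionaries and lists are stand-ins for real SQL, document, and graph backends; none of the names come from an actual DTaaS product.

```python
# A hypothetical polyglot persistence sketch: item master data goes to a
# relational store, unstructured content to a document store, and semantics
# (uses, supplied-by, configured-as) to a graph. All backends are stubs.
from dataclasses import dataclass, field

@dataclass
class PolyglotProductStore:
    sql_rows: dict = field(default_factory=dict)      # stand-in for a relational DB
    documents: dict = field(default_factory=dict)     # stand-in for a document store
    graph_edges: list = field(default_factory=list)   # stand-in for a graph DB

    def save_item(self, item_id: str, attributes: dict) -> None:
        """Item master attributes belong in the relational store."""
        self.sql_rows[item_id] = attributes

    def attach_document(self, item_id: str, doc: dict) -> None:
        """Specs, requirements, and notes go to the document store."""
        self.documents.setdefault(item_id, []).append(doc)

    def link(self, parent_id: str, child_id: str, relation: str) -> None:
        """Relationships live in the graph, where traversal is natural."""
        self.graph_edges.append((parent_id, relation, child_id))

store = PolyglotProductStore()
store.save_item("A-200", {"description": "Main assembly", "rev": "B"})
store.attach_document("A-200", {"type": "spec", "text": "Operating temp -20..60C"})
store.link("A-200", "C-1042", "uses")
```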

Looking ahead, the fourth generation of PLM will fully embrace Software 3.0 principles. AI-native PLM systems will configure themselves through natural language instructions. Users will describe workflows, BOM structures, or integration needs in plain language, and the system will assemble and optimize itself dynamically. PLM will transform from a complex tool to an intelligent digital engineering partner, capable of understanding intent, reasoning about product data, and continuously adapting to business changes.
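
To trigger the imagination further, here is a hypothetical sketch of configuration by conversation: a plain-language request is translated by an LLM into a structured workflow definition that a PLM system could then apply. It assumes the OpenAI Python SDK; the schema keys and the request itself are illustrative, not any vendor’s format.

```python
# A hypothetical sketch of "configure by describing": translate a natural-
# language workflow request into a structured definition. Assumes the OpenAI
# Python SDK; the JSON keys are made up for illustration.
import json
from openai import OpenAI

client = OpenAI()

REQUEST = (
    "When an ECO is approved by engineering and quality, "
    "release the affected items and notify procurement."
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Translate the user's workflow request into JSON with keys "
                    "'trigger', 'approvers', 'actions'. Return JSON only."},
        {"role": "user", "content": REQUEST},
    ],
    response_format={"type": "json_object"},
)

workflow_definition = json.loads(completion.choices[0].message.content)
print(workflow_definition)
```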

Thoughts about PLM Development – Limitations and Future Trends

CAD and PLM platforms (whether we call them BOM tools, digital thread solutions, or product development platforms) can only be built on a strong data management foundation. Data management is a critical differentiator in building modern PLM software. It is important to observe current trends and see examples that demonstrate what is happening.

For a long time, I have advocated for graph models and polyglot persistence as a strategic foundation for modern PLM platforms, focusing on the creation of data products rather than “yet another out-of-the-box PLM tool using relational database technology.” I advocated for a manufacturing graph vision and the development of manufacturing networks, and I can see how other PLM developers continue to follow this path – new vendors, research work, and graph-based PLM software evolving into development infrastructure. To me, all of these are examples of the growing adoption of a new data management architecture and tools.

I recently read a LinkedIn post from a PLM software vendor announcing that they had decided to shelve years of proven work and rebuild from zero. Doing it a year after an investment round says a lot about what is happening in PLM development. To me, this is a remarkable example that demonstrates a growing understanding of the limitations of traditional PLM software architectures.

While leading PLM vendors are not announcing the discontinuation of their 30-year-old platforms (yet), I can see how these platforms will gradually decline, with growing interest in adopting modern graph-based data models, polyglot data management architectures, AI platforms, new ways to develop implementations, and support for a human-centric PLM vision.

What is my conclusion?

PLM is entering the Software 3.0 era. Software is changing faster and more fundamentally than ever before. For PLM, this means moving beyond current PLM development architectures that require programming before productivity and beyond SQL-bound object models. It means rethinking PLM data modeling with a polyglot data management architecture, adding whatever databases and LLMs are needed to produce results instead of developing yet another PLM editor. The future lies in new data management architectures and AI-driven systems that understand human intent and can adapt themselves in real time to support engineering and manufacturing excellence. The road ahead is rich with invention, and those building these foundations today will define the PLM landscape for decades to come.

Just my thoughts…

Best, Oleg

Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools, including PDM, PLM, and ERP capabilities. With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinion can be unintentionally biased.
