Imagine you have finally chosen a PLM system for your company. The process was long – you ran tons of meetings, benchmarks, evaluations, and comparisons – and you’re finally done. You think the most painful part is over, but it is not. The next step can be even more painful. You need to deploy (or, in simple words, install) the PLM software, which includes tons of infrastructure work – servers, configurations, setup – before you can finally run the magic PLM system in your browser or on your desktop.
A Siemens article earlier this week, PLM guide – what’s new in Deployment Center, takes us into the reality of PLM system deployment. The article is a bit scary:
First, to learn more about Deployment Center 3.0, a set of instructional videos has been created. These videos demonstrate the latest capabilities, such as how to perform a major Teamcenter upgrade with the Deployment Center. In addition, more videos cover topics such as troubleshooting scanning environments that contain EWI, deployment metrics, load balancers, and installing software only supported by the Deployment Center.
Words like “major upgrade”, “troubleshooting scanning” or “load balancers” are not the words you want to hear in the 21st century. The consumerization of technology and the availability of modern cloud systems narrow the question of system installation down to a simple one – what is the URL? Nobody wants to install anything these days. Unfortunately, most PLM software packages have roots in the 1990s and include a complex history of mergers, acquisitions, and transformations. With some exceptions, this can be said about all PLM vendors.
The problem of deployability is not simple. You can find an entire section in the Siemens PLM blog about it – PLM Deployability. You can see articles about rapid start deployment, AWS deployments, and others.
The AWS option is actually my favorite. Siemens is reporting a few interesting stories about Teamcenter on AWS. Here is what Siemens is saying:
Teamcenter customers around the world are discovering the value of deploying product lifecycle management (PLM) on the cloud using Amazon Web Services (AWS).
Teamcenter is certified for cloud deployment to address the growing need for our customers to manage PLM as an operational expense, rather than an up-front investment in hardware, software, and consulting. Cloud deployment gives customers the world’s most widely implemented PLM software at an appealing price for many small- to medium-sized businesses.
NP Innovation, a developer of water treatment solutions, was one such company. With minimal IT staff and budget, and zero up-front investment, they recently implemented Teamcenter on the cloud using Amazon Web Services (AWS) in just 10 days.
Siemens claimed last year that Teamcenter is the only PLM platform that passed AWS certification. I’m not sure if this is true, because I can also see articles hinting that Windchill can also be deployed to AWS or select other providers. Check this one.
So, where do you think PLM deployments are going? As far as I can see, PLM vendors are focusing on how to simplify deployments and upgrades. I can understand the reason behind that – the demand from customers not to get involved in messy PLM installations, upgrades, and deployments is high. Therefore, vendors like Aras are taking ownership of upgrades and selling them as part of their subscriptions.
At the same time, there is a limit to how seamlessly old systems can be installed, deployed, and upgraded in a cloud environment. The infrastructure and technology create a barrier to making it easy. Also, the single-tenant architecture of most existing PLM systems limits how far actual PLM subscription costs can come down. The actual load and utilization of PLM servers is low, and a single-tenant system will use expensive AWS infrastructure in much the same way as a server in the IT department’s rack. Also, deploying two Teamcenters on isolated AWS servers won’t solve the problem of collaboration between companies, as was promised by cloud PLM marketing.
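To see why low utilization matters for single-tenant cloud costs, here is a back-of-the-envelope sketch. All the numbers (server cost, utilization, tenant count) are my own illustrative assumptions, not vendor pricing:

```python
# Illustrative cost sketch (all numbers are assumptions, not vendor pricing):
# compare the effective cost per active hour of a dedicated single-tenant
# server vs. a multi-tenant server shared across several customers.

HOURS_PER_MONTH = 730  # average hours in a month

def cost_per_active_hour(monthly_server_cost, utilization, tenants=1):
    """Cost per hour of actual use, split across tenants sharing the server."""
    active_hours = HOURS_PER_MONTH * utilization
    return monthly_server_cost / (active_hours * tenants)

# A single-tenant PLM server that is busy only 10% of the time
single = cost_per_active_hour(monthly_server_cost=500.0, utilization=0.10)

# The same class of server shared by 10 tenants at 60% combined utilization
multi = cost_per_active_hour(monthly_server_cost=500.0, utilization=0.60, tenants=10)

print(f"single-tenant: ${single:.2f} per active hour")
print(f"multi-tenant:  ${multi:.2f} per active hour")
```

With these assumed numbers the single-tenant server costs dozens of times more per hour of actual use – which is why moving an old single-tenant system onto AWS rarely changes the cost equation by itself.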
The solution to the problem is to move from a traditional PLM stack to a cloud PLM stack. Check my earlier article – Cloud and Global PLM stack transformation. It will give you an idea of how much benefit manufacturing companies can gain from focusing on a new PLM stack and transforming PLM systems onto a modern infrastructure and architecture.
The future of software deployment is not binary. An organization must evaluate its best deployment and consumption options. A single target (on-premise, AWS, or a hosting data center) is not the right way to do it. An organization must plan how to distribute data and services to ensure the architecture is optimized and data is available.
The future of software deployment is actually software code. The automation of software deployment starts with architecture, infrastructure, and tech stacks. Application code written now is enormously better than it was 10-15 years ago. Huge achievements were made by frameworks like Spring, Rails, Node, and Spring Boot. The DevOps experience of Kubernetes is completely different from the WebSphere era in which most PLM software was originally developed. But the key element is to automate it as a single stream of software deployment.
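“Deployment is software code” can be sketched very simply: the whole environment is described as data in code and rendered to a manifest that a CI/CD pipeline applies, so an upgrade becomes a one-line code change instead of a manual installation project. This is a minimal illustration only – the service names, images, and manifest shape below are hypothetical, not any vendor’s actual deployment format:

```python
# Minimal "deployment as code" sketch. All service names, images, and the
# manifest shape are hypothetical; real tools (Kubernetes, Terraform, etc.)
# use their own formats, but the idea is the same: the environment is data.

import json
from dataclasses import dataclass, asdict

@dataclass
class Service:
    name: str
    image: str       # container image, including the version to deploy
    replicas: int

def render_manifest(services):
    """Render the environment description to a JSON manifest that a
    deployment job could apply automatically."""
    return json.dumps({"services": [asdict(s) for s in services]}, indent=2)

# The environment lives in version control; upgrading is just a code
# change – bump the image tag and let the pipeline redeploy.
environment = [
    Service(name="plm-app", image="example/plm-app:2.1", replicas=3),
    Service(name="plm-db", image="example/plm-db:1.4", replicas=1),
]

print(render_manifest(environment))
```

The point is not the ten lines of Python but the workflow they imply: every install, upgrade, and rollback goes through the same reviewed, repeatable stream of code changes.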
What is my conclusion? From free upgrades to no upgrades, from improved deployment to a fully automatic cloud stack managed by DevOps services. This is a transformation we will see in PLM. Many other enterprise software segments have already passed this stage. But the PLM business is very sticky. Systems that were deployed 15-20 years ago are still alive, waiting for the perfect moment to be disrupted and replaced by new PLM systems. Just my thoughts…
Disclaimer: I’m co-founder and CEO of OpenBOM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups, and supply chains. My opinion can be unintentionally biased.