Unless you have been living under a rock for the last few years, you’ve heard about DevOps. So, for starters, here is my favorite description of DevOps, from Atlassian.
DevOps is a set of practices that automates the processes between software development and IT teams, in order that they can build, test, and release software faster and more reliably. The concept of DevOps is founded on building a culture of collaboration between teams that historically functioned in relative siloes.
What I especially like in this description is the notion of silos. In the past, development and operations teams were disconnected. Not anymore, and this connection becomes most essential when it comes to cloud technologies and SaaS applications.
Time is running fast. Back in 2014, I wrote the article Why to ask cloud PLM vendors about DevOps and Kubernetes. Check the picture in the article, which will give you an idea of containers and how they differ from virtualization.
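To make the container idea concrete, here is a minimal, hypothetical Dockerfile sketch (the base image and service name are my illustrative assumptions, not from the article): instead of provisioning a full virtual machine with its own operating system, a container packages only the application and its dependencies on top of a shared kernel.

```dockerfile
# Hypothetical example: packaging a small PLM web service as a container.
# Unlike a VM image, this bundles only the app and its dependencies;
# the host OS kernel is shared across all containers.
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
# A container starts in seconds, versus minutes to boot a full VM.
CMD ["python", "server.py"]
```

The practical difference is density and speed: dozens of such containers can run on a single host that would otherwise support only a handful of virtual machines.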
The PLM industry has been slowly adopting the cloud for the last decade, mostly focusing on virtualization of existing PLM applications. It was an obvious and easy step to take for 15-20 year old technology – bringing it on top of existing IaaS infrastructures. However, DevOps is one of the key elements of turning hosted PLM systems into cloud services. Check my article – DevOps and Continuous Delivery of PLM customization. This is my conclusion from 2016:
PLM vendors will have to invest in technologies and methods to simplify deployment, flexibility, and speed of implementations. By enabling such methods, vendors can bring a new type of PLM infrastructure in place – agile, flexible, and capable of adapting to specific requirements and customer needs. The cost of customization support and delivery in such systems will be a fraction of what vendors and service providers need to spend today to customize, deploy, and test PLM systems. It will enable PLM customization at a speed we have never seen before.
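As a sketch of what continuous delivery of PLM customization could look like in practice, here is a hypothetical CI pipeline in GitHub Actions syntax (the job names, scripts, and deploy step are my assumptions, not any vendor's real pipeline): every customization change is versioned, automatically tested, and deployed, instead of being installed by hand on a customer server.

```yaml
# Hypothetical CI/CD pipeline for a PLM customization package.
name: deploy-plm-customization
on:
  push:
    branches: [main]

jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Run automated tests against the customization before it ships.
      - name: Run tests
        run: ./scripts/run_tests.sh
      # Package and roll the customization out to the cloud service;
      # no manual access to production servers is needed.
      - name: Deploy
        run: ./scripts/deploy.sh production
```

The point of the sketch is the workflow, not the tool: any pipeline that tests and deploys automatically on every change removes the manual, per-customer delivery cost described above.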
A quick jump to 2019 and my article From CDs to Cloud Stack and DevOps Services. PLM deployment is still a problem. PLM vendors are focusing on optimizing the deployment of their existing PLM stacks and making them agile. The cost of these upgrades matters because the competition is heating up. Aras includes upgrades in their subscription, which sets a new standard for PLM deployment.
However, most PLM users and service providers are still living in the era of “PLM deployments”, which focuses on hosting existing on-premise systems in the cloud. Earlier this week, I had an interesting exchange of comments around my article – How different will be a reseller model for SaaS PLM. Check the comments here.
On the other hand, maintenance delays are inherent in SaaS offering right? Coz there are inter-dependencies between the PLM software provider and the cloud service provider. A common liaison could help ease those delays and maybe this is an area which VARs (and other service providers) can look at?
A simple example – user is unable to promote a part. The dev team identifies the defect and fixes it, but the time to deploy the fix in production is lot more because the access is controlled by the cloud ops team. Even for simpler things like server restart and getting a DB dump there are lead times because of the dependency – This increases maintenance lead times.
The comment struck me with the notion of silos – PLM system provider and cloud service provider. And I think this is a fundamental gap in the way many PLM vendors still see their cloud PLM systems deployed, using a hosted model.
The alternative to such a model is DevOps as an integrated part of the engineering organization. This is the way SaaS companies run their business, and there are some examples of how to do it in the CAD and PLM industry.
Check my article Autodesk Forge DevCon 2019 – Microservices, API and DevOps. Autodesk Forge cloud development resonates strongly with the strategic technological reasons for building SaaS applications, as opposed to providing multi-cloud hosted solutions that can run in any environment – on-premise, privately hosted, and public cloud. Autodesk chose this strategy a long time ago.
Another example is Onshape. Check my article here, which I captured from the Onshape user group meeting in Boston last year.
At OpenBOM (disclaimer, I’m CEO and co-founder), the same DevOps principles allow us to update OpenBOM applications frequently, on a monthly basis, and keep the service up and running without interruptions.
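The article does not describe OpenBOM's actual stack, but as a generic illustration of how a SaaS service can ship monthly updates without interruption, here is a hypothetical Kubernetes rolling-update sketch (all names, images, and replica counts are my assumptions): new application pods are started and health-checked before old ones are retired, so users never see downtime.

```yaml
# Hypothetical Kubernetes Deployment with a rolling-update strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: plm-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep full capacity during the update
      maxSurge: 1         # bring up one new pod at a time
  selector:
    matchLabels:
      app: plm-app
  template:
    metadata:
      labels:
        app: plm-app
    spec:
      containers:
        - name: plm-app
          image: registry.example.com/plm-app:v2  # the new release
          readinessProbe:      # old pods retire only after new ones pass
            httpGet:
              path: /health
              port: 8080
```

Whatever the specific tooling, the pattern is the same: deployment is an automated, incremental operation owned by the engineering team, not a scheduled maintenance window negotiated with a separate hosting provider.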
In my article – SaaS PLM – build the airplane when you’re flying it, I shared more thoughts on how SaaS applications differ from hosted cloud systems and where the evolution of future PLM development will take us. A bit more about how I see future PLM being developed is in my article – PLM Circa 2020.
The previous generation of PLM systems was designed on top of an SQL database, installed on company premises to control and manage company data. It was a single version of the truth paradigm. Things are changing and the truth is now distributed. It is not located in a single database in a single company. It lives in multiple places and it updates all the time. The pace of business keeps getting faster. To support such an environment, companies need to move onto different types of systems – ones that are global, online, optimized to use information coming from multiple sources, and capable of interacting in real-time.
Isolated SQL-based architectures running in a single company are a thing of the past. SaaS is making the PLM upgrade problem irrelevant, as everyone is running on the same version of the same software. Furthermore, the cost of systems can be optimized and SaaS systems can serve small and medium-size companies with the same efficiency as large ones.
What is my conclusion? DevOps is an essential discipline for future PLM development. Separate PLM service and PLM vendor organizations can become a thing of the past, together with the old PLM architectures of the 2000s… Future organizations will be built around a strong DevOps core – a developing and delivering organization at the same time. PLM service organizations will change. Actually, the most advanced PLM service organizations will become an extension of PLM DevOps, where SREs (Site Reliability Engineers) work alongside PLM developers. The outcome is a new level of service, which will lead to the next step of PLM software transformation. Just my thoughts…
Disclaimer: I’m co-founder and CEO of OpenBOM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups, and supply chains. My opinion can be unintentionally biased.
Image by Dirk Wouters from Pixabay