The topic of “best practices” in PLM implementations usually brings a lot of controversy. Ever since PDM and PLM technologies moved from a toolkit approach to products and solutions, vendors and industry pundits have been looking for a silver bullet to rationalize and optimize the deployment and implementation process.
For the purpose of this article, best practices are NOT general guidance about how to make a PLM implementation successful. You can find a good example of general PLM implementation best practices from Gartner here, but this is just one example.
In practice, the traditional PLM implementation approach focuses on two specific elements – data model and processes.
What I call “best practices” is related to specific PLM implementation techniques. I first touched on this a few years ago in my PLM Best Practice Torpedo article. Think about best practices as a torpedo. When they arrive as a bundle of models, processes and rules, you need to spend organizational time applying them to the way your company does business. That is the first explosion. Over time, you will need to spend more effort changing various aspects of the predefined pieces. That is your next explosion. After a few such explosions, I think your model will be completely different from the original best practices.
There is plenty of research looking at “best practices” and trying to find the right balance in applying predefined templates. Tech-Clarity’s research – How to Identify and Implement PLM Best Practices – is a good example. You can find more about the research on the PTC blog here:
The short version of the results is this – The top performers focus on defining new processes, but they do it in context with the capabilities and processes supported by the software. This is not the only way to implement PLM, but those that got the most from PLM were more likely to take this approach, so it’s good advice to consider. See more in the guest blog post.
While in theory it might be a great approach, in practice it is very hard to improve processes and software concurrently. From a practical standpoint, it means (1) taking a few data models and process templates and (2) applying configurations and customizations based on company requirements.
This “two stages” approach is nothing new and is used by almost all implementation and service providers. But I actually think this is where PLM vendors and products got it wrong. The attempt to apply a “standard” first and then start changing it is the wrong thing to do. What if we think about it in reverse – first capturing existing processes, then applying best practices?
To understand it better, navigate to the Quartz blog post about Tesla’s autopilot feature – Tesla’s master plan uses its drivers to map every lane on the road. Tesla is planning to use its own cars to map roads.
That’s why Tesla is not just in the process of creating its own maps but is deciphering where each individual lane is on every road, across the globe. It’s doing this in part by tracking every one of its Model S cars each time a customer takes a drive, to learn where traffic typically moves. The project is immense but it is necessary if autonomous cars—which Tesla expects to be a reality in three years—are to work properly.
A similar approach is used by the social navigation system Waze (acquired by Google in 2013), which uses the power of its users to capture maps and traffic.
By connecting drivers to one another, Waze helps people create local driving communities that work together to improve the quality of everyone’s daily driving. After typing in their destination address, users just drive with the app open on their phone to passively contribute traffic and other road data, but they can also take a more active role by sharing road reports on accidents, police traps, or any other hazards along the way, helping to give other users in the area a ‘heads-up’ about what’s to come. In addition to the local communities of drivers using the app, Waze is also home to an active community of online map editors who ensure that the data in their areas is as up-to-date as possible.
In both examples, capturing real-world data improves future behavior – navigation guidance.
What is my conclusion? The best practices approach many PLM implementations took is wrong. The current approach focuses on delivering OOTB applications with predefined functionality (best practices) and then allowing the use of product configuration tools and available APIs to change their behavior. This process is complicated, expensive and leads to significant inefficiency in PLM implementations. Now, think in reverse – PLM tools are capable of capturing existing data models and processes to outsmart traditional PLM implementations. The starting point for PLM tools should not be OOTB data models and processes, but the organization’s existing data and processes, captured by the tools themselves. Among the many discussions about big data and predictive analytics, this should be one of the future PLM implementation technologies to focus on. Just my thoughts…
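To make the “capture first” idea a bit more concrete: the simplest form of process capture is mining a directly-follows graph from workflow event logs – counting which step typically follows which across recorded process instances. Below is a minimal, hypothetical sketch in Python (the ECO step names and the event log are invented for illustration, not taken from any PLM product):

```python
from collections import defaultdict

def discover_flow(event_log):
    """Build a directly-follows graph from an event log.

    event_log: list of cases, each case an ordered list of step names.
    Returns a dict mapping (step_a, step_b) -> how often step_b
    directly followed step_a across all recorded cases.
    """
    follows = defaultdict(int)
    for case in event_log:
        for a, b in zip(case, case[1:]):
            follows[(a, b)] += 1
    return dict(follows)

# Hypothetical log captured from an organization's existing
# engineering change order (ECO) process.
log = [
    ["Create ECO", "Engineering Review", "Approve", "Release"],
    ["Create ECO", "Engineering Review", "Rework",
     "Engineering Review", "Approve", "Release"],
    ["Create ECO", "Engineering Review", "Approve", "Release"],
]

flow = discover_flow(log)
# The captured model includes the "Rework" loop that actually happens
# in the organization – a predefined OOTB best-practice template might
# not include it at all.
```

The point of the sketch is the direction of the data flow: the model is derived from what the organization already does, and a best-practice template can then be compared against it, rather than imposed first and painfully reshaped later.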
Image courtesy of winnond at FreeDigitalPhotos.net