Data vs. Process. The chicken-or-egg question of the PLM industry. This topic is near and dear to many people in the PLM ecosystem. What comes first and why? My attention was caught by Jos Voskuil's blog post – Mixing past and future generations with a PLM sauce. Have a read and form your own opinion. I liked the following passage:
This culture change and a different business approach to my opinion are about modern PLM. For me, modern PLM focuses on connecting the data, instead of building automated processes with a lot of structured data. Modern PLM combines the structured and unstructured data and provides the user the right information in context.
Link is a powerful word. I appreciate the power of data connection. I was reminded of one of the write-ups I did on the Inforbix blog more than a year ago – Product Data: The Power is in the link. It goes back to Richard Wallis' presentation at Semantic Tech and Business 2013 in Berlin.
…the power of the links in Linked Data – of the globally unique identifiers of things and relationships described by URIs (Uniform Resource Identifier) in RDF – for more seamlessly interconnecting data within users’ own domains and with other data in other domains, too…
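The Linked Data idea above can be sketched in a few lines of plain Python. This is an illustrative toy, not RDF tooling: the URIs, predicates, and part names below are invented for the example, and the triple store is just a list of (subject, predicate, object) tuples. The point is what the quote describes – globally unique identifiers let you follow links from one piece of product data to related data in another domain.

```python
# Toy Linked Data triples: (subject, predicate, object).
# All identifiers below are hypothetical, for illustration only.
TRIPLES = [
    ("urn:acme:part/1001", "rdf:type", "ex:Part"),
    ("urn:acme:part/1001", "ex:name", "Bracket"),
    ("urn:acme:part/1001", "ex:usedIn", "urn:acme:assembly/500"),
    ("urn:acme:assembly/500", "ex:name", "Rotor Assembly"),
    ("urn:acme:assembly/500", "ex:documentedBy", "urn:acme:doc/42"),
]

def objects(subject, predicate):
    """Return all objects linked from `subject` via `predicate`."""
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

def related_context(subject):
    """Gather everything linked one hop out from `subject`, grouped by predicate."""
    context = {}
    for s, p, o in TRIPLES:
        if s == subject:
            context.setdefault(p, []).append(o)
    return context

# Follow the link from a part to the assembly that uses it, then to its name:
assembly = objects("urn:acme:part/1001", "ex:usedIn")[0]
print(assembly)                        # urn:acme:assembly/500
print(objects(assembly, "ex:name"))    # ['Rotor Assembly']
```

Because every entity is addressed by a unique identifier rather than buried inside a document or a process definition, any system that knows the identifier can connect to the data – which is exactly the "connecting the data" approach Jos describes.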
Jos' commentary made me think about process vs. data again. I have addressed this topic a few times in my past blogging. My first attempt was PLM: Controversy about Process and Data Management. I wanted to emphasize my strong belief in the need to solve the problem of product data access in an organization.
…the failure to design data access in organizations, was a recipe for disaster for many PLM implementations. PLM programs were focused on “how to improve processes” and forgot about how to put a solid data foundation to support cross-departmental process implementations. So, I’d like to put a quote from Bell Helicopter’s presentation during DSCC 2011 as something PLM vendors and customers need to remember – “to get the core data right first”. Just my opinion, of course.
My next attempt to talk about data vs. process came earlier this year. The discussion was triggered by the Tech4PD dialog between Jim Brown and Chad Jackson, aptly named PLM's chicken or Egg Scenario. In a somewhat confusing (to me) vote between "going beyond file control data" and "data beyond engineering has to be centralized, secure and accessible to PLM", I decided that process is more important. I explained myself in the post PLM: Data vs. Process – Wrong Dilemma? My conclusion was to focus on the product lifecycle – a data set that combines information about the product (data) with information about the process (lifecycle).
The debate made me think about why Data vs. Process is probably the wrong dilemma in the context of PLM. In my view, the right focus should be on "lifecycle" as the core value proposition of PLM and on the ability of PLM to support product development. In a nutshell, product development is about how to move the product definition (in the broadest sense of the word) from initial requirements and design to engineering and manufacturing. If I go further, the next stages of the product definition relate to maintenance and disposal. Defining what represents the product at every stage, together with what is required to move the product from one stage to the next, is the core value of product lifecycle and PLM.
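The stage-to-stage view above can be sketched as a minimal state model. To be clear, the stage names and gate conditions below are hypothetical examples of my own, not any vendor's data model – the sketch only illustrates the idea that the lifecycle pairs what the product is at each stage with what is required to move it to the next one.

```python
from enum import Enum

class Stage(Enum):
    # Hypothetical lifecycle stages, following the progression in the text.
    REQUIREMENTS = "requirements"
    DESIGN = "design"
    ENGINEERING = "engineering"
    MANUFACTURING = "manufacturing"
    MAINTENANCE = "maintenance"
    DISPOSAL = "disposal"

# For each stage: the next stage and an example gate condition required to reach it.
TRANSITIONS = {
    Stage.REQUIREMENTS: (Stage.DESIGN, "approved requirements baseline"),
    Stage.DESIGN: (Stage.ENGINEERING, "released design package"),
    Stage.ENGINEERING: (Stage.MANUFACTURING, "validated BOM and drawings"),
    Stage.MANUFACTURING: (Stage.MAINTENANCE, "as-built configuration"),
    Stage.MAINTENANCE: (Stage.DISPOSAL, "end-of-life decision"),
}

def advance(stage):
    """Return the next stage and the gate condition required to reach it."""
    if stage not in TRANSITIONS:
        raise ValueError(f"{stage.name} is the final stage")
    return TRANSITIONS[stage]

next_stage, gate = advance(Stage.DESIGN)
print(next_stage.name, "-", gate)  # ENGINEERING - released design package
```

The data about the product and the data about the process meet in this structure: the stages describe the product definition, the transitions describe the lifecycle.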
What is my conclusion? I agree with Jos. Business is getting more data sensitive these days. Not long ago, the value of data wasn't as predominant as it is today, in our Google era. It is clear to everybody that "data matters", and the best you can do to prove your point is to bring "data points". This is why the ability to bring "linked data points" about a product becomes so valuable. This is the next PLM turn. Just my thoughts…