How to transform a PLM system into an ‘insight engine’

The perceived value of product lifecycle management (PLM) is to help companies boost innovation and improve product development, operations, and service processes. This is indeed very important, hence the many discussions in the PLM industry about how to improve processes in general. However, most of these discussions get stuck at the level of PLM implementations. Existing PLM systems have reached their limits, and analog PLM implementations demand a PLM hero to see them through.

Over the weekend I’ve been reading an excellent HBR article, Building an Insights Engine (https://hbr.org/2016/09/building-an-insights-engine). The authors – executives from Unilever and Kantar Vermeer – describe the elements of an insights engine and show how it works at consumer goods giant Unilever.

One of the most interesting things that caught my attention was the passage about data synthesis:

What matters now is not so much the quantity of data a firm can amass but its ability to connect the dots and extract value from the information. This capability differentiates successful organizations from less successful ones: According to the i2020 research, 67% of the executives at overperforming firms (those that outpaced competitors in revenue growth) said that their company was skilled at linking disparate data sources, whereas only 34% of the executives at underperformers made the same claim.

Another CMI program, PeopleWorld, addresses the problem “If only Unilever knew what Unilever knows.” Often the answer to a marketing question already exists in the firm’s historical research; finding it is the challenge. But using an artificial intelligence platform, anyone within Unilever can mine PeopleWorld’s 70,000 consumer research documents and quantities of social media data for answers to specific natural-language questions.

Data intelligence, analysis, big data, and customer information made me think about data integration in manufacturing.

PLM integrations are not easy

For a long time, integration has been one of the biggest challenges in manufacturing, especially when it comes to PLM implementations. I can see three main problems standing in front of any PLM vendor that tries to improve the “data integration and intelligence” of PLM implementations.

1. Create a common set of data elements.

This is so important, and it is often missed in implementations. The challenge for the latest generation of PLM systems is to come up with out-of-the-box best practices that can serve as a starting point for any implementation. So businesses start from ready-made templates and often get stuck when changes are needed. At the same time, a company should have full flexibility to define descriptive data models that can support business insight and decision-making processes.
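To make this concrete, here is a minimal sketch in Python of one way such a model could look. All names here (Item, the sample fields) are hypothetical illustrations, not any vendor’s actual data model: a small core of common data elements serves as the out-of-the-box template, while an open extension map gives the company flexibility to add descriptive attributes without schema changes.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class Item:
    """Out-of-the-box template: the common data elements
    every implementation starts from."""
    part_number: str
    description: str
    revision: str = "A"
    # Company-specific descriptive attributes live in an open
    # extension map, so the template never blocks local changes.
    attributes: Dict[str, Any] = field(default_factory=dict)

# A business starts from the template and extends it without
# touching the schema -- e.g., adding cost and compliance fields.
bracket = Item(
    part_number="PN-1001",
    description="Mounting bracket",
    attributes={"unit_cost_usd": 4.25, "rohs_compliant": True},
)
print(bracket.attributes["rohs_compliant"])  # True
```

The trade-off in this kind of design is that extension attributes are not validated by the core schema, which is exactly where the governance questions in the next two problems come in.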

2. Create one “version of truth”

How do you form a single, trusted data representation? This is the most critical question. Data is duplicated in many manufacturing and enterprise systems. For many years, companies created islands of data, driven by operational excellence programs and each department’s interest in “divide and conquer” when it came to enterprise software. So creating a “version of truth” that crosses IT and department boundaries is not a simple task.
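In miniature, forming a “version of truth” comes down to matching duplicated records across systems and deciding which source to trust for each field. The sketch below is a deliberately simplified illustration, assuming hypothetical PLM/ERP/MES records and a fixed source-precedence rule; real implementations need fuzzy matching and per-attribute governance.

```python
def normalize(part_number: str) -> str:
    """Match key: strip separators and case so 'pn 1001' == 'PN-1001'."""
    return "".join(ch for ch in part_number.upper() if ch.isalnum())

# The same part, duplicated in three islands of data (hypothetical).
records = [
    {"source": "PLM", "part_number": "PN-1001", "description": "Mounting bracket"},
    {"source": "ERP", "part_number": "pn 1001", "description": "BRACKET, MOUNTING"},
    {"source": "MES", "part_number": "PN1001",  "description": ""},
]

# Simple precedence rule: for each field, trust sources in this order.
PRECEDENCE = ["PLM", "ERP", "MES"]

# Group duplicates by the normalized match key.
groups: dict = {}
for rec in records:
    groups.setdefault(normalize(rec["part_number"]), []).append(rec)

for key, dupes in groups.items():
    dupes.sort(key=lambda r: PRECEDENCE.index(r["source"]))
    golden = {}
    for rec in dupes:
        for name, value in rec.items():
            # Take the first non-empty value from the most trusted source.
            if name != "source" and value and name not in golden:
                golden[name] = value
    print(key, "->", golden)
```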

3. Integrate disparate datasets

The system must onboard any new data quickly: clean it up, index it, create relationships, and process it into a form that can support business programs and initiatives. Existing PLM data is too focused on engineering, and PLM vendors have a hard time acquiring data sets from outside engineering departments.
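Here is a minimal sketch of those onboarding steps (clean, index, relate) on a hypothetical supplier lead-time data set arriving from outside engineering. The file layout, field names, and linking rule are all invented for illustration.

```python
import csv
import io

# Hypothetical new data set from outside engineering: a CSV export
# of supplier lead times keyed by part number.
raw = io.StringIO(
    "part_number,supplier,lead_time_days\n"
    " PN-1001 ,Acme,14\n"
    "PN-2002,Beta,30\n"
)

def normalize(pn: str) -> str:
    return "".join(ch for ch in pn.upper() if ch.isalnum())

# 1. Onboard and clean: parse, strip whitespace, coerce types.
rows = [
    {
        "part_number": normalize(row["part_number"]),
        "supplier": row["supplier"].strip(),
        "lead_time_days": int(row["lead_time_days"]),
    }
    for row in csv.DictReader(raw)
]

# 2. Index: key the cleaned data for fast lookup.
by_part = {r["part_number"]: r for r in rows}

# 3. Create relationships: link the new data to existing engineering items.
engineering_items = {"PN1001": {"description": "Mounting bracket"}}
for pn, item in engineering_items.items():
    if pn in by_part:
        item["supply"] = by_part[pn]  # the cross-domain connection

print(engineering_items["PN1001"]["supply"]["lead_time_days"])  # 14
```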

These are the critical requirements I see standing in front of any organization thinking about how to move from old-fashioned enterprise data control mechanisms to an insight engine.

What is my conclusion? For many years, PLM platforms focused on how to control data in a single company. It started with CAD files and related engineering data. These days it is expanding into the related domains of manufacturing, services, etc. However, solving data management for a single department, or even a whole company, will only perpetuate the “data silos” problem. The future belongs to a new type of data management platform capable of connecting data across multiple domains and companies, expanding into the cloud, connected products, and big data. Just my thoughts…

Best, Oleg

Want to learn more about PLM? Check out my new PLM Book website.

Disclaimer: I’m co-founder and CEO of openBoM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups, and supply chains. My opinion can be unintentionally biased.

  • Lars Taxén

    A key insight, I would say, is that the same piece of data may or may not be relevant depending on the context in which it appears. So, for example, the color of a car may be highly relevant in marketing but less relevant in manufacturing. The context may be called different things – domain, function, business unit, silo or whatever – but contextualization is a fact of life that doesn’t go away because we want it to. Silos are not problems; they are inevitable. Another key insight, then, is how to maintain local ‘silo’ integrity while at the same time securing the necessary connections between silos, i.e. not making them into isolated islands. Future data management platforms need to acknowledge this, above all that contextualization drives data interpretation. Thus, there can be no “truth” regarding data; only relevance or not for taking actions. It’s difficult to see from the article exactly what Unilever has done, but it seems to me that they did a good job in accessing and collecting data that has some kind of relevance in a particular context. Just some additional thoughts….

  • beyondplm

    Lars, thanks for sharing your insight! I agree completely – silos are inevitable. New data management solutions have to figure out how to connect and correlate data between them.

  • Eric Milgram

    Hi Oleg,

    As always, I really enjoyed this post, and you raised some really great issues that I bet the vast majority of people interested in PLM will not realize until 5 – 20 years from now! On the topic of PLM, I see far too much focus on system architecture and implementation at the expense of understanding the organization’s current state, setting a compelling vision for the future that isn’t just the PLM system vendors’ marketing material repackaged with the company lexicon, and making a strong effort to really understand the change impacts necessary to go from the current state to the ideal state.

    Although there is no doubt that system architecture and implementation are critical aspects of a PLM journey, without an obsessive focus on organizational structure and its attendant impact on data quality, the resultant value of any PLM implementation is going to be greatly diminished. Additionally, the ROIs that are frequently promised, such as a year 1-3 productivity gain of between 1% and 5% of raw materials costs, will not be realized. Years of neglecting data quality cannot be erased simply by bringing in a new software system.

    In the PLM space, I see far too much constrained thinking. Companies who view PLM simply as replacing paper specifications stored in metal filing cabinets with specifications stored as PDFs in document management systems are akin to people who insisted on buggy-whip holders in their first automobile. Sears-Roebuck used to spend all year working on their catalog, and then, once it was printed, it was static for the entire year. Contrast that model with the Amazon model of today. The Amazon catalog changes thousands of times per hour based on customer demand, supplier availability, etc. I’ve even had some people say to me, “BUT OUR PLANTS STILL REQUIRE PAPER!,” to which I reply, “Do they not have computers and printers in those plants?”

    Some companies are a little more sophisticated, but they have unrealistically high expectations for what their IT groups can accomplish with master data management systems, as well as unrealistically low expectations for the amount of effort required by the various non-IT business units for data governance and data quality. Data quality assurance should be just as important as (if not more important than) physical product quality, but in my experience, it’s an afterthought. Imagine where companies like Netflix, Facebook, or Google would be if they were not obsessive about harvesting, cleansing, assessing, and processing data.

    Even if an R&D organization within a very large company is able to standardize their workflows and business processes globally, the challenge of R&D successfully linking with all of the various operations groups around the globe is daunting. I see far too many organizations ignoring that scale factor.

    When the issue is raised in meetings, someone inevitably produces a PowerPoint slide with various business units in different geographic regions represented as SmartArt connected by arrows. Next, they proudly proclaim, “All of our company’s ERPs have web services, and so does our MDM. Integration is trivial!”

    If only real life were that simple. R&D groups are frequently organized very differently from the business units they serve. Too many R&D people do not appreciate the complexity that results from that misalignment. Even factors such as the number of time zones and languages within a business unit can be a stumbling block to roll-outs, one that is completely ignored (until the roll-out is compromised), especially by US-based personnel who are accustomed to three time zones and one language in a very large market.

    For any enterprise software project involving R&D at a large company, one of my first actions is to pull the last three to five years of the company’s annual reports and look at its business units with an eye toward the geographical distribution of research sites, manufacturing plants, the fraction of company-owned vs. non-company-owned manufacturing, scale of revenue/earnings, and rate of change of revenue/earnings. I also look for signs that the company hasn’t stabilized its organizational structure, which is most easily observed in the accounting statements. Obscuring such dysfunction within R&D is much easier. Next, I compare and contrast the finance-driven structure of the organization with the stated R&D structure.

    The really sharp R&D leaders see value in this approach, but I’ve had more R&D leaders than I care to count ask me why such an analysis is important, since R&D is a centralized function. How do I really feel when I get that question? Well, let’s just say that when that happens, I try hard not to express what I am feeling.

    You and I will be in Orange County week after next at the PI OC meeting. At that meeting, I’m chairing a focus-group in the PLM2.0 session on Thursday. Prior to reading this post by you, I was worried that nobody in the audience would grasp the perspectives I plan for us to discuss, including modern data science concepts and why business leaders need to understand them for a successful PLM implementation. After reading your post, I can rest assured that at least one person at the meeting will understand the points that I am trying to make!

  • beyondplm

    Eric, thanks a lot for sharing your insight in the comment! The implementation and level of complexity of the projects you’re talking about is high. It is still beyond the level of the average IT organization that focuses on “controlling” engineering documents and data. But the Amazon example is very relevant.

    Looking forward to meeting you and talking at PI OC.

    Best, Oleg