PLM: from sync to link

Data has an important place in our lives. Shopping lists, calendars, emails, websites, family photos, trip videos, documents, etc. We want our data to be well organized and easy to find. Marketing folks like to use the term "data at your fingertips." However, the reality is just the opposite. Data is messy. We store it in multiple places, we forget the names of documents, and we can hardly control it.

Everything I said above applies to manufacturing companies too. But there it gets even more complicated. Departments, contractors, suppliers, multiple locations and multiple systems. So, data lives in silos – databases, network drives, multiple enterprise systems. In my article – PLM One Big Silo – I talked about organizational and application silos. The data landscape in every manufacturing company is very complex. Software vendors are trying to crush silos by introducing large platforms that can help to integrate and connect information. It takes time and huge cost to implement such a system in a real-world organization, which makes it almost a dream for many companies.

In my view, openness will play a key role in a system's ability to integrate and interconnect. It will help to get access to information across the silos, and it leads to one of the key problems: data sharing and identity. Managing data in silos is a complex task. It takes time to organize data, to figure out how to interconnect it, to organize data reporting and to support data consistency. I covered this in more detail in my PLM implementations: nuts and bolts of data silos article.

Joe Barkai’s article Design Reuse: Reusing vs. Cloning and Owning speaks about the problem of data reuse. In my view, the data reuse problem is real and connected directly to the issue of data silos. I liked the following passage from Joe’s article:

If commonly used and shared parts and subsystems carry separate identities, then the ability to share lifecycle information across products and with suppliers is highly diminished, especially when products are in different phases of their lifecycle. In fact, the value of knowledge sharing can be greater when it’s done out of sync with lifecycle phase. Imagine, for example, the value of knowing the manufacturing ramp up experience of a subsystem and the engineering change orders (ECOs) that have been implemented to correct them before a new design is frozen. In an organization that practices “cloning and owning”, it’s highly likely that this kind of knowledge is not common knowledge and is not available outside that product line.

An effective design reuse strategy must be built upon a centralized repository of reusable objects. Each object—a part, a design, a best practice—should be associated with its lifecycle experience: quality reports, ECOs, supplier incoming inspections, reliability, warranty claims, and all other representations of organizational knowledge that is conducive and critical to making better design, manufacturing and service related decisions.

Unfortunately, the way most companies and software vendors are solving this problem today is just data sync. Yes, data is synced between multiple systems. Brutally. Without a second thought. In the race to control information, software vendors and implementing companies are batch-syncing data between multiple databases and applications. Parts, bills of materials, documents, specifications, etc. Data is moved from engineering applications to manufacturing databases and back. Specifications and design information are synced between OEM-controlled databases and suppliers’ systems. This data synchronization leads to a lot of inefficiency and complexity.

There must be a better way to handle information. To allow efficient data reuse, we need to think more about how to link data together rather than synchronize it between applications and databases. This is not a simple task. An industry that for years took “sync” as the universal way to solve the problem of data integration cannot shift overnight and work differently. But here is the good news. Over the last two decades, web companies have accumulated a lot of experience related to managing huge volumes of interconnected data. The move towards cloud services is creating an opportunity to work with data differently. It will provide new technologies for data integration and data management. It can also open new ways to access data across silos. As a system that manages product data, PLM can introduce a new way of linking information and help to reuse data between applications.
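To make the sync-vs-link distinction concrete, here is a minimal Python sketch. All system names, URIs and part data are invented for illustration; real PLM/ERP integrations are far more involved, but the failure mode of copying versus referencing is the same.

```python
# --- "Sync": each system keeps its own copy of the part record ---
plm_db = {"P-100": {"rev": "B", "mass_kg": 1.2}}   # system of record (hypothetical)
erp_db = {}                                         # downstream system (hypothetical)

def sync_part(part_id):
    """Batch-copy the record; any later change in plm_db leaves erp_db stale."""
    erp_db[part_id] = dict(plm_db[part_id])

sync_part("P-100")
plm_db["P-100"]["rev"] = "C"          # engineering releases a new revision
print(erp_db["P-100"]["rev"])         # prints "B" -> silent drift between silos

# --- "Link": the downstream system keeps only a reference ---
erp_links = {"P-100": "plm://parts/P-100"}   # a URI, not a copy

def resolve(uri):
    """Follow the link back to the system of record (single source of truth)."""
    part_id = uri.rsplit("/", 1)[-1]
    return plm_db[part_id]

print(resolve(erp_links["P-100"])["rev"])    # prints "C" -> always current
```

The design point is that the linked version has exactly one authoritative copy of the data; the cost is that resolving a link requires the source system to be reachable, which is where openness and stable identities come in.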

What is my conclusion? There is an opportunity to move from sync to link of data. It will simplify data management and help to reuse data. It requires a conceptual rethink of how data integration problems are solved by vendors. By providing a “link to data” instead of actually “syncing data”, we can help companies streamline processes and improve the quality of products. Just my thoughts…

Best, Oleg



  • It’s a compelling argument: federation over replication. What has been the barrier up to this point? My guess is APIs and data ownership – most systems aren’t too happy accepting external manipulation, especially when you cross those vendor boundaries. Maybe some kind of integration standard is called for?

  • bausk

    I think this problem will persist as long as API itself, as we know it, is the central paradigm of data interchange.
    Currently, any API is a way for a person to speak to the machine (disregarding that, technically, an API client app plays the role of a stand-in for the person who wrote it). We need to learn how to make machines speak to machines.

  • beyondplm

    Ed, the topic of “federate” vs. “replicate” isn’t new. Companies developed federated approaches 15-20 years ago. I don’t like the data ownership framing, since it often turns into political debates between departments and IT. I prefer to discuss it in the context of data consistency and the availability of data for business processes. I’m not aware of any standards here. You?

  • beyondplm

    Alexander, I guess new paradigms can be developed besides APIs. Think about Linked Data. It is still very early in terms of maturity, but thinking long term, we have to stop syncing data between all repositories and systems. There must be a better way :).
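    [Editor's aside: the Linked Data idea mentioned here can be sketched in a few lines of plain Python. Resources are identified by URIs, facts are subject–predicate–object triples, and a `sameAs` link ties the same part together across two silos. The URIs and predicate names below are invented for the example.]

    ```python
    # Tiny triple store: (subject, predicate, object) facts spanning two silos.
    triples = [
        ("plm://parts/P-100", "hasRevision", "C"),
        ("plm://parts/P-100", "usedIn", "plm://products/X-1"),
        ("erp://items/4711", "sameAs", "plm://parts/P-100"),  # cross-silo link
    ]

    def describe(subject):
        """Collect all known facts about a resource, merging sameAs aliases."""
        facts = [(p, o) for s, p, o in triples if s == subject]
        for s, p, o in triples:
            if p == "sameAs" and o == subject:
                facts += describe(s)   # pull in facts stated about the alias
        return facts
    ```

    With real Linked Data the same idea uses RDF, shared vocabularies, and HTTP-resolvable URIs instead of an in-memory list.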

  • bausk

    Oleg, would you recommend any source on the current state of Linked Data? I would argue that all the key technology is basically already here; what remains are mostly cultural and organizational issues.

  • beyondplm

    Alexander, The obvious start is here – Google will take you forward. There are lots of resources available online.

  • The challenge here is that effective linking and security are diametrically opposed philosophies. An old problem, but a challenge for the future nonetheless.

  • Not aware of any standards either.
    I was thinking of data ownership from a vendor perspective and less from a business perspective. The latter can be changed via culture; the former is more of an IP play on account of the vendors and may limit progress. Maybe some will break from the pack.

  • beyondplm

    Linking can provide a more organized and structured approach to security. What we have today is synchronizing data between systems or export/import via Excel. The latter provides too many points for a security breach. Just my opinion.

  • beyondplm

    Data ownership from a vendor perspective is going to die. This is just a matter of time, in my view. However, discussing data ownership between departments and functions is an interesting perspective.
