A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

How Edge computing can help to manage distributed PLM processes

Oleg
24 August, 2016 | 4 min read

[Image: edge-computing]

“A single version of the truth” is one of the most popular paradigms promoted by PLM vendors over the last 10-15 years. The concept is loved by many companies and has many strong points. At the same time, the devil is in the details, and the wrong implementation strategy can diminish the value of a PLM single version of truth. One of the biggest challenges a PLM implementation can face is the global distribution of manufacturing companies and of their product development and manufacturing processes. Replicating data is costly and not always possible, and most existing PLM architectures rely on a variety of data synchronization cycles. How to manage distributed product development processes is a question on the table for many IT and PLM system managers.

The Forbes article Will analytics on the edge be the future of big data brings an interesting perspective on “edge computing”. If you haven’t heard of it, take a look at another article – What is Edge Computing by TechTarget – for additional explanation.

In a nutshell, the idea of edge computing is simple – process data near its source and optimize traffic by splitting the transfer between data that needs to be synchronized immediately and information that can be transferred later for purposes such as archiving and record keeping. The following passage gives some examples of how edge computing can be applied to the IoT domain (a small code sketch of the pattern follows the quote).

Sometimes known as distributed analytics, it basically means designing systems where analytics is performed at the point where (or very close to where) the data is collected. Often, this is where action based on the insights provided by the data is most needed. Rather than designing centralized systems where all the data is sent back to your data warehouse in a raw state, where it has to be cleaned and analyzed before being of any value, why not do everything at the “edge” of the system?

A simple example would be a massive scale CCTV security system, with perhaps thousands or tens of thousands of cameras covering a large area. It’s likely that 99.9% of the footage captured by the cameras will be of no use for the job it’s supposed to be doing – e.g. detecting intruders. Hours and hours of still footage is likely to be captured for every second of useful video. So what’s the point of all of that data being streamed in real-time across your network, generating expense as well as possible compliance burdens?

Wouldn’t it be better if the images themselves could be analyzed within the cameras at the moment it is captured, and anything found to be useless either discarded or marked as low priority, freeing up centralized resources to work on data of actual value?
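To make that filtering-at-the-edge pattern concrete, here is a minimal Python sketch. Everything in it is an illustrative assumption rather than any real camera API: detect_motion() is a crude stand-in for on-device analytics, the 0.2 threshold is arbitrary, and the send/archive callbacks are hypothetical.

```python
# Hypothetical sketch of edge filtering, in the spirit of the CCTV example:
# score each frame on the device itself and split traffic between
# "send now" and "archive later". Nothing here reflects a real camera API.

def detect_motion(frame, previous):
    """Return a 0..1 change score between two frames (lists of pixel values)."""
    if not frame or len(frame) != len(previous):
        return 0.0
    diff = sum(abs(a - b) for a, b in zip(frame, previous))
    return diff / (255 * len(frame))

def process_frame(frame, previous, send_to_center, archive_locally):
    """Decide at the edge whether a frame is worth streaming immediately."""
    if detect_motion(frame, previous) > 0.2:   # assumed "something happened" threshold
        send_to_center(frame)                  # the rare useful footage syncs right away
    else:
        archive_locally(frame)                 # the other 99.9% stays local, shipped later

# Example: a large change between frames gets streamed; a static scene would not.
process_frame([255] * 64, [0] * 64, send_to_center=print, archive_locally=lambda f: None)
```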

That CCTV example made me think about PLM and distributed process management. Modern process management for product development, manufacturing and supply chain can apply similar principles to enable distributed data processing and manage data proliferation. Think about a heavy CAD file located on a desktop. Fast data processing can extract the valuable data needed for decision making, while the rest of the data is delivered later. Similarly, manufacturing process analytics and supply chain optimization can run faster by recombining quickly processed data from across the globe. While a full synchronization of data can take time, fast processing of data at the edges of a distributed system can improve decision making and speed up processes that used to run on a 24-hour cycle.
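As a hedged illustration of that CAD scenario, the sketch below builds a small, decision-ready payload at the edge and defers the heavy file transfer. The extract_part_list() function is a hypothetical stand-in for a real CAD-format reader – no actual CAD format is parsed here – and the payload fields are assumptions for illustration.

```python
import hashlib
import json
import pathlib

# Hypothetical sketch: an edge agent near the desktop extracts the lightweight
# data a remote decision-maker needs from a heavy CAD file and pushes it ahead
# of the full synchronization. extract_part_list() is an assumed stand-in for
# a real CAD reader, not an actual library call.

def extract_part_list(cad_path):
    """Stand-in for parsing a CAD assembly into part records at the edge."""
    return [{"part": "bracket-01", "qty": 4}, {"part": "bolt-M6", "qty": 16}]

def fast_payload(cad_path):
    """Build the kilobyte-sized message that syncs immediately."""
    cad_path = pathlib.Path(cad_path)
    return json.dumps({
        "file": cad_path.name,
        "checksum": hashlib.sha256(cad_path.read_bytes()).hexdigest(),
        "parts": extract_part_list(cad_path),   # kilobytes, not gigabytes
    })

# The full binary file would be queued for a slower, later transfer; meanwhile
# the payload above is already enough for BOM review or supply chain decisions.
```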

What is my conclusion? We are entering the era of a new version of the “single point of truth”. The truth is distributed and processed over time. Companies are going to discover new data processing paradigms that rely on the availability of powerful computing devices everywhere and on network computing. Existing PLM platforms are mostly database driven and rely on centralized data processing. New concepts of parallel data processing can combine edge computing with asynchronous data processing and analysis to deliver better process and decision management to manufacturing companies. Just my thoughts…

Best, Oleg

Want to learn more about PLM? Check out my new PLM Book website.

Disclaimer: I’m co-founder and CEO of openBoM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups and supply chains. My opinion can be unintentionally biased.

Image credit: Forbes article
