
How Edge computing can help to manage distributed PLM processes

Oleg
24 August, 2016 | 4 min read

[Image: edge computing]

“Single version of truth” is one of the most popular paradigms developed by PLM vendors over the last 10-15 years. The concept is loved by many companies and has many strong points. At the same time, the devil is in the details, and a wrong implementation strategy can diminish the value of the PLM single version of truth. One of the biggest challenges a PLM implementation can face is the distribution of manufacturing companies across the globe and the distribution of product development and manufacturing processes. Replication of data is costly and not always possible. Most existing PLM architectures rely on a variety of data synchronization cycles. The question of how to manage distributed product development processes is on the table for many IT and PLM system managers.

Forbes article Will analytics on the edge be the future of big data brings an interesting perspective on “edge computing”. If you haven’t heard about it, take a look at another article – What is Edge Computing by TechTarget – for additional explanations.

In a nutshell, the idea of edge computing is simple – process data near the data source and optimize traffic by splitting the transfer between data that needs to be synchronized immediately and information that can be transferred later for other purposes such as archiving and record keeping. The following passage gives some examples of how edge computing can be applied to the IoT domain.

Sometimes known as distributed analytics, it basically means designing systems where analytics is performed at the point where (or very close to where) the data is collected. Often, this is where action based on the insights provided by the data is most needed. Rather than designing centralized systems where all the data is sent back to your data warehouse in a raw state, where it has to be cleaned and analyzed before being of any value, why not do everything at the “edge” of the system?

A simple example would be a massive scale CCTV security system, with perhaps thousands or tens of thousands of cameras covering a large area. It’s likely that 99.9% of the footage captured by the cameras will be of no use for the job it’s supposed to be doing – e.g. detecting intruders. Hours and hours of still footage is likely to be captured for every second of useful video. So what’s the point of all of that data being streamed in real-time across your network, generating expense as well as possible compliance burdens?

Wouldn’t it be better if the images themselves could be analyzed within the cameras at the moment they are captured, and anything found to be useless either discarded or marked as low priority, freeing up centralized resources to work on data of actual value?
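
To make the split concrete, here is a minimal sketch in Python of how an edge node might triage incoming records. The priority score, threshold, and record fields are illustrative assumptions, not taken from any real system:

import queue

# Hypothetical edge node: score each record locally, stream the valuable
# ones immediately, and hold the rest for a deferred batch transfer.
URGENT_THRESHOLD = 0.8

class EdgeNode:
    def __init__(self):
        self.deferred = queue.Queue()  # low-priority data, shipped later

    def ingest(self, record: dict) -> None:
        """Process a record near its source and route it by value."""
        if record["score"] >= URGENT_THRESHOLD:
            self.stream_now(record)    # e.g. a frame with a possible intruder
        else:
            self.deferred.put(record)  # e.g. hours of still footage

    def stream_now(self, record: dict) -> None:
        print(f"streaming {record['id']} in real time")

    def flush_deferred(self) -> None:
        """Batch-transfer (or discard) low-priority records off-peak."""
        while not self.deferred.empty():
            print(f"archiving {self.deferred.get()['id']}")

node = EdgeNode()
node.ingest({"id": "frame-001", "score": 0.05})  # still footage
node.ingest({"id": "frame-002", "score": 0.93})  # possible intruder
node.flush_deferred()

The point of the sketch is that only the high-value records ever cross the network in real time; everything else stays at the edge until a cheap moment to move it.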

It made me think about PLM and distributed process management. Modern process management for product development, manufacturing, and the supply chain can apply similar principles to enable data proliferation and distributed data processing. Think about a heavy CAD file located on a desktop. Fast data processing can be applied to extract the valuable data needed for decision making, while the rest of the data is delivered later. Similarly, manufacturing process analytics and supply chain optimization can happen faster by recombining quickly processed data from across the globe. While full synchronization of data can take time, fast processing of data at the edges of a distributed system can improve decision making and speed up processes that previously required a 24-hour cycle.
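
As a sketch of what this could look like for the CAD example, assume a fast local pass that pulls out a few kilobytes of metadata for immediate synchronization while the heavy file itself is queued for a slower transfer. The extract_metadata() helper and its fields are hypothetical, not a real CAD API:

import tempfile
from pathlib import Path

def extract_metadata(cad_file: Path) -> dict:
    """Stand-in for a fast, local metadata pass over a heavy CAD file."""
    return {
        "name": cad_file.name,
        "size_mb": round(cad_file.stat().st_size / 1_048_576, 3),
        # a real pass might add part count, revision, mass properties, ...
    }

def sync(cad_file: Path, send_now, send_later) -> None:
    """Split the transfer: metadata now, the full file asynchronously."""
    send_now(extract_metadata(cad_file))  # kilobytes, available in seconds
    send_later(cad_file)                  # gigabytes, shipped later

# Demo with a throwaway file standing in for a real CAD model.
with tempfile.NamedTemporaryFile(suffix=".step", delete=False) as tmp:
    tmp.write(b"\0" * 2_000_000)
sync(Path(tmp.name),
     send_now=lambda meta: print("synchronized now:", meta),
     send_later=lambda f: print("queued for deferred transfer:", f.name))

With a split like this, a decision maker on another continent can see what changed in seconds, long before the full geometry finishes replicating.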

What is my conclusion? We are coming to the era of a new version of the “single point of truth”. The truth is distributed and processed over time. Companies are going to discover new data processing paradigms relying on the availability of powerful computing devices everywhere and on network computing. Existing PLM platforms are mostly database driven and rely on centralized data processing. New concepts of parallel data processing can combine edge computing with asynchronous data processing and analysis to deliver better process and decision management to manufacturing companies. Just my thoughts…

Best, Oleg

Want to learn more about PLM? Check out my new PLM Book website.

Disclaimer: I’m co-founder and CEO of openBoM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups, and supply chains. My opinion can be unintentionally biased.

Image credit: Forbes article
