‘Fast data’ and product lifecycle management in distributed environments

The amount of digital data is growing at an exponential rate; we are doubling data every two years. The rate at which we generate data is rapidly outpacing our ability to process and handle it. Manufacturing companies are capturing data from more places than ever these days: design and engineering sources, information about suppliers, the manufacturing environment, and physical products in the field.

Big data is certainly one of the technologies aiming to help companies process and analyze data. However, one of the challenges of this approach is speed. The Entrepreneur article ‘Big Data’ Is No Longer Enough: It’s Now All About ‘Fast Data’ speaks about the problem of speed in data processing. Here is an interesting passage:

On Google alone, users perform more than 40,000 search queries every second. But when every second — or millisecond — can lead to mountains of lost data, each business needs a dedicated platform to capture and analyze data at these increasingly rapid speeds.

John Deere is a company taking full advantage of these models. All new John Deere tractors come equipped with sensors that both inform new product offerings and serve as a benefit to customers.

The data provides insights into the exact use of the equipment, while the technology helps diagnose and predict breakdowns. That means better products and better customer service. For consumers, the sensors offer access to data on where and when to plant crops, the best patterns for plowing, etc. It’s become an entirely new revenue stream for an old company.

It made me think about the challenges of data processing, synchronization, and handover in manufacturing companies. The product lifecycle is distributed these days: engineers are located everywhere, supply chains are global, and manufacturing facilities are optimized for local product manufacturing and delivery models.

Traditional PLM and other data management environments were designed for large manufacturing companies located in one or, at most, a few locations. They largely follow a master-slave model of data synchronization. In many scenarios, the underlying data processing mechanisms run on a 24-hour sync cycle or rely on manual data export/import.
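
To make that pattern concrete, here is a minimal sketch (all names are hypothetical, not taken from any particular PLM product) of the classic batch cycle: the master site exports a full snapshot once a day, and every remote replica overwrites its local state with it.

```python
import time

SYNC_INTERVAL_SECONDS = 24 * 60 * 60  # one sync cycle per day

def export_snapshot(master_db: dict) -> dict:
    """Dump the full product record set from the master site."""
    return dict(master_db)

def import_snapshot(replica_db: dict, snapshot: dict) -> None:
    """Overwrite the replica with the master's snapshot ("master wins")."""
    replica_db.clear()
    replica_db.update(snapshot)

def run_sync(master_db: dict, replica_dbs: list) -> None:
    """The nightly loop: full export, then full re-import at every site."""
    while True:
        snapshot = export_snapshot(master_db)
        for replica in replica_dbs:
            import_snapshot(replica, snapshot)
        time.sleep(SYNC_INTERVAL_SECONDS)
```

Any change made at a remote site between cycles is invisible to the rest of the organization for up to a full day, and is silently lost on the next overwrite if the master never saw it.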

Distributed environments and large volumes of data are introducing new challenges for PLM technologies. Most of these technologies come from the age of the RDBMS and single database servers.
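
A "fast data" alternative, sketched below with the same hypothetical data model, would propagate each change as an event the moment it happens, so sites converge continuously instead of waiting for the nightly cycle. This is only an illustration of the idea, not a reference to any specific PLM architecture.

```python
import queue

# Stands in for a durable, distributed event stream (e.g. a log service).
change_log: "queue.Queue[dict]" = queue.Queue()

def publish_change(item_id: str, field: str, value) -> None:
    """Emit a change event as soon as it happens at any site."""
    change_log.put({"item": item_id, "field": field, "value": value})

def apply_pending_changes(replica_db: dict) -> None:
    """Each site applies events incrementally instead of full re-imports."""
    while not change_log.empty():
        event = change_log.get()
        replica_db.setdefault(event["item"], {})[event["field"]] = event["value"]

# Usage: an engineer's edit becomes visible at another site within seconds.
publish_change("part-1042", "material", "aluminum 6061")
site_db: dict = {}
apply_pending_changes(site_db)
print(site_db)  # {'part-1042': {'material': 'aluminum 6061'}}
```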

What is my conclusion? The manufacturing environment is changing these days. The changes come from the distribution of everything (environments, processes, people, manufacturing locations) combined with an increased amount of data generated by people and devices. Handling processes and data handover in this type of environment will require new technologies and tools. This is a note for PLM system architects and technologists. Just my thoughts…

Best, Oleg

Want to learn more about PLM? Check out my new PLM Book website.

Disclaimer: I’m co-founder and CEO of openBoM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups, and supply chains.
