
Will PLM crunch untapped data in manufacturing organizations?

Oleg
24 January, 2013 | 2 min read

Do you remember the golden era of desktop search? I remember the first time I had a chance to run Google Desktop on my computer. The most inspiring moment was seeing documents and emails that I had completely forgotten about. Today, desktop search solutions are not as popular as before. Our personal digital life has moved to the cloud. Application search, such as Outlook search and others, has improved significantly (thanks to open search solutions reused by many vendors). The focus of “data crunch” has moved from single desktop solutions to cloud and mobile devices. Despite the huge promise of enterprise search solutions, the majority of them struggle to provide an efficient, reliable and cost-effective solution that can help an organization capture and search through massive amounts of digital data. Focused search solutions are more efficient, and we can see them coming from enterprise software vendors.

However, it doesn’t solve the problem of the huge amount of existing data in organizations. I’ve been reading the Crowdshifter article Behold The Untapped Big Data Gap. It presents some data from an IDC study. Here is an interesting quote:

…article reported that 23% of data within the digital universe of 2012 could be useful for big data collection and analysis purposes if tagged. However, there is a huge gap in the amount that has been tagged versus the amount that remains without semantic enrichment. Only 3% has been tagged and only .5% has been analyzed.

Source: IDC/EMC.

Manufacturing organizations are desperately looking for ways to improve their decision management processes. Leveraging the existing data in an organization can be an interesting approach. I can bring many examples from the PLM space where data about change management history, maintenance, suppliers, etc. can help to make better decisions. For the moment, the majority of that information is stored in application silos and cannot be used in an easy way. This data can easily become digital garbage, similar to last year’s papers on your desk and to old documents and emails on your computer before the desktop search era.

What is my conclusion? Analyzing data is a tough job. It requires computing resources, time, investment and smart algorithms. A Google-style laundry list of results won’t be helpful. New methods of data crunching and data discovery need to be developed. With only .5% of data analyzed and 3% of data tagged, we have a huge potential to tap into. Just my thoughts…

Best, Oleg
