Why does PLM need a better search for IoT data?

I’m learning a lot these days about IoT. The number of connected devices around us is growing, and it raises a lot of questions – how to store the data produced by devices, what the value of the information captured from a variety of sensors and machines is, and how to make sense of this information. "Google" is now a word in the Oxford dictionary. The last two decades of the web taught us that good search is one of the fundamental elements of dealing with large amounts of data.

My attention was caught by the GigaOM article Thingful upgrades its search engine for the internet of things. Thingful is a company trying to make IoT searchable in exactly the same way Google allows us to search the web today. You can navigate to the Thingful website and check it out for yourself.

Below you can see some of my experiments with browsing and searching for things in Boston, MA, as well as in my neighborhood. A minimal sketch of how such location-based search might work follows the screenshots.

[Screenshots: Thingful search results around Boston, MA]
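
To give a flavor of what a Thingful-style query involves, here is a minimal sketch of location-based device search in Python. It is my own illustration with made-up device records and a simple bounding-box filter; it is not Thingful's actual API or data model.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str   # human-readable device name
    kind: str   # e.g. "air quality", "energy", "weather"
    lat: float  # latitude
    lon: float  # longitude

# A tiny, made-up index of connected devices (a real index would hold millions)
INDEX = [
    Device("Harbor air sensor", "air quality", 42.3601, -71.0589),
    Device("Back Bay energy meter", "energy", 42.3503, -71.0810),
    Device("Cambridge weather station", "weather", 42.3736, -71.1097),
]

def search(index, keyword, min_lat, max_lat, min_lon, max_lon):
    """Return devices whose kind matches a keyword inside a bounding box."""
    return [
        d for d in index
        if keyword.lower() in d.kind.lower()
        and min_lat <= d.lat <= max_lat
        and min_lon <= d.lon <= max_lon
    ]

# Find air-quality devices roughly around Boston, MA
for d in search(INDEX, "air", 42.30, 42.40, -71.15, -71.00):
    print(d.name, d.lat, d.lon)
```

A real IoT search engine would of course replace the linear scan with a geospatial index and full-text search, but the shape of the query is the same: a keyword plus a location filter over a catalog of things.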

So, why should PLM vendors care? Search is clearly part of the modern user experience in every system. IoT brings many new challenges, such as a gigantic data corpus, a high frequency of updates and new user experience problems. It will probably require a more analytical approach rather than just a “search experience”. I can imagine a designer researching existing product behavior, or requirements coming directly from products in the field.
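
As a rough illustration of the difference between plain search and analytics, here is a small Python sketch of the kind of aggregation a designer might run over field telemetry. The product models, sensors and readings are invented for the example; no particular PLM or IoT platform is implied.

```python
from collections import defaultdict
from statistics import mean

# Made-up telemetry reported by products in the field: (model, sensor, reading)
telemetry = [
    ("pump-A", "temperature", 71.5),
    ("pump-A", "temperature", 88.2),
    ("pump-A", "vibration", 0.4),
    ("pump-B", "temperature", 65.1),
    ("pump-B", "vibration", 1.9),
]

# Aggregate readings per (model, sensor) instead of looking up one record at a time
groups = defaultdict(list)
for model, sensor, value in telemetry:
    groups[(model, sensor)].append(value)

for (model, sensor), values in sorted(groups.items()):
    print(f"{model:8s} {sensor:12s} avg={mean(values):6.2f} "
          f"max={max(values):6.2f} readings={len(values)}")
```

The point is that the value comes from aggregated behavior across the installed base, not from retrieving a single record.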

In my previous posts I discussed the potential of IoT data to blow up traditional PLM databases. The ability to search a huge IoT data corpus is part of that. However, one of the biggest opportunities at the intersection of IoT and PLM is to extend product lifecycle management from the engineering and manufacturing stages into the field of physical products.

What is my conclusion? We are entering a very interesting phase in which PLM has the potential to connect physical objects with virtual models. It changes the way we manage the product lifecycle by extending its reach and its ability to handle more data. It will require better data management and search technologies. Just my thoughts…

Best, Oleg


  • Hi Oleg,
    This is a fascinating topic. I’m surprised that there hasn’t been a ton of commentary thus far from our colleagues on this one… it certainly is a topic that stands out as a business opportunity.

    I’m not certain that I see PLM as the primary repository for the “big data” that can and will result from IoT. There is the potential for so much data that the mountain of information available becomes overwhelming, unmanageable, and consequently, dangerous. By “dangerous”, I mean that not all data is essential, useful or required. If you are flooded with data, huge amounts of time and effort can be expended attempting to shovel the mountain, resulting in negative consequences across an enterprise.

    Therefore, I think that not all data is “good” data. Depending on how you consume and use the information, there’s a lot of stuff out there that is simply noise to the data consumer. So, how do we effectively channel what matters and ignore the rest? Using a library analogy, there are all these books on the shelves (the data), but you need two things to make the library experience work: 1) a system for locating the desired books/magazines/periodicals etc. based on key attributes (a search tool with appropriate search parameters), and 2) an explicit need for the relevant stuff, not the noise. (For most of us, the library experience is driven by a specific need and not some random “let’s see what they have” approach.)

    At the point that we can identify, amass, analyze and assemble the data that we truly need to consume (relative to our individual roles in the enterprise or organization), only then should we interface/integrate this data with other existing business systems, i.e., PLM. In my opinion, it seems that there needs to be some method/tool that fills the gap between big data and these other business systems. I bet someone’s already working on this…

  • beyondplm

    Chris, thanks for sharing your insight. I agree – the topic requires attention and a closer look. It is not clear how IoT data will be used – I can see a variety of scenarios and opportunities. What is clear is that the data volume and velocity are huge, and making sense of that data won’t be easy. Thanks again for your support and comments! Best, Oleg

  • What will engineers do with all the data that flows back from devices? Even today, product development organizations are overwhelmed with data. That is why they want good search tools, to efficiently plow through the data.

    Using data from the devices they design, engineers can develop deep insights about their products. The challenge is developing those insights from the onslaught of data. Search is not the answer. To search and find something, a person needs a hypothesis about what to look for. What if the engineers don’t have starting hypotheses?

    Applying big data techniques can find correlations in data without knowing exactly what to look for. This means the data will show the insights without the need for a hypothesis.

    And yes Chris, someone is working on this… 🙂

  • beyondplm

    Dana, you asked a very good question – what will engineers do with all the data that flows back from devices? I think we need to differentiate between “dumb data” and “smart data”. What you call insight and/or big data technology is probably the answer. I think we need to get away from buzzwords like big data and speak about analytics. Think about Google Analytics, which helps you understand what happens on your website. The same applies here – analytics that helps engineers understand what happens with their products. A small sketch of the kind of no-hypothesis correlation analysis Dana describes follows the comments below.

  • There are probably lessons to be learned at the cutting edge of big-data handling at the Large Hadron Collider (LHC), where sorting through enormous amounts of noise to find relevant data has been a special challenge and pushes our current limits of storage, transport, and retrieval.

  • Hi Ed,
    I can only imagine the volume of data generated by the LHC. You raise excellent points regarding the challenge of storing, transporting and retrieving the data. Data complexity is driven by many factors, with the sheer quantity of data leading the way. Finding that “needle in the haystack” becomes a true challenge when we have multiple haystacks, and the haystacks are not identical. To Dana’s point also, we may not have a precise knowledge and view of the data and are looking for threads; we need tools that can identify what we cannot see, and make this information available to consumers.
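
A minimal sketch of the no-hypothesis correlation analysis Dana mentions, using simulated sensor channels and pandas. The sensor names and the hidden relationship are invented for illustration only; this is not a description of any specific product.

```python
import numpy as np
import pandas as pd

# Simulated telemetry from one product line: each column is a sensor channel
rng = np.random.default_rng(0)
n = 500
temperature = rng.normal(70, 5, n)
vibration = 0.02 * temperature + rng.normal(0, 0.1, n)  # hidden relationship
pressure = rng.normal(30, 2, n)                         # unrelated noise
df = pd.DataFrame({"temperature": temperature,
                   "vibration": vibration,
                   "pressure": pressure})

# Compute all pairwise correlations, then surface the strong ones:
# no prior hypothesis about which sensors are related is needed.
corr = df.corr().abs()
pairs = corr.stack()
strong = pairs[(pairs > 0.5) & (pairs < 1.0)].drop_duplicates()
print(strong)
```

Pairs with a high absolute correlation are candidates for further investigation, even though nobody asked about them in advance.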