Last year, my attention was caught by a CIMdata article – IBM Forms New Watson Group to Meet Growing Demand for Cognitive Innovations. Interest in cognitive computing is growing these days, and you can find plenty of interesting material about it on the IBM Watson website.
Cognitive computing is the simulation of human thought processes in a computerized model. It involves self-learning systems that use data mining, pattern recognition, and natural language processing to mimic the way the human brain works.
The following passage from the CIMdata article caught my attention:
IBM Watson Analytics allows users to explore Big Data insights through visual representations, without the need for advanced analytics training. The service removes common impediments in the data discovery process, enabling business users to quickly and independently uncover new insights in their data. Guided by sophisticated analytics and a natural language interface, Watson Analytics automatically prepares the data, surfaces the most important relationships and presents the results in an easy to interpret interactive visual format.
Data discovery is a tricky topic. As I mentioned in my blog last week – PLM cannot drain product data swamps. The problem with PLM is in fact related to the limitations of data modeling and the ability to capture organizational data at large scale. In the long run, that limits the ability to create an environment for product innovation. So, maybe IBM Watson is here to help?
Over the weekend, my attention was caught by The Platform article “The Real Trouble With Cognitive Computing”, about the trouble IBM is having figuring out what to do with the Watson supercomputer. The article explains that IBM folks came up with 8,000 potential experiments for Watson to run, but could pursue only 20 percent of them.
The discussion about a single information model in Watson is something PLM folks can benefit from when thinking about the future of PLM platformization. Here is my favorite passage about Watson:
“The non-messy way to develop would be to create one big knowledge model, as with the semantic web, and have a neat way to query it,” Pesenti tells The Platform. “But that would not be flexible enough and not provide enough coverage. So we’re left with the messy way. Instead of taking data and structuring it in one place, it’s a matter of keeping data sources as they are—there is no silver bullet algorithm to use in this case either. All has to be combined, from natural language processing, machine learning, knowledge representation. And then meshed as some kind of distributed infrastructure.”
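To make the “messy way” Pesenti describes a bit more concrete, here is a minimal sketch of a federated query over heterogeneous sources. It is purely illustrative and not based on Watson’s actual implementation: all names and data are hypothetical, and naive keyword matching stands in for real natural language processing and machine learning. The point is architectural: each source keeps its native shape and exposes a small adapter, and answers are combined afterwards instead of being forced into one big knowledge model first.

```python
# Hypothetical sketch of a federated ("messy") query architecture:
# sources stay as they are; adapters query each one and results are merged.

def query_structured(records, term):
    # Source 1: structured records (e.g. a parts database) kept as-is.
    return [r for r in records if term.lower() in r["name"].lower()]

def query_documents(documents, term):
    # Source 2: unstructured text, searched with naive keyword matching
    # (a stand-in for real natural language processing).
    return [d for d in documents if term.lower() in d.lower()]

def federated_search(term, records, documents):
    # Combine answers from heterogeneous sources without first unifying
    # them into a single schema or knowledge model.
    return {
        "structured": query_structured(records, term),
        "unstructured": query_documents(documents, term),
    }

# Hypothetical sample data.
parts = [{"id": 1, "name": "Bracket assembly"}, {"id": 2, "name": "Hinge"}]
notes = ["Supplier changed for bracket in Q2", "Hinge tolerance updated"]

print(federated_search("bracket", parts, notes))
```

The trade-off is exactly the one the quote names: this approach is flexible and covers more data, but there is no single neat model to query, so the combining layer carries the complexity.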
What is my conclusion? The odds are Watson won’t be a pragmatic technology for PLM vendors to rely on to build the future of PLM platform innovation. However, the giant knowledge model Watson failed to build can be an alert for PLM architects trying to create a holistic model for future PLM platforms. It might not work… The reality is much messier than you think. This is a note to folks making strategic decisions and to PLM innovators. Just my thoughts…
Picture credit: IBM Watson