CIMdata announced the completion of new research about PLM status and trends. A quick note about it can be found here. A very short summary of the key findings is interesting:
Significant findings show that: (1) Industrial companies continue to struggle to move their PLM implementations beyond traditional product data management (PDM) topics, such as engineering data management, engineering change and change management, configuration management, and workflow management; (2) The value and importance of PLM is still primarily being seen by the engineering function; (3) The pursuit of future topics, such as AI and machine learning, is still more than five years away for most of the survey respondents.
Net-net: PLM is stuck in the past, with PDM controlling the engineering function, and if you think about popular future trends such as AI and machine learning, don’t expect PLM-minded people in industrial companies to think about them.
As much as it might sound like bad news for people thinking about how to innovate in PLM with machine learning, I think it is very much explainable. It all comes down to a single point – data. PLM businesses are very much focused on “data control”. Traditional PLM architectures and companies focus on how to control data, files, and revisions, and not much on how to use data for intelligence. This is where one of the biggest paradigm gaps in the PLM value proposition lies today. Check more here – PLM paradigm shift.
Understanding of engineering and manufacturing data will drive future innovation and the development of intelligent systems. My attention was caught by the Distill article – The Building Blocks of Interpretability. The article speaks about image recognition and covers interpretability techniques. Check the article and think about how to recognize specific patterns in a bill of materials capable of bringing your product design or manufacturing planning into a disastrous place. If you have no time to read the article, just go to the following passage about semantic dictionaries:
Semantic dictionaries are powerful not just because they move away from meaningless indices, but because they express a neural network’s learned abstractions with canonical examples. With image classification, the neural network learns a set of visual abstractions and thus images are the most natural symbols to represent them. Were we working with audio, the more natural symbols would most likely be audio clips. This is important because when neurons appear to correspond to human ideas, it is tempting to reduce them to words. Doing so, however, is a lossy operation — even for familiar abstractions, the network may have learned a deeper nuance. For instance, GoogLeNet has multiple floppy ear detectors that appear to detect slightly different levels of droopiness, length, and surrounding context to the ears. There also may exist abstractions which are visually familiar, yet that we lack good natural language descriptions for: for example, take the particular column of shimmering light where sun hits rippling water. Moreover, the network may learn new abstractions that appear alien to us — here, natural language would fail us entirely! In general, canonical examples are a more natural way to represent the foreign abstractions that neural networks learn than native human language.
By bringing meaning to hidden layers, semantic dictionaries set the stage for our existing interpretability techniques to be composable building blocks. As we shall see, just like their underlying vectors, we can apply dimensionality reduction to them. In other cases, semantic dictionaries allow us to push these techniques further. For example, besides the one-way attribution that we currently perform with the input and output layers, semantic dictionaries allow us to attribute to-and-from specific hidden layers. In principle, this work could have been done without semantic dictionaries but it would have been unclear what the results meant.
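To make the semantic dictionary idea more concrete, here is a toy sketch of its core mechanism: instead of referring to hidden units by meaningless indices, pair each unit with the canonical example that activates it most strongly. The activation numbers and example names below are made up for illustration; a real implementation would use actual network activations over a dataset.

```python
# Toy sketch of a "semantic dictionary": map each hidden unit to the
# canonical example that activates it most strongly, rather than to an index.

def semantic_dictionary(activations, example_names):
    """activations[i][j] = response of hidden unit j to example i.
    Returns, for each unit j, the name of its most-activating example."""
    n_units = len(activations[0])
    dictionary = {}
    for j in range(n_units):
        # Pick the example index with the highest activation for unit j.
        best_i = max(range(len(activations)), key=lambda i: activations[i][j])
        dictionary[j] = example_names[best_i]
    return dictionary

# Three canonical examples, two hidden units (invented numbers):
acts = [
    [0.9, 0.1],   # responses to a "floppy ear" image
    [0.2, 0.8],   # responses to a "rippling water" image
    [0.4, 0.3],   # responses to a "fur texture" image
]
names = ["floppy ear", "rippling water", "fur texture"]

print(semantic_dictionary(acts, names))
# {0: 'floppy ear', 1: 'rippling water'}
```

The point of the sketch is the substitution the article describes: unit 0 is now represented by a floppy ear image rather than the label “unit 0”, which preserves nuance a single word would lose.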
Bringing semantics into the network struck me as a very powerful thing that can allow getting to more specific abstractions. However, it is still not clear to me how such mechanisms can work with models beyond images. At the same time, an ability to recognize a specific data pattern in product information can have a very powerful impact on decision making.
What is my conclusion? Can we trust new models to recognize patterns in product data? I don’t have an answer to this today. But the topic seems interesting and can draw attention. The PLM industry will have to move from traditional engineering-oriented data (file) control into the data intelligence space to help manufacturing companies improve their decision making about part selection, contractor selection, and many other aspects related to engineering and manufacturing. Just think about the ability to recognize a bill of materials with a dependency on a single supplier. By itself, this can be a big deal for many manufacturing OEMs. Data and intelligence will drive the future of PLM development. Just my thoughts.
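The single-supplier example doesn’t even need machine learning to illustrate the value of treating BOM data as something to analyze rather than just control. Here is a minimal sketch, assuming a simplified BOM structure (part number mapped to a list of approved suppliers) that is invented for illustration and does not reflect any particular PLM system’s data model:

```python
# Hypothetical sketch: flag single-supplier risk in a flattened bill of materials.
# The BOM layout (part number -> list of approved suppliers) is an assumption
# made for this example.

def single_supplier_parts(bom):
    """Return parts whose approved-supplier list contains exactly one entry."""
    return {part: suppliers[0]
            for part, suppliers in bom.items()
            if len(suppliers) == 1}

bom = {
    "PCB-100":  ["Supplier A", "Supplier B"],
    "CONN-220": ["Supplier C"],            # single-sourced: supply chain risk
    "CASE-310": ["Supplier A", "Supplier D"],
}

print(single_supplier_parts(bom))
# {'CONN-220': 'Supplier C'}
```

This kind of check is trivial once the data is accessible as data; the harder (and more interesting) patterns are the ones you can’t enumerate by hand, which is where learned models could eventually come in.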
Want to learn more about PLM? Check out my new PLM Book website.
Disclaimer: I’m co-founder and CEO of OpenBOM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups, and supply chains. My opinion can be unintentionally biased.