ChatGPT and Generative AI: A Potential Replay of CAD/PLM’s Data Locking Scenarios?

ChatGPT and similar tools have turned heads in several sectors, opening up new avenues of productivity, creativity, and accessibility. However, recent decisions by Apple and other companies to restrict their employees’ use of tools like ChatGPT and GitHub Copilot raise interesting questions about the technology’s future. The main driver behind these moves is concern about IP leaks. Although it is not widely known, systems like ChatGPT are capable of absorbing prompts (if you’re not familiar, prompts are what you write to tools like ChatGPT) and using them to train the system. One such case, involving Samsung, was widely reported:

The Economist Korea reported three separate instances of Samsung employees unintentionally leaking sensitive information to ChatGPT. In one instance, an employee pasted confidential source code into the chat to check for errors. Another employee shared code with ChatGPT and “requested code optimization.” A third shared a recording of a meeting to convert into notes for a presentation. That information is now out in the wild for ChatGPT to feed on.

The ban and, most importantly, the broader situation around ChatGPT restrictions raise interesting questions. This is clearly an area of concern for any manufacturing or engineering team considering the system. Moreover, it raises questions about how AI tools, including generative AI, generative design, and many others, will be used in the realm of product lifecycle management (PLM) and computer-aided design (CAD).

Restriction Wave

Apple’s decision, alongside other corporate moves, signals a growing apprehension surrounding the use of AI tools like ChatGPT. The primary worry is the potential leakage of proprietary intellectual property (IP). These tools, as they interact with an organization’s data, learn and adapt, inadvertently retaining some level of knowledge that, if misused, could compromise sensitive information.

The PLM and CAD Conundrum

For many years, interoperability and data exchange have been among the biggest problems in the CAD and, later, PLM industry. CAD models, drawings, and other data are famously hard to exchange. Proprietary formats, private database schemas, and closed data sets are only a few examples of how data locking mechanisms work in the CAD/PLM industry. While CAD data exchange has significantly improved in terms of translation capabilities, more complex product data such as Bills of Materials (BOMs), configurations, design intent, and change history remains locked within proprietary PLM database formats developed by vendors.

This inability to share and access critical information across platforms hinders efficient collaboration. However, the same mechanisms can also be seen as a way to protect data from leaking outside the organization. Over the last decade, we’ve seen widespread debates about the security of cloud CAD/PLM tools and the potential of leaking information to cloud PLM providers. That discussion was calming down as companies developed security protocols and best practices for handling confidential information. The use of LLMs (large language models) and tools like ChatGPT brings a new stream of concerns.

Balancing Act Between AI Tools and Data Security

It appears there is a conflict of needs: on one hand, the requirement for data confidentiality, and on the other, the need to use the data for AI tools. Companies are looking to leverage AI to streamline operations and use the intelligence embedded into AI models, but not at the expense of security and exposure of the data to these models.

Digital Transformation

Manufacturing companies are standing in the middle of the fourth industrial revolution, which appears to be an “informational revolution”. There’s an ongoing digital transformation in the sector, with companies seeking to implement a ‘digital thread’ that seamlessly integrates all stages of the product life cycle. The question then is whether AI tools like ChatGPT can be part of this transformation without breaching trust or compromising security. The restrictions by Apple and other leading companies on ChatGPT use are reminiscent of earlier restrictions on social media tools. But the ability of GPT models to absorb knowledge and information is extremely powerful, which raises concerns for many organizations.

Bridging the Gap

The GPT tool ban is an interesting trigger because it brings up the question of the separation of tools and technologies. Remember the decade-old debates about the ‘usage of Facebook in airplane design’? The idea seemed quite naive back in the 2010s, but a decade later, we can see the paradigm of social networking and collaboration implemented in all leading PLM applications.

New methodologies that blend AI models like GPT with other approaches, such as Knowledge Graphs, are emerging. These techniques demonstrate how to enrich LLMs such as GPT without feeding them proprietary prompt data, and how to blend structured queries (based on Knowledge Graphs and other technologies) into the results produced by GPT models.
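To make the idea concrete, here is a minimal sketch of one such pattern. Everything in it is hypothetical (the part numbers, the graph, and the `call_llm` stand-in are invented for illustration): the model receives only a generic request with placeholders, while the structured query against a local Knowledge Graph runs inside the organization and its result is blended into the model’s output after the fact.

```python
# Toy in-memory knowledge graph: (subject, relation) -> object.
# In a real system this would be a proper graph database behind the firewall.
KNOWLEDGE_GRAPH = {
    ("part-1042", "material"): "aluminum 6061",
    ("part-1042", "used_in"): "assembly A-7",
}

def query_graph(subject, relation):
    """Structured lookup that runs entirely inside the organization."""
    return KNOWLEDGE_GRAPH.get((subject, relation))

def call_llm(generic_prompt):
    """Stand-in for a real LLM call. The prompt carries only placeholders,
    never proprietary values, so nothing confidential leaves the company."""
    # A real model would generate this sentence; here it is canned for the sketch.
    return "The material of {PART} is {MATERIAL}, per the product record."

def answer(subject):
    # 1. Ask the model for a generic, anonymized answer template.
    template = call_llm(
        "Write a one-sentence answer about a part's material, "
        "using {PART} and {MATERIAL} as placeholders."
    )
    # 2. Blend the confidential fact in locally, after the LLM call returns.
    return template.format(PART=subject, MATERIAL=query_graph(subject, "material"))

print(answer("part-1042"))
```

The key design choice is the direction of data flow: the structured query result is merged into the model’s output locally, so the proprietary value never appears in any prompt sent to an external service.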

Also, disclosure by GPT tools of whether prompts are saved can become part of security best practices when working with tools like ChatGPT. Methods like these do not transfer the information back to the GPT model, thereby maintaining the integrity and confidentiality of proprietary data. Such combinations could potentially allow for the robust sharing of knowledge without risking IP security.

What is my conclusion?

ChatGPT and AI technologies have demonstrated powerful capabilities in knowledge acquisition, leading to both excitement and apprehension among companies. The saying “What happens in Vegas, stays in Vegas” seems to be increasingly applicable to AI tools: organizations want the benefits of AI’s learning capabilities, but they want the learned information to stay within their own walls.

In the next few years, we’re likely to see innovative ways to process knowledge and build intelligence, balancing the demands for open collaboration, confidentiality, and effective lifecycle management. This balance could see AI, PLM, and CAD working harmoniously together. For now, we watch, learn, and anticipate the next big step in AI-powered productivity. Just my thoughts…

Best, Oleg

Disclaimer: I’m co-founder and CEO of OpenBOM, developing a digital thread platform including PDM/PLM and ERP capabilities that manages product data and connects manufacturers, construction companies, and their supply chain networks. My opinion can be unintentionally biased.
