A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

When Deepfakes Meet PLM: Using Feedback Loops to Optimize the Requirements BOM

Oleg
31 August, 2025 | 8 min read

Earlier this week, I came across news about Taylor Swift deepfake images and Sydney Sweeney's jeans campaign that used AI-generated content. While the internet was busy debating the authenticity of the Taylor Swift images and arguing about the "jeans" controversy, what caught my attention was something different: the growing, legitimate use of AI-generated marketing. AI-generated images and videos are no longer just about faking reality; they're about creating content, testing reactions, and feeding results back into an optimization loop.

That made me think: maybe "fake" can be useful for PLM, and specifically for requirement management. Can I deepfake a Requirements BOM? In the right context, fake can be useful, especially when it helps us test, learn, and refine ideas before they're ever real.

When Fake Becomes Useful

Deepfake technology has recently captured headlines, but it has been known as "synthetic media" for years. While viral deepfake videos of famous people are often painted as a threat because the technology can generate content that is nearly indistinguishable from reality, the ideas behind synthetic media are useful and already applied in many fields. If we put the controversy aside, deepfakes also demonstrate a powerful principle: the ability to synthesize an artifact that is "good enough" to test an idea, explore a reaction, and measure results.

Marketing teams have already discovered this. They now operate on a rapid feedback loop: produce synthetic or AI-generated content, publish it across multiple channels, track engagement, and feed the results back into the system to generate improved content. The cycle repeats — faster, cheaper, and at scale. This process is no longer about a single “perfect” ad; it’s about continuous iteration.

This got me thinking: what if we applied the same principle to product lifecycle management (PLM)? Specifically, what if requirements could be treated as a kind of deepfake product — a digital placeholder realistic enough to gather customer feedback, and flexible enough to optimize features before a single line of CAD geometry is drawn or a prototype is built?

That’s the twist I want to explore: using a Requirements BOM as the foundation for a new type of feedback loop — one that allows us to optimize product requirements with the same agility that marketing teams optimize digital ads.

Marketing’s Deepfake Acceleration

Let’s first unpack what marketing is doing. For decades, marketing was about expensive campaigns, creative directors, and long production cycles. You created a television ad or a glossy print spread, spent months refining it, and then crossed your fingers once it hit the public. Feedback took weeks or months, and by then it was too late to change.

Today, digital marketing works differently. Synthetic content (sometimes AI-generated, sometimes data-driven) can be produced in minutes. Ads are launched, tested across small segments, and measured instantly. Customer engagement data flows back into the system: clicks, views, likes, shares, comments. Based on this data, content is tweaked, republished, and measured again.

This is a closed feedback loop that runs at high velocity. The faster the loop, the better the optimization. And the results speak for themselves: marketers now rely less on intuition and more on iteration. They don't need to guess what will resonate; they let the loop tell them.

PLM and Requirement Management

Now contrast this with PLM and product requirements. Traditionally, requirements are collected at the beginning of a project. They are structured in documents, spreadsheets, or databases. Engineers interpret them, design products, and eventually prototypes are tested. Customer feedback comes late in the process, often at the prototype or beta testing stage.

The problem is obvious: by the time feedback arrives, many decisions are already locked in. Changing requirements late in the game is costly, disruptive, and often resisted. Requirements, once written, become static commandments rather than dynamic hypotheses.

This is where the deepfake analogy becomes powerful. What if we could create a synthetic product — a realistic representation generated from the requirements themselves — and push it into customer channels early? What if customers could react to the idea of the product, not the finished prototype?

Requirements BOM as the “Deepfake Generator”

In PLM, we often talk about a Bill of Materials (BOM). A BOM is not just a list of parts; it’s a structured model of the product. Similarly, a Requirements BOM is a structured model of what the product should do.

Think of it this way:

  • Traditional BOM = parts, assemblies, materials, geometry.
  • Requirements BOM = functions, features, performance goals, constraints.

If we treat the Requirements BOM as input data, we can generate a synthetic representation of the product. This “deepfake product” doesn’t need to physically exist. It could be a digital twin, a rendered experience, a simulation, or even a simple AI-generated concept visualization. The point is that it’s realistic enough for customers to react to.
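To make the idea of "Requirements BOM as input data" concrete, here is a minimal sketch in Python. The `Requirement` class and the `to_generation_prompt` helper are hypothetical names of my own invention, not part of any real PLM system; the point is only that a structured requirements tree can be flattened into text that a generative model could turn into a concept visualization.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One node in a Requirements BOM: what the product should do."""
    name: str
    kind: str              # "function", "feature", "performance", "constraint"
    description: str
    children: list["Requirement"] = field(default_factory=list)

def to_generation_prompt(req: Requirement, indent: int = 0) -> str:
    """Flatten the Requirements BOM into a text prompt that a generative
    model could use to synthesize a 'deepfake product' visualization."""
    lines = [" " * indent + f"- {req.kind}: {req.name} ({req.description})"]
    for child in req.children:
        lines.append(to_generation_prompt(child, indent + 2))
    return "\n".join(lines)

# A toy example: a consumer gadget described purely by its requirements.
gadget = Requirement(
    "Smart bottle", "function", "track hydration",
    children=[
        Requirement("Battery life", "performance", "30 days per charge"),
        Requirement("Weight", "constraint", "under 300 g"),
    ],
)
print(to_generation_prompt(gadget))
```

The structure mirrors a traditional BOM (parent-child hierarchy), but the nodes carry functions and constraints instead of parts and geometry.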

For example:

  • A new consumer gadget could be “deepfaked” into lifestyle images or interactive demos based on its requirements.
  • A new car interior concept could be “deepfaked” into VR experiences for customers to explore.
  • A new piece of industrial equipment could be “deepfaked” into 3D configurators that let customers preview functionality.

The magic here is not the fidelity of the digital fake. It’s the loop — getting real customer reactions early, measuring what resonates, and feeding that insight back into requirements.

Closing the Feedback Loop: Borrowing from Marketing

Imagine a system that can easily capture requirements into a Requirements BOM in a form that can be translated into a deepfake product with a feedback loop. Imagine this cycle:

  1. Requirements BOM Created — A structured set of product requirements is modeled.
  2. Deepfake Product Generated — Synthetic representations are created directly from requirements.
  3. Marketing Channels Tested — The fake product is shown to potential customers through ads, surveys, or interactive platforms.
  4. Feedback Collected — Customer reactions (clicks, signups, preferences, comments) are measured.
  5. Requirements Optimized — The Requirements BOM is updated to reflect what customers actually want.

This is exactly how marketing optimizes content — and PLM can adopt the same playbook. Instead of waiting for late-stage prototypes, we build agility at the requirements stage.
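The five-step cycle above can be sketched as a simulation. This is an illustrative toy, not a real system: `collect_feedback` is a stand-in for actual channel metrics (clicks, signups, comments), and the priority-update rule is one simple choice among many.

```python
import random

random.seed(42)  # reproducible toy run

# Step 1: candidate requirements, each with a running priority score.
requirements = {"voice control": 0.5, "solar charging": 0.5, "modular design": 0.5}

def collect_feedback(requirement: str) -> float:
    """Stand-in for real market feedback on a synthetic representation
    of this requirement. Here: a random engagement rate in [0, 1)."""
    return random.random()

def run_loop(reqs: dict[str, float], iterations: int = 5, lr: float = 0.3) -> dict[str, float]:
    """Steps 2-5 of the cycle: generate a synthetic view of each requirement,
    measure reactions, and nudge its priority toward the observed data."""
    for _ in range(iterations):
        for name in reqs:
            engagement = collect_feedback(name)           # steps 3-4: test & measure
            reqs[name] += lr * (engagement - reqs[name])  # step 5: optimize the BOM
    return reqs

for name, score in sorted(run_loop(requirements).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")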

Why This Matters for PLM

The implications are huge.

  • Reduced Risk of Failure: Many products fail because they miss customer expectations. Early feedback reduces that risk.
  • Faster Iteration: Requirements can be tested and adjusted without expensive prototypes.
  • Customer-Driven Roadmaps: Instead of guessing what features matter, companies can prioritize based on real data.
  • Bridging Marketing and Engineering: Requirements become a shared artifact between customer insight and product design.
  • Enabling Digital Thread: This process extends the digital thread backward — from physical product design into synthetic requirement testing.

In other words, PLM stops being a one-way street and becomes a two-way conversation with the market.

Challenges and Questions

Of course, this raises challenges.

  • Fidelity vs. Honesty: How realistic should the “deepfake product” be? Is there a risk of over-promising?
  • Data Ownership: Who owns the feedback data? Marketing? Engineering? PLM?
  • Process Integration: How do we integrate feedback loops into existing PLM workflows?
  • Ethics and Transparency: How do we communicate that customers are reacting to synthetic concepts?

These are important questions, but they are solvable. Just as marketing teams figured out how to ethically and effectively use A/B testing, product teams can figure out how to responsibly test requirement-based deepfakes.

The Future: Requirements as Hypotheses

The biggest mental shift is this: requirements are not sacred documents. They are hypotheses. A Requirements BOM is not a static contract; it’s a living system that evolves based on evidence.

By using deepfake-like representations and marketing feedback loops, we can treat requirements as dynamic experiments. Each requirement can be tested, validated, or rejected based on real market data. This turns PLM into a scientific process: propose, test, measure, refine.
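If requirements are hypotheses, the "test, measure, refine" step can borrow standard A/B testing math. A minimal sketch: compare engagement on a concept variant that includes the candidate requirement against a baseline concept, using a two-proportion z-test. The sample numbers are entirely made up for illustration.

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic comparing two conversion rates, using the pooled
    standard error (the classic two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)          # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical test: did the concept with the new feature outperform the baseline?
z = two_proportion_z(conv_a=120, n_a=1000, conv_b=90, n_b=1000)
verdict = "validated" if z > 1.96 else "not yet supported"  # 5% significance
print(f"z = {z:.2f}, requirement {verdict}")
```

In a real pipeline the conversion counts would come from the synthetic-product channels (clicks, signups, survey answers), and the verdict would feed back into the Requirements BOM as evidence for keeping, reworking, or rejecting the requirement.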

In the future, I see PLM systems equipped with built-in capabilities to generate synthetic product experiences directly from requirements. Requirements could be published to customer testing platforms, feedback automatically captured, and analytics fed directly into requirement management dashboards.

This is not science fiction. It’s the logical next step in the evolution of digital thread and product intelligence.

What is my conclusion? 

How do we go from a deepfaked product to real value? Deepfake technology shows us the power of indistinguishable synthesis. Marketing teams prove how effective feedback loops can be when content is continuously tested and optimized. PLM has the opportunity to borrow this playbook and apply it to requirements.

A Requirements BOM can become the foundation for synthetic product experiences: deepfakes of products that don't yet exist. The foundation is a structured requirements model, captured as a BOM, that serves as the context for deepfake product generation. By pushing these requirement-based deepfake views into the market early, companies can collect feedback, optimize requirements, and design products that are better aligned with customer expectations.

In short:

  • Deepfake shows us the art of synthesis.
  • Marketing shows us the power of the loop.
  • PLM can combine both to change the way we define and optimize requirements.

The result? Products that start with customer feedback at the earliest stage, and requirements that evolve as living, data-driven artifacts, not static documents.

Just my thoughts…

Best, Oleg 

Disclaimer: I’m the co-founder and CEO of OpenBOM, a digital-thread platform providing cloud-native collaborative and integration services between engineering tools, including PDM, PLM, and ERP capabilities. With extensive experience in federated CAD-PDM and PLM architecture, I advocate for agile, open product models and cloud technologies in manufacturing. My opinion can be unintentionally biased.
