A blog by Oleg Shilovitsky
Information & Comments about Engineering and Manufacturing Software

Re-thinking ECO: Why Form-Fit-Function Decisions Fail Without Product Memory

Oleg
17 January, 2026 | 10 min read

Form-Fit-Function decisions are often treated as small engineering details within the ECO approval process, the kind of thing that should take minutes to assess before moving on to the “real work.” In the simplified version of the process, the assessment leads to one of two conclusions: keep the revision or create a new part number. But in fact, this part of the process is much bigger than a simple tolerance tweak, a supplier substitution, a labeling change, or a cosmetic adjustment that seems harmless enough when viewed through the lens of a single drawing or CAD model.

And yet, again and again, it is precisely these decisions that quietly cascade into costly rework, compliance failures, recalls, and years of operational complexity that nobody planned for and nobody can easily unwind. Not because engineers are careless, and not because processes are ignored, but because the decision itself was made without access to the full story behind the product.

In many organizations, Form-Fit-Function is treated as a quick judgment call, something that belongs squarely inside engineering and can be resolved by a single person who knows the design well. In reality, it is one of the highest-leverage decision points in product development, sitting at the intersection of engineering intent, manufacturing reality, serviceability, regulatory exposure, and supply chain constraints. When it goes wrong, it rarely fails loudly on day one. It fails quietly, downstream, and at scale.

The Hidden Price Tag of “Seems Fine”

Getting Form-Fit-Function wrong is far more expensive than most companies are willing to admit, partly because the costs rarely show up where the decision was made and partly because they often appear months or years later, long after the original context has faded from memory.

A false negative, declaring a change “FFF-equivalent” when it is not, tends to surface as assembly issues, field failures, warranty claims, or recalls, all of which are orders of magnitude more expensive to fix than addressing the problem during design. Industry benchmarks consistently show that correcting an error in production or in the field costs ten to one hundred times more than fixing it upstream, and a single escaped FFF error can easily reach hundreds of thousands of dollars, or millions in regulated industries where compliance and safety amplify every mistake.

False positives are quieter, but no less damaging. Declaring a change “non-FFF” when it is actually interchangeable leads to unnecessary new part numbers, duplicated bills of material, excess inventory, and long-term ERP and supplier overhead that accumulates year after year. Over time, this creates part proliferation that no one explicitly planned, but everyone has to manage, and for many mid-size manufacturers this silent accumulation adds up to one to three million dollars per year in avoidable lifecycle cost.

What makes this particularly frustrating is that these failures are rarely caused by lack of skill or experience. They happen because Form-Fit-Function decisions are made without a shared understanding of downstream impact, and without a reliable way to remember what was decided before, why it was decided, and under which assumptions the decision made sense at the time.

When FFF Meets Reality: Interchangeability and Traceability

Form-Fit-Function is often treated as the deciding criterion for whether to revise a part or issue a new part number, but experienced practitioners have been pointing out for years that this framing is incomplete. FFF is not wrong, but it is only a subset of a larger decision space that includes interchangeability and traceability, both of which introduce complexity that cannot be captured by geometry alone.

Traceability exposes this limitation immediately. Changing a part from non-serialized to serialized might not alter its geometry or how it fits into an assembly, yet it fundamentally changes how the part behaves across its lifecycle, including how it is tracked, serviced, recalled, upgraded, and regulated. In that sense, traceability is not metadata that can be ignored or added later; it is function, just expressed through business and lifecycle behavior rather than physical form.

If such a change is made after parts have already been ordered or shipped, a new part number is often required, not because the part looks different, but because its business meaning has changed. Without that distinction, traceability collapses downstream even if the CAD models remain identical, and the organization loses the ability to distinguish between units that behave differently in the field.
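To make the distinction concrete, the revise-versus-renumber logic can be sketched as a small decision helper. This is a simplified illustration, not the implementation of any particular PLM system; the field names and rules are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class ChangeAssessment:
    """Hypothetical summary of a proposed part change (illustrative only)."""
    form_changed: bool          # geometry, dimensions, weight
    fit_changed: bool           # interfaces, mounting, clearances
    function_changed: bool      # performance, behavior in the assembly
    traceability_changed: bool  # e.g. non-serialized -> serialized
    interchangeable: bool       # old and new units freely substitutable in the field

def requires_new_part_number(change: ChangeAssessment) -> bool:
    # Classic FFF rule: any change to form, fit, or function breaks
    # interchangeability and forces a new part number.
    if change.form_changed or change.fit_changed or change.function_changed:
        return True
    # The point above: traceability is also "function," expressed as lifecycle
    # behavior. A serialization change breaks interchangeability downstream
    # even when the CAD model is identical.
    if change.traceability_changed:
        return True
    # Even geometrically identical parts need a new number if units
    # cannot be freely substituted in the field.
    return not change.interchangeable

# A labeling-only change to an interchangeable part: a revision is enough.
cosmetic = ChangeAssessment(False, False, False, False, True)
# Adding serialization to an already-shipped part: new part number.
serialized = ChangeAssessment(False, False, False, True, True)
print(requires_new_part_number(cosmetic))    # False -> keep revision
print(requires_new_part_number(serialized))  # True  -> new part number
```

The sketch makes one thing visible that a geometry-only check hides: two of the three paths to a new part number have nothing to do with the CAD model.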

I found a very insightful article by Martijn Dullaart arguing that Form-Fit-Function alone is insufficient, which explains why his question “Is Form-Fit-Function doomed?” resonated so strongly across the community of people discussing PLM topics.

The problem is not that FFF no longer matters, but that it has been stripped of the broader context in which it was always meant to operate, and forced into workflows that were never designed to handle its real implications.

Approval Workflows Cannot Think for You

At this point, it becomes clear that the issue is not discipline or compliance. Traditional Engineering Change Order systems assume that the right decision will emerge if tasks are routed correctly, approvals are collected, and the correct boxes are checked. But Form-Fit-Function decisions do not fail because someone skipped a step or ignored a process.

They fail because the right people did not share the same understanding at the same time, and because the reasoning behind the decision was never preserved in a way that could be revisited, challenged, or reused.

FFF decisions routinely require input from engineering, manufacturing, quality, supply chain, service, and sometimes regulatory teams, each of which holds a different piece of the truth. No single role sees all dependencies, and no single document captures all implications. Yet we continue to force these decisions through linear, approval-driven workflows that were designed to close tasks, not to support judgment under uncertainty.

This is not an implementation flaw; it is a structural limitation.

Decomposition Creates the Illusion of Control

In his LinkedIn article, Patrick Hillberg, Ph.D., articulated a deeper systems problem that explains why these failures keep repeating even in organizations with mature processes and experienced teams. In complex systems, decomposition creates dysfunction when context and memory are lost: breaking systems into isolated parts and decisions into isolated steps gives the illusion of control while hiding the interactions where real risk lives.

Failures like the GM Ignition Switch were not caused by a single bad decision or a lack of rules, but by fragmented knowledge and the absence of shared system-level understanding. No one person was irresponsible, yet the system as a whole failed to surface the interactions that mattered most.

Form-Fit-Function failures follow the same pattern. When decisions are decomposed into isolated tasks, routed independently, and approved without shared memory, the organization loses the ability to reason about the product as a system, even when every individual step appears to be executed correctly.

The Missing Ingredient: Product Memory

The recurring pattern behind all of this is not lack of tooling, lack of standards, or lack of best practices. It is the absence of product memory.

Product memory is not just data stored in a database. It is the accumulated record of what decisions were made, why they were made, which alternatives were considered and rejected, which assumptions were accepted, and which risks were consciously taken. It is the difference between knowing that a part changed and knowing why it changed, under what conditions the change was acceptable, and where those conditions might no longer hold.
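As a rough sketch of the difference between an outcome record and a memory record, consider the following data structure. Everything here is illustrative, including the field names and the sample entry; no existing PLM schema is implied.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class DecisionRecord:
    """Illustrative 'product memory' entry: the outcome plus the reasoning."""
    part_number: str
    decision: str                     # e.g. "keep revision" or "new P/N"
    rationale: str                    # why this decision was made
    alternatives_rejected: List[str]  # what was considered and dropped
    assumptions: List[str]            # conditions under which it was valid
    accepted_risks: List[str]         # risks consciously taken

# An outcome-only system stores the first two fields. Product memory keeps
# the rest, so the decision can be revisited when an assumption no longer holds.
record = DecisionRecord(
    part_number="PN-1042",
    decision="keep revision, bump to Rev C",
    rationale="Supplier substitution; material certified equivalent",
    alternatives_rejected=["issue new P/N and dual-source"],
    assumptions=["no serialized tracking required", "same coating process"],
    accepted_risks=["single-source exposure during transition"],
)

def stale_assumptions(rec: DecisionRecord, still_true: Set[str]) -> List[str]:
    # Flag assumptions that no longer hold, signalling that the old decision
    # should be re-examined rather than silently reused.
    return [a for a in rec.assumptions if a not in still_true]

print(stale_assumptions(record, {"same coating process"}))
# -> ['no serialized tracking required']
```

The useful property is the last function: with assumptions recorded explicitly, “does the old decision still apply?” becomes a question the system can ask, instead of one that depends on who happens to remember.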

Most PLM systems are excellent at recording outcomes, such as revisions, states, and approvals. Very few are designed to preserve memory. When memory is missing, teams are forced to reconstruct context from emails, spreadsheets, meeting notes, and tribal knowledge, precisely at the moment when high-impact decisions must be made quickly.

When product memory is missing, Form-Fit-Function decisions become guesswork, even when everyone follows the process perfectly.

Why AI Without Memory Makes Things Worse

This is also why attempts to “add AI” to PLM systems often disappoint. Without product memory, AI has nothing meaningful to reason over. It can analyze geometry, compare attributes, and classify changes, but it cannot understand why a decision mattered, which risks were acceptable, or which dependencies were invisible at the time.

I explored this problem in more detail in earlier Beyond PLM articles discussing why PLM AI needs product memory, and why approval workflows are fundamentally incompatible with agentic decision support.

Similarly, automating tasks that were never designed to capture reasoning simply accelerates confusion, which is why task re-engineering must come before AI can add real value.

AI cannot fix broken decision environments. It can only amplify whatever structure already exists.

From Approval Workflow to Decision Environment

If Form-Fit-Function decisions are to work as intended, they must be supported by environments that preserve product memory and enable cross-functional reasoning, rather than by workflows that merely route tasks.

This requires a shift in how we think about both BOMs and ECOs.

The bill of material cannot remain a frozen artifact that only changes at release milestones. It must become a sandbox, a place where teams can safely explore alternates, substitutions, and structural changes, test FFF assumptions, and reason about interchangeability and traceability before committing to irreversible outcomes. This is where product memory is built through exploration, not reconstructed after the fact.

Similarly, the ECO cannot remain a routing mechanism whose primary purpose is to move approvals from one inbox to another. It must become a collaborative workspace centered on a shared BOM baseline, where engineering, manufacturing, supply chain, quality, and service work on the same context at the same time, with visibility into dependencies that normally live in different systems. Here, decisions are not just recorded; they are explained and remembered.

This is the direction explored in rethinking ECOs as workspaces rather than workflows, and in moving from approval-centric change processes toward collaborative change exploration.

Where AI Agents Can Actually Help

Once product memory exists, AI agents can finally play a meaningful role. Not as decision oracles, and not as approval engines, but as assistants working inside the same sandbox and workspace as the humans who remain accountable for the outcome.

Agents can surface past Form-Fit-Function decisions, highlight traceability risks, expose hidden dependencies, and recall similar change patterns across products and time. They can help teams ask better questions before committing, rather than attempting to automate judgment after the fact.
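A minimal sketch of what “surfacing past decisions” could mean in practice: given a shared history of decision records, an agent ranks precedents by overlap with the proposed change so reviewers see prior reasoning before deciding. The record schema and tags are hypothetical, chosen only for the example.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class PastDecision:
    """Illustrative record of a prior FFF decision (hypothetical schema)."""
    part_number: str
    change_tags: Set[str]  # e.g. {"supplier-substitution", "coating"}
    outcome: str           # what was decided
    rationale: str         # why it was decided

def recall_similar(history: List[PastDecision],
                   tags: Set[str],
                   min_overlap: int = 1) -> List[PastDecision]:
    # Rank past decisions by tag overlap with the proposed change, so the
    # precedents (and their reasoning) surface before a new decision is made.
    scored = [(len(d.change_tags & tags), d) for d in history]
    scored.sort(key=lambda pair: -pair[0])
    return [d for score, d in scored if score >= min_overlap]

history = [
    PastDecision("PN-88", {"supplier-substitution"}, "kept revision",
                 "certified equivalent material"),
    PastDecision("PN-91", {"serialization"}, "new P/N",
                 "traceability change after shipment"),
]
for d in recall_similar(history, {"supplier-substitution", "coating"}):
    print(d.part_number, "->", d.outcome)
# prints: PN-88 -> kept revision
```

Nothing here automates the judgment; the agent only narrows the search over shared memory, which is exactly the assistant role described above.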

Accountability remains human. Judgment remains human. But intelligence becomes cumulative instead of ephemeral, because it is grounded in shared memory rather than isolated analysis.

What is my conclusion? 

Thinking about Form-Fit-Function forced me to reflect on our ability to make complex decisions about product changes without product memory, and on how rarely we create environments where everyone who needs to be involved can reason together using the same data context.

I now see FFF decisions as part of a high-leverage business process, not a simple approval step. When systems record only outcomes and discard reasoning, teams are left guessing under pressure, even when they follow the process perfectly. The result is inconsistency, hidden risk, and avoidable cost.

As we rethink BOM management and change processes, and explore how AI can support something as critical as the ECO, the answer is not stricter workflows or faster approvals. The real opportunity lies in collaborative decision environments that preserve and expose product memory, bring cross-functional perspectives together around a shared baseline, and allow intelligent assistance to work alongside human judgment rather than replace it.

Just my thoughts… 

Best, Oleg 

Disclaimer: I’m the co-founder and CEO of OpenBOM, a collaborative digital thread platform that helps engineering and manufacturing teams work with connected, structured product data, increasingly augmented by AI-powered automation and insights.
