Blog · manufacturing · IMDS · REACH

The IMDS compliance gap in every shop-floor copilot demo

Why generic chatbots cannot safely answer materials-compliance questions on the assembly line, and the production pattern that ships.

By The Nuviax team

An operator on a European assembly line has a question. A supplier has substituted a component at the last minute. The part fits, the price is right, and the production schedule does not have time to wait. Before the part goes on the line, one question needs an answer: does the new component carry the same IMDS declaration, or has something in the material composition changed that will show up in an audit two years from now?

The AI demo everyone saw last quarter answers this question in two seconds. The AI deployment that can actually ship on the line does not exist yet at most OEMs. The gap between the demo and the deployment is the entire problem.

What IMDS actually requires

The International Material Data System is the industry-accepted repository for materials declarations in the automotive supply chain. Every component that goes into a vehicle carries an IMDS declaration covering the material composition, substance percentages, and regulated-substance disclosures required by the EU End-of-Life Vehicles Directive, the US Toxic Substances Control Act, and the regional equivalents in every market the OEM ships to.

IMDS is not optional. The OEM's compliance team uses IMDS declarations to answer the questions a regulator will ask during a product audit, a recall analysis, or a new-market entry. The declarations flow from supplier to tier-one to OEM. When a supplier substitutes a component, the IMDS declaration for the new part either exists in the repository or it does not. If it does, the OEM can trace the substance profile. If it does not, the OEM does not know what is on the assembly line.

REACH adds another layer. The EU regulation on Registration, Evaluation, Authorisation and Restriction of Chemicals requires producers to track Substances of Very High Concern in their supply chain. A chemical that was acceptable in a production process last year may land on the REACH candidate list this year, and every product containing it requires a fresh assessment.

The compliance team is not asking the operator to understand all of this. The compliance team is asking the operator not to make a decision that compromises the audit trail.

Why generic chatbots fail here

A generic chatbot, pointed at the internet or at a general-purpose RAG index, will answer the operator's question. It will answer confidently. It will often answer wrong.

Three failure modes repeat across every manufacturing pilot we have seen.

The first is hallucinated declarations. The chatbot is trained to produce plausible text. If the IMDS declaration for the substituted part does not exist in the retrieval index, the model will fabricate a plausible response anyway. The operator, under production pressure, trusts it. The substituted part goes on the line. Two years later, an audit finds a substance profile inconsistent with the declared bill of materials. The OEM has a recall-scope problem or a regulatory finding it cannot trace.

The second is stale regulatory context. REACH candidate lists update twice a year. IMDS data standards update on their own cycle. A chatbot grounded in last quarter's training data will answer with rules that no longer apply. The compliance team finds the drift during the next internal audit, not during the next conversation with the operator.

The third is wrong point of use. Compliance-specialist tools do exist. They live in the compliance team's dashboards, require training to use, and are not available at the shop-floor interface the operator actually uses. The information is right; the interaction is wrong. Operators under production pressure do not leave the line to consult a specialist dashboard. They ask whoever is nearby, or they guess.

All three failures produce the same outcome: the operator makes a decision the audit cannot defend, and the compliance team finds out too late.

The production pattern that ships

The copilot the operator can actually use embeds inside the shop-floor interface that is already in front of them. The manufacturing execution system, the PLM terminal, the tablet station attached to the line. Wherever the operator's eyes already go.

The copilot does not answer from general knowledge. It retrieves from the authoritative sources. PLM for part geometry and bill of materials. IMDS for material composition. The supplier portal for the declaration the supplier submitted. A live regulatory database for the current REACH candidate list and substance restrictions. Every response includes citations to the specific records that justified the answer.
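One way to make that boundary concrete is to write it down as an explicit registry of allowed sources. This is a minimal sketch; the source names, fields, and freshness windows below are illustrative assumptions, not a real IMDS or PLM integration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthoritativeSource:
    """One source the copilot is allowed to retrieve from."""
    name: str          # e.g. "IMDS", "PLM" (hypothetical identifiers)
    provides: str      # what this source is authoritative for
    max_age_days: int  # freshness requirement the compliance team signs off on

# Hypothetical registry: the copilot retrieves from these sources and no others.
SOURCE_REGISTRY = [
    AuthoritativeSource("PLM", "part geometry and bill of materials", max_age_days=1),
    AuthoritativeSource("IMDS", "material composition and declarations", max_age_days=7),
    AuthoritativeSource("supplier_portal", "submitted supplier declarations", max_age_days=7),
    AuthoritativeSource("reach_db", "current SVHC candidate list", max_age_days=30),
]

def allowed(source_name: str) -> bool:
    """A retrieval outside the registry is rejected, never answered from general knowledge."""
    return any(s.name == source_name for s in SOURCE_REGISTRY)
```

The point of the registry is that it is reviewable: a compliance auditor can read four lines of data and know exactly what the copilot can and cannot cite.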

When the retrieval turns up an IMDS declaration for the substituted part, the copilot returns a structured answer: the part has a declaration, the material composition matches within a specified tolerance, no SVHC substances are present, the operator is clear to proceed. The citation links back to the IMDS record ID so the supervisor or the compliance team can verify in two clicks.

When the retrieval does not turn up a declaration, the copilot does not guess. It returns an escalation: the part does not have an IMDS declaration on file with the expected version, this needs compliance review before use, here is the supplier contact and the specific missing artifact. The copilot does not decide. It routes.
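The answer-versus-escalate split above can be sketched as a small decision function. The record shape, field names, and tolerance are made up for illustration and do not reflect the actual IMDS schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImdsRecord:
    record_id: str
    composition_delta_pct: float  # deviation from the declared composition
    svhc_present: bool

@dataclass
class CopilotResponse:
    kind: str                # "clear" or "escalate"
    detail: str
    citation: Optional[str]  # IMDS record ID, or None when nothing is on file

TOLERANCE_PCT = 0.1  # illustrative composition-match tolerance

def check_substitution(record: Optional[ImdsRecord]) -> CopilotResponse:
    # No declaration on file: route to compliance, never guess.
    if record is None:
        return CopilotResponse(
            kind="escalate",
            detail="No IMDS declaration on file; compliance review required before use.",
            citation=None,
        )
    # Declaration exists but fails policy: escalate, citing the record that failed.
    if record.composition_delta_pct > TOLERANCE_PCT or record.svhc_present:
        return CopilotResponse(
            kind="escalate",
            detail="Declaration found but outside policy; compliance review required.",
            citation=record.record_id,
        )
    # Declaration matches within tolerance, no SVHCs: clear, with a verifiable citation.
    return CopilotResponse(
        kind="clear",
        detail="Declaration matches within tolerance; no SVHC substances present.",
        citation=record.record_id,
    )
```

Note that every branch that returns an answer also returns the record ID it answered from; the only branch without a citation is the one that refuses to answer.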

When the regulatory context has shifted, the copilot says so. If a substance in the current material composition has landed on the REACH candidate list since the last compliance review, the operator sees the flag before the part goes on the line, not two quarters later during the next internal audit.
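The regulatory-drift check reduces to a date comparison: flag any substance that joined the candidate list after the part's last compliance review. The substance names and listing dates below are invented for illustration, not real candidate-list entries:

```python
from datetime import date

# Hypothetical candidate list: substance -> date it was added.
CANDIDATE_LIST = {
    "substance_a": date(2023, 1, 17),
    "substance_b": date(2024, 6, 27),
}

def drift_flags(composition: list[str], last_review: date) -> list[str]:
    """Substances that became SVHC-listed after the part's last compliance review."""
    return [
        s for s in composition
        if s in CANDIDATE_LIST and CANDIDATE_LIST[s] > last_review
    ]
```

A part last reviewed in January 2024 that contains `substance_b` gets flagged before it goes on the line, because the listing postdates the review.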

What operational sign-off looks like

The compliance team does not sign off on the model. They sign off on the retrieval boundary. The sources the copilot is allowed to retrieve from, the freshness requirements on each source, the escalation paths when a source is unavailable, the policy that governs when the copilot answers versus when it defers.
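What the compliance team signs off on can be written down as data rather than model weights. A hypothetical policy document, with every name below an illustrative assumption, might look like:

```python
# Hypothetical retrieval-boundary policy: this, not the model,
# is what the compliance team reviews and signs off on.
RETRIEVAL_POLICY = {
    "allowed_sources": ["PLM", "IMDS", "supplier_portal", "reach_db"],
    "freshness_days": {"PLM": 1, "IMDS": 7, "supplier_portal": 7, "reach_db": 30},
    "on_source_unavailable": "escalate_to_compliance",
    "answer_only_if": [
        "declaration_on_file",
        "composition_within_tolerance",
        "no_new_svhc_since_last_review",
    ],
}

def may_answer(checks_passed: set[str]) -> bool:
    """The copilot answers only when every policy condition holds; otherwise it defers."""
    return set(RETRIEVAL_POLICY["answer_only_if"]) <= checks_passed
```

The sign-off artifact is a short, diffable document: when the policy changes, the change is visible in review, which is exactly the property model weights lack.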

The operator does not trust the model. They trust the citation. Every answer carries a link back to a verifiable record. When a supervisor asks "why did the line run that part," the answer is a copilot interaction ID and an audit trail of retrieved sources, not "the AI said it was fine."

The plant manager does not measure chatbot accuracy. They measure escalation rate, time to answer, compliance-team query volume, and audit-finding frequency on materials composition. These are the metrics that existed before the copilot. The copilot either moves them in the right direction or it does not ship on that line.

The broader pattern

This is not a manufacturing-specific problem. The same structure shows up across every regulated industry. The shop-floor operator is the FNOL adjuster in insurance, the clinician ordering prior authorization, the trader asking the trading copilot for a hedge recommendation, the underwriter pricing a commercial risk.

In every case, the generic chatbot looks great in the demo. The production system needs the authoritative source boundary, the citation-backed answer, the explicit escalation path, the freshness monitoring on the retrieval sources, and the measurement framework that the compliance function already uses for non-AI tools.

The separation is not between AI and non-AI. It is between a copilot that lives inside the authority structure and a copilot that lives outside it. The first one ships. The second one demos.

Next step

If you are running, or preparing, an AI pilot on a European assembly line, an architecture review maps your current materials-compliance obligations and your authoritative-source inventory, identifies the three questions operators most often escalate, and produces a findings document your plant manager and compliance team can act on together.

Book an architecture review →
