Compliance Deep-Dive

BRCGS Audit: Top 5 Non-Conformities and How to Fix Them

22 min read

The average BRCGS audit finds 4.86 non-conformities. Here are the top 5 failures and how to eliminate them before your next assessment.

The BRCGS 2024–25 Annual Report confirms that certification bodies identified an average of 4.86 non-conformities per audit across all BRCGS Food Safety Standard Issue 9 assessments, with 59% of those findings falling within Section 4 (site standards), which covers everything from facility cleanliness to door maintenance and chemical handling. The top single clause, 4.11.1 (premises and equipment cleanliness), generated 4,715 non-conformities alone.

Those numbers matter because BRCGS certification is a commercial gatekeeper: without it, manufacturers lose shelf space in major UK and global retailers. But the audit failure pattern also reveals something the headline statistics miss: two of the top ten NC categories, record control (Clause 3.3.1, ranked #9) and document control (Clause 3.2.1, ranked #7), are entirely about paperwork integrity. These are the exact gaps that automated compliance documentation eliminates without touching a single ceiling tile or door gasket.

Flux treats the sensor as the input device and the compliance pack as the product, so the six documentation layers (Daily Log, SFBB diary, Excursion Reports, EHO Inspection Pack, CQC supplement, and Energy Intelligence) map directly onto the BRCGS clauses that trip up even well-run factories. This pillar breaks down the top non-conformities, explains what assessors actually look for, and shows how each Flux tier closes specific clause gaps.

Use this alongside the EHO Inspection Checklist, the Food Safety Temperature Monitoring pillar, the SFBB Complete Guide, and the Excursion Register Causality Map so every BRCGS corrective action references the same evidence spine your EHO and CQC packs already use.

In this guide

  1. Why this matters to an EHO
  2. Decode the 4.86 non-conformity benchmark
  3. Top five Section 4 failures: what assessors find most often
  4. Record control (Clause 3.3.1): the documentation NC you can eliminate entirely
  5. Document control (Clause 3.2.1): why version drift generates audit findings
  6. Corrective action and root cause (Clause 2.11): the CAPA gap assessors always probe
  7. Internal audit (Clause 3.4): proving you test your own controls
  8. Temperature monitoring and HACCP CCP evidence (Clauses 2.7–2.9)
  9. Tier the BRCGS compliance story: Shield, Command, and Intelligence

Why this matters to an EHO

EHOs and BRCGS assessors enforce different standards, but they test the same underlying capability: can you prove, with immutable evidence, that food was stored, handled, and documented safely? A BRCGS audit that flags Clause 3.3.1 (record control) is telling you the same thing an EHO means when they mark 'confidence in management' as weak: your records are incomplete, inconsistent, or unverifiable. A real-world cold-chain audit case teardown shows exactly how overnight record gaps trigger multiple non-conformities in a single visit.

The 4.86-NC average is not a number to fear; it is a diagnostic. FSNS data shows 59% of all findings sit in Section 4 (site standards), which means the remaining 41% span management commitment (Section 1), hazard analysis (Section 2), documentation (Section 3), product control (Section 5), process control (Section 6), and personnel (Section 7). Record and document control together account for two of the top ten, making them the highest-leverage fixes available to any site that already passes its hygiene walk.

Implementation checklist

  • Map every Flux record ID to the BRCGS clause it satisfies (3.3.1 for records, 3.2.1 for documents, 2.11 for corrective actions) so assessors can trace compliance without asking for supplementary evidence.
  • Print the 4.86-NC benchmark on your pre-audit briefing so site teams understand the statistical baseline and can target below it.
  • Cross-reference BRCGS findings with FHRS 'confidence in management' scoring so the same corrective action satisfies both regulatory frameworks.
  • Surface AUTO-DETECTED vs STAFF ENTRY tags on every record so assessors can distinguish sensor-generated evidence from manual overrides.
  • Cache the latest 72-hour evidence pack offline so assessors visiting sites with limited connectivity still see the full documentation chain.

Decode the 4.86 non-conformity benchmark

BRCGS publishes audit outcome data annually. The 2024–25 report, covering audits conducted under Issue 9, recorded an average of 4.86 non-conformities per audit. That figure spans initial certifications, recertifications, and announced or unannounced visits. It means even well-prepared sites typically leave an audit with five findings to close, and the clock starts immediately because BRCGS requires corrective action evidence within 28 calendar days for major NCs and at the next surveillance for minor ones.

Critically, the distribution is not uniform. Section 4 (site standards) absorbs 59% of all non-conformities, which tells you that physical infrastructure (cleanliness, building fabric, doors, walls, equipment design) generates the majority of findings. But Section 3 (food safety and quality management system) houses the documentation clauses, and a single record-control NC can cascade: if the assessor finds incomplete temperature logs (3.3.1), they will also check whether the document that defines your monitoring procedure is current (3.2.1), whether corrective actions reference the correct record IDs (2.11), and whether internal audits sampled those records (3.4). One weak record spawns four findings.

Implementation checklist

  • Track your site's NC count per audit and trend it against the 4.86 average over three cycles to demonstrate continuous improvement.
  • Break NCs by section so you can tell whether your exposure is infrastructure (Section 4) or documentation (Section 3) — because the fix strategies are completely different.
  • Set a target of zero documentation NCs (Sections 3.2 and 3.3) as the most achievable quick win since automated records eliminate the root cause entirely.
  • Brief assessors at the opening meeting on your record ID scheme so they understand provenance before sampling individual documents.
  • Log every NC closure inside the same record ID system your Flux compliance pack uses so CAPA evidence is available instantly at the next audit.

Top five Section 4 failures: what assessors find most often

The BRCGS 2024–25 report ranks the top five non-conformity clauses, all from Section 4. Clause 4.11.1 (premises and equipment cleanliness) leads with 4,715 findings: product build-up on equipment, unclean surfaces missed during pre-operation inspections, and dust or contaminants in warehouses. Clause 4.6.2 (equipment design and construction based on risk) follows with 3,322 findings. Clause 4.9.1.1 (chemical handling) accounted for 3,284 findings. Clause 4.4.8 (door maintenance) recorded 3,007 findings. And Clause 4.4.1 (wall finishing) rounded out the top five with 2,933 findings.

These are fundamentally infrastructure and housekeeping issues, and no amount of sensor data will fix a peeling ceiling or a dock door with gaps. But the corrective action response to each finding is where Flux adds value. When an assessor writes up Clause 4.11.1, the next question is always 'Show me your cleaning verification records.' That verification evidence lives inside the SFBB diary (AUTO-DETECTED sensor data confirming post-clean temperature recovery, staff sign-off confirming the visual inspection). When the corrective action, verification, and preventive measure share the same record ID as the temperature log, the assessor closes the finding faster because they can see the entire narrative without cross-referencing binders.

Implementation checklist

  • Link cleaning verification records to the same Flux record ID that captures the temperature data for each zone, so assessors can verify post-clean recovery without separate paperwork.
  • Log equipment maintenance schedules and completion evidence against the asset IDs your Daily Log already uses.
  • Attach chemical concentration titration records as signed artefacts inside the SFBB diary so Clause 4.9.1.1 evidence is part of the compliance pack, not a separate lab folder.
  • Photograph door and wall conditions during internal audits and link the photos to the inspection date record ID for Clause 4.4.8 and 4.4.1 traceability.
  • Create a Section 4 pre-audit sweep checklist that maps each clause to the Flux layer where its evidence lives.

Record control (Clause 3.3.1): the documentation NC you can eliminate entirely

Clause 3.3.1 requires sites to 'maintain genuine records to demonstrate the effective control of product safety, legality and quality.' FSNS ranks it as the #9 most common non-conformity across all BRCGS audits. The typical failure pattern is painfully simple: missing timestamps, blank verification sections, no monitor name, incomplete corrective action logs. These are not competence failures: they are system failures. When a record depends on a human remembering to write four data points on a paper form during a busy shift, gaps are inevitable.

Automated records eliminate this root cause. A sensor that fires every five minutes generates a timestamped, calibration-linked, hash-chained reading with no human input required. The AUTO-DETECTED tag proves the reading is sensor-generated. Staff add Action and Verification notes when an excursion occurs, but the baseline record (the one that satisfies Clause 3.3.1) exists independently of anyone remembering to pick up a pen. Shield tier creates this foundation; Command tier extends it into diaries, excursion reports, and inspection packs that all inherit the same completeness guarantee.
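
The hash-chaining described above can be sketched in a few lines. This is a minimal illustration, not the Flux implementation: the field names, the sensor ID FRIDGE-01, and the calibration reference CAL-2024-117 are invented for the example. The idea is simply that each record's hash covers its payload plus the previous record's hash, so editing any historic reading invalidates every link that follows it.

```python
import hashlib
import json
from datetime import datetime, timedelta, timezone

def make_record(prev_hash, sensor_id, reading_c, calibration_ref, ts):
    """Build one tamper-evident reading; the hash covers the payload
    plus the previous record's hash (hypothetical schema)."""
    payload = {
        "sensor_id": sensor_id,
        "reading_c": reading_c,
        "calibration_ref": calibration_ref,
        "timestamp": ts.isoformat(),
        "source": "AUTO-DETECTED",
        "prev_hash": prev_hash,
    }
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return {**payload, "hash": digest}

def verify_chain(records):
    """Recompute every hash and check each link to its predecessor."""
    prev = "0" * 64  # genesis value for the first record
    for rec in records:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

# Five-minute readings, as in the Shield-tier description
start = datetime(2025, 1, 6, 0, 0, tzinfo=timezone.utc)
chain, prev = [], "0" * 64
for i in range(3):
    rec = make_record(prev, "FRIDGE-01", 4.2, "CAL-2024-117",
                      start + timedelta(minutes=5 * i))
    chain.append(rec)
    prev = rec["hash"]

assert verify_chain(chain)
chain[1]["reading_c"] = 9.9   # tamper with a historic reading
assert not verify_chain(chain)
```

The tamper check at the end is the property assessors care about: a retrospective edit to any reading breaks verification for the whole chain, which is what makes the record "genuine" in the Clause 3.3.1 sense.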

Implementation checklist

  • Audit every paper-based record against Clause 3.3.1 requirements (time, date, monitor name, verification) and flag any form where one or more fields are routinely left blank.
  • Replace paper temperature logs with Shield-tier automated records so the timestamp, sensor ID, calibration reference, and reading value are captured without human intervention.
  • Require Action and Verification fields on every staff-generated record so the same completeness standard applies to manual entries as to automated ones.
  • Run monthly internal audits that specifically sample record completeness and log the results inside Flux so the assessor can see your own governance at the opening meeting.
  • Display record completeness rates on the site dashboard — target 100% for automated records and ≥98% for staff-entered records.
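
A completeness rate like the one suggested for the dashboard can be computed with a simple field check. This is a sketch only; the required field names are assumptions, not the actual Flux schema.

```python
def completeness_rate(records, required=("timestamp", "sensor_id",
                                         "reading_c", "monitor")):
    """Fraction of records where every required field is populated."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required)
    )
    return complete / len(records)

# 288 automated readings: every field filled by the system
auto = [{"timestamp": "2025-01-06T00:00Z", "sensor_id": "FRIDGE-01",
         "reading_c": 4.1, "monitor": "AUTO-DETECTED"} for _ in range(288)]

# Staff-entered records: one blank monitor field slips through
manual = auto[:96] + [{"timestamp": "2025-01-06T08:00Z",
                       "sensor_id": "FRIDGE-01",
                       "reading_c": 4.4, "monitor": ""}]

assert completeness_rate(auto) == 1.0    # automated target: 100%
assert completeness_rate(manual) < 1.0   # manual entry is where gaps appear
```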

Document control (Clause 3.2.1): why version drift generates audit findings

Clause 3.2.1 requires a procedure to manage documents that form part of the food safety and quality management system. FSNS ranks it #7 among the most common non-conformities. The failure modes are consistent: lack of identification for controlled documents, incorrect revision dates, missing reasons for revisions, and obsolete versions still in use on the production floor. In practice, a shift supervisor following last year's temperature monitoring SOP because nobody removed the old laminated sheet from the wall is a 3.2.1 finding waiting to happen.

Flux does not replace your document management system, but it reduces the attack surface. When the temperature monitoring procedure references 'five-minute automated readings with hash-chained record IDs,' the procedure stays current because the system itself enforces the method. There is no gap between what the document says and what the operator does. Command tier's inspection pack and SFBB diary automation further reduce document-control risk by auto-generating the evidence artefacts the procedure describes, so the assessor sees the procedure and its output in the same record chain.

Implementation checklist

  • Review every SOP that references temperature monitoring, SFBB diaries, or corrective actions and update them to describe the automated workflow rather than the legacy paper process.
  • Remove obsolete paper forms from production areas and replace them with a QR code linking to the live Flux dashboard so the 'document in use' is always the current version.
  • Log SOP revision history inside your document management system with explicit links to the Flux record IDs that the procedure generates.
  • Include document control spot-checks in monthly internal audits and record results in Flux so assessors see governance, not just compliance.
  • Train new starters on the automated workflow first and only reference paper backups as contingency, reducing the risk of someone defaulting to an obsolete process.

Corrective action and root cause (Clause 2.11): the CAPA gap assessors always probe

Clause 2.11 of BRCGS Issue 9 requires sites to demonstrate that root cause analysis of non-conformities is used to implement permanent improvements and prevent recurrence. Assessors do not just want to see that a temperature excursion was fixed: they want evidence that the root cause was identified, that the corrective action addressed it, that verification confirmed the fix worked, and that a preventive measure was implemented to stop it happening again. The five-step structure (Trigger → Impact → Corrective Action → Verification → Prevention) maps directly onto Clause 2.11 requirements.

The Flux Excursion Register builds this structure automatically. When a sensor detects a temperature breach, the AUTO-DETECTED entry records the trigger and impact. Staff add the corrective action and verification. The system prompts for a preventive measure before closing the record. Every step carries the same record ID, timestamps, and staff attribution, so when the BRCGS assessor samples your last three excursions, they see a consistent, complete, auditable CAPA cycle without requesting additional documentation.
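
The closure gate implied by that workflow can be sketched as a small data structure. This is an illustrative model, not Flux's actual schema: the record ID and field contents are hypothetical, and the only point is that a record cannot close until all five Clause 2.11 steps are present.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExcursionRecord:
    record_id: str
    trigger: str                              # auto-detected breach
    impact: str                               # auto-detected assessment
    corrective_action: Optional[str] = None   # staff entry
    verification: Optional[str] = None        # staff entry
    prevention: Optional[str] = None          # required before closure

    def can_close(self):
        """Clause 2.11 closure gate: all five steps must be present."""
        return all([self.trigger, self.impact, self.corrective_action,
                    self.verification, self.prevention])

rec = ExcursionRecord("EXC-2025-0042",
                      "FRIDGE-01 at 9.4 C for 22 min",
                      "High-risk stock exposed above 8 C")
assert not rec.can_close()                    # trigger/impact alone: open
rec.corrective_action = "Stock moved to FRIDGE-02; door seal replaced"
rec.verification = "FRIDGE-01 back at 3.8 C within 40 min"
assert not rec.can_close()                    # still blocked: no prevention
rec.prevention = "Door seals added to monthly maintenance checklist"
assert rec.can_close()
```

The middle assertion is the one that matters in practice: a fixed-and-verified excursion still cannot close without a preventive measure, which is exactly the gap assessors probe under Clause 2.11.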

Implementation checklist

  • Ensure every Excursion Report follows the five-step structure: Trigger → Impact → Corrective Action → Verification → Prevention, as required by Clause 2.11.
  • Require root cause categorisation (equipment failure, human error, supplier issue, environmental factor) on every excursion so trend analysis is possible across audit cycles.
  • Set verification deadlines (e.g. within 12 hours for temperature excursions) and auto-escalate overdue verifications as CAPA items.
  • Present the last six months of CAPA closures at the BRCGS opening meeting to demonstrate a functioning root cause analysis system.
  • Link BRCGS CAPA evidence to your EHO inspection pack and SFBB diary using the same record IDs so dual-regulation sites never duplicate corrective action documentation.

Internal audit (Clause 3.4): proving you test your own controls

Clause 3.4 requires a scheduled programme of internal audits covering the HACCP plan, prerequisite programmes, and procedures implemented to achieve BRCGS certification. Assessors test this by sampling your internal audit records and checking whether findings led to corrective actions, whether those actions were verified, and whether the audit programme covers every section of the standard at least annually.

Flux supports internal audit evidence in two ways. First, the automated record system means your internal auditors can verify record completeness and data integrity in minutes rather than hours: they query the dashboard for gaps, missing verifications, or overdue CAPA items instead of flipping through binders. Second, the rehearsal logs that form part of the Management Confidence Statement double as internal audit evidence: when supervisors drill the inspection pack twice weekly and log retrieval times, those drills are auditable evidence that the management system is being tested, not just described.

Implementation checklist

  • Schedule internal audits of Flux record completeness monthly and log findings inside the same system so BRCGS assessors can see self-governance at work.
  • Include record integrity checks (hash verification, calibration certificate currency, CAPA closure rates) in every internal audit of HACCP CCPs.
  • Map each internal audit to the BRCGS section it covers and ensure all sections are audited at least once per year.
  • Use Flux rehearsal logs (retrieval times, participants, blockers) as evidence for Clause 3.4 — they prove the management system is operationally tested, not just documented.
  • Escalate any internal audit finding that remains open beyond 28 days as a formal CAPA item with management review.

Temperature monitoring and HACCP CCP evidence (Clauses 2.7–2.9)

BRCGS Issue 9 Clauses 2.7 through 2.9 require documented HACCP plans with identified Critical Control Points, established critical limits, and monitoring procedures capable of detecting loss of control in time for corrective action. For cold-chain operations, the CCP is temperature, the critical limit is the legal threshold (8 °C chilled, -18 °C frozen, 63 °C hot-hold), and the monitoring procedure must generate records that prove compliance was continuous: not just sampled twice daily.

Shield tier's 288 five-minute readings per day per sensor directly satisfy the monitoring requirement. Each reading is hash-chained, calibration-linked, and timestamped: the exact evidence structure BRCGS assessors expect when they sample CCP records. Command tier extends this into automated SFBB diary entries and Excursion Reports so the corrective action evidence required by Clause 2.11 is pre-built. Intelligence tier adds equipment performance data (compressor duty cycles, energy consumption) that supports preventive maintenance arguments under Clause 4.6 (equipment maintenance).
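
The monitoring arithmetic and the tightened action limits can be sketched as follows. The chilled action limit (7 °C) comes from the checklist below; the frozen action limit of -19 °C is an assumption extrapolated from the same "one degree tighter" pattern, and hot-hold (where 63 °C is a minimum, not a maximum) is omitted for brevity.

```python
READING_INTERVAL_MIN = 5
READINGS_PER_DAY = 24 * 60 // READING_INTERVAL_MIN   # 288, as in the text

LIMITS = {  # (action limit, legal limit) in degC; action is set tighter
    "chilled": (7.0, 8.0),
    "frozen":  (-19.0, -18.0),   # assumed, following the chilled pattern
}

def classify(zone, reading_c):
    """Three-state check: act before the legal limit is ever breached."""
    action, legal = LIMITS[zone]
    if reading_c > legal:
        return "LEGAL BREACH"    # open an Excursion Report immediately
    if reading_c > action:
        return "ACTION"          # corrective action inside the safety margin
    return "OK"

assert READINGS_PER_DAY == 288
assert classify("chilled", 4.2) == "OK"
assert classify("chilled", 7.5) == "ACTION"
assert classify("chilled", 8.3) == "LEGAL BREACH"
```

The "ACTION" band is the operational point of setting limits tighter than the law: at 288 readings a day, a drifting fridge is caught and corrected while it is still legally compliant.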

Implementation checklist

  • Ensure every CCP monitoring record references the HACCP plan clause it satisfies so assessors can cross-reference without interpretation.
  • Set critical limits one degree tighter than the legal threshold (7 °C for chilled instead of 8 °C) so corrective action begins before the legal limit is breached.
  • Log CCP monitoring frequency (five-minute intervals, 288 readings/day) in your HACCP plan so the documented procedure matches the actual system output.
  • Link every CCP deviation directly to an Excursion Report with the same record ID so Clause 2.11 corrective action evidence is automatically generated.
  • Present CCP monitoring density (288 vs 2 readings) at the BRCGS opening meeting to establish credibility before the assessor begins sampling.

Tier the BRCGS compliance story: Shield, Command, and Intelligence

Shield (£29/month) eliminates record control findings by replacing manual logs with 288 immutable, hash-chained, calibration-linked readings per day. The Clause 3.3.1 risk drops to near zero because every required data field (time, date, sensor ID, calibration reference, reading value) is captured automatically. Shield also satisfies CCP monitoring requirements (Clauses 2.7–2.9) at a density that makes twice-daily manual checks look negligent by comparison.

Command (£59/month) closes the document control and corrective action gaps. AUTO-DETECTED SFBB diary entries ensure Clause 3.2.1 compliance because the system generates the documentation the procedure describes, leaving no gap between policy and practice. Excursion Reports with five-step CAPA structures satisfy Clause 2.11. Inspection packs tie all evidence to a single record ID so assessors can audit the full narrative without requesting supplementary files. Intelligence (£99/month) extends the evidence into equipment performance (Clause 4.6 maintenance evidence), overnight safeguarding (CQC supplement for dual-regulated sites), and Energy Intelligence ROI that funds the compliance investment.

Implementation checklist

  • Map each Flux tier to the specific BRCGS clauses it addresses and print the mapping on your pre-audit briefing sheet.
  • Quantify avoided NCs per tier: Shield targets zero Clause 3.3.1 findings, Command targets zero Clauses 3.2.1 and 2.11 findings, Intelligence adds equipment maintenance evidence for Clause 4.6.
  • Display tier badges (£29/£59/£99) with go-live dates and assigned owners on every audit preparation document.
  • Present tier ROI in terms assessors understand: NC count reduction, CAPA closure speed, and audit preparation time savings.
  • Share the tier roadmap with your BRCGS certification body so they understand your continuous improvement trajectory before the next audit.

Common mistakes

  • Treating BRCGS audit preparation as a one-week sprint before the assessor arrives instead of maintaining continuous audit-ready documentation through automated records.
  • Focusing exclusively on Section 4 infrastructure findings (59% of NCs) while ignoring Section 3 documentation gaps that are cheaper and faster to eliminate.
  • Closing corrective actions with a description of what was done but omitting root cause analysis, verification evidence, and preventive measures: the three elements Clause 2.11 specifically requires.
  • Using separate systems for BRCGS evidence and EHO/FHRS evidence, which doubles documentation effort and creates inconsistencies that assessors and inspectors both notice.
  • Quoting the 4.86-NC average as though it is a target rather than a baseline: continuous improvement means driving your own NC count below the average, not matching it.

Cut your BRCGS non-conformity count before the next audit

Shield (£29/month) replaces manual temperature logs with 288 immutable five-minute readings per day so record completeness (Clause 3.3.1) is never questioned. Command (£59/month) auto-populates SFBB diaries, reasoning-rich Excursion Reports, inspection packs, and Management Confidence Statements so document control (Clause 3.2.1) and corrective action evidence (Clause 2.11) arrive audit-ready. Intelligence (£99/month) layers the CQC supplement plus Energy Intelligence so equipment performance data and overnight safeguarding evidence share the same tamper-evident record ID your BRCGS assessor already trusts.

FAQ

What is the average number of non-conformities per BRCGS audit?

The BRCGS 2024–25 Annual Report and FSNS analysis both confirm an average of 4.86 non-conformities per audit across all BRCGS Food Safety Standard Issue 9 assessments. This includes initial certifications, recertifications, and both announced and unannounced visits. 59% of findings fall within Section 4 (site standards), with the remaining 41% spread across management commitment, hazard analysis, documentation, product control, process control, and personnel.

Which BRCGS clauses are most commonly failed?

The top five clauses by finding count are all in Section 4: Clause 4.11.1 (cleanliness/hygiene, 4,715 findings), Clause 4.6.2 (equipment design, 3,322), Clause 4.9.1.1 (chemical handling, 3,284), Clause 4.4.8 (doors, 3,007), and Clause 4.4.1 (walls, 2,933). In Section 3, record control (Clause 3.3.1, ranked #9) and document control (Clause 3.2.1, ranked #7) are the most common documentation failures.

How does automated temperature monitoring reduce BRCGS non-conformities?

Automated monitoring eliminates the root causes of Clause 3.3.1 (record control) findings: missing timestamps, blank verification fields, and incomplete corrective action logs. A sensor generating 288 hash-chained readings per day satisfies CCP monitoring requirements (Clauses 2.7–2.9) at a density that manual logs cannot match. Command-tier SFBB diary automation further reduces Clause 3.2.1 (document control) risk by ensuring the evidence the procedure describes is actually generated.

Can the same compliance documentation satisfy both BRCGS and EHO inspections?

Yes. BRCGS Clause 3.3.1 (record control) and FHRS 'confidence in management' both test the same capability: can you produce complete, accurate, verifiable records on demand? When Daily Logs, SFBB diaries, Excursion Reports, and inspection packs share the same record IDs, a single evidence pack satisfies both frameworks. The Section 21 due diligence language in the inspection pack also maps to BRCGS Clause 2.11 corrective action requirements.

How quickly must BRCGS corrective actions be closed?

BRCGS requires evidence of corrective action within 28 calendar days for major non-conformities. Critical NCs may require immediate closure with re-audit. Minor NCs are typically reviewed at the next surveillance audit. Flux's Excursion Register accelerates this timeline by auto-generating the five-step CAPA structure (Trigger → Impact → Corrective Action → Verification → Prevention) within minutes of an event, so the evidence exists before the 28-day clock becomes a concern.
