
Insights · 13 min read

Annex III Decoded: Is Your AI System High-Risk?

Annex III is the list that determines whether your AI tool needs a €30K documentation exercise or a checkbox. We walk through every category and the Article 6(3) escape hatch most companies don't know about.

By Mehmet Köse · Founder, Synapsrix

Annex III to Regulation (EU) 2024/1689 is where “boring” enterprise AI becomes legally heavy. Article 5 gets headlines because prohibited practices sound dramatic, but most Dutch and DACH SMEs will never ship a real-time biometric surveillance system. They will ship or buy HR analytics, customer triage, and credit or insurance decision support. Those are Annex III cases if the intended purpose matches the Annex, unless Article 6(3) removes the high-risk label or the system is out of scope entirely.

This article walks the Annex point by point, explains the Article 6(3) exception with the profiling carve-out, and ends with a decision path you can paste into an internal standard. For deployer obligations once you are high-risk, read the Article 26 checklist.

How to read Annex III legally

Annex III is not a “feature list.” Classification depends on intended purpose communicated by the provider and reasonably foreseeable misuse you enable through configuration. That is why two companies can buy the same SaaS product and end up in different risk tiers: one uses it to draft neutral text; the other wires it into promotion decisions with HRIS identifiers.

When you document classification, capture: business process, population affected, human decision points, inputs (including special categories), outputs (scores, ranks, recommendations), and whether outputs constrain human discretion. Those facts determine whether Annex III applies, not the vendor’s “AI” branding.
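If your classification register lives in code rather than a spreadsheet, a minimal sketch of such a record could look like this (Python; every field name is illustrative, not taken from the Regulation):

```python
from dataclasses import dataclass, field

@dataclass
class ClassificationRecord:
    """One entry in an internal AI classification register (illustrative schema)."""
    system_name: str
    business_process: str               # e.g. "recruitment screening"
    population_affected: str            # e.g. "external job applicants, EU"
    human_decision_points: list[str]    # where a person can still intervene
    inputs: list[str]                   # data fed to the system
    special_category_data: bool         # GDPR Art. 9 data present in inputs?
    outputs: list[str]                  # scores, ranks, recommendations
    constrains_human_discretion: bool   # do outputs gate or rank decisions?
    annex_iii_candidates: list[str] = field(default_factory=list)  # e.g. ["4(a)"]
```

The point of a fixed schema is that every system gets the same facts recorded, so two business units cannot reach different classifications from different evidence.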

Why Annex III matters more than Article 5

Article 5 bans a closed list of practices (subject to narrow exceptions). Compliance is binary: do not build or procure that capability.

Annex III is different: it lists classes of AI systems that are presumptively high-risk under Article 6(2) if they fall under one of the headings. Most SME risk sits in employment, essential services, and customer-facing systems that determine access or evaluate people.

The policy logic is in Recitals 48–63: high-risk systems are those that can adversely affect health, safety, or fundamental rights in sensitive contexts. Annex III operationalises that idea.

The 8 Annex III categories, walked through

1. Biometrics

Annex III point 1 covers remote biometric identification systems (excluding biometric verification whose sole purpose is to confirm that a person is who they claim to be), biometric categorisation according to sensitive or protected attributes, and emotion recognition, in so far as their use is permitted under relevant Union or national law.

Biometrics intersect Article 5 prohibitions quickly. Real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement is a prohibited practice under Article 5(1)(h), subject to narrow listed exceptions. Annex III point 1 is the high-risk path for permitted remote biometric identification uses outside that prohibited band, plus categorisation and emotion recognition as described in the Annex.

SME-scale example: a retail chain trialling face-based queue analytics that categorises shoppers by age band for staffing. If the system performs biometric categorisation in scope, you are not in a “marketing analytics” grey zone — you are in Annex III point 1.

Documentation burden: high-risk system obligations for the provider; deployers must implement Article 26, and Article 50 transparency duties do not replace Chapter III here. Expect technical documentation requests to your vendor and DPIA updates if special categories of data appear.

Operational tip: many “anonymous video analytics” products still process biometric data under GDPR definitions. If your legal team already flagged special-category processing, do not downgrade to “minimal risk” under the AI Act without reconciling both frameworks.

2. Critical infrastructure

Annex III point 2 targets AI intended as a safety component in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity.

Most SMEs do not run a national power grid, but they do operate fleet management, building management, or industrial control integrations where an AI component participates in safety decisions. The question is whether the AI is a safety component within the meaning of Article 3(14) for one of the listed sectors — not whether your company is “critical infrastructure” colloquially.

SME-scale example: a regional logistics operator uses AI to prioritise traffic routing for hazardous materials where failure could affect public safety. If the system is a safety component in scope of the listed sectors, Annex III point 2 applies.

Documentation burden: align vendor risk management evidence with your internal change control; tie monitoring to service-level incidents that could trigger Article 73 serious incident reporting for the provider.

If your OT/IT boundary is messy, map who can change model weights or prompts in production — that person is part of your human oversight story under Article 26(2) even if they sit in operations, not AI.
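One lightweight way to make that mapping auditable is a named-owner registry. A sketch, assuming your change-management tooling can export who holds each capability (all names and addresses hypothetical):

```python
# Hypothetical capability-to-owner map for one production AI system.
# Every way of changing model behaviour should resolve to an identified,
# competent person per Article 26(2), including staff who sit in operations.
PRODUCTION_CHANGE_CAPABILITIES = {
    "update_model_weights": "ml-platform-lead@example.com",
    "edit_system_prompts": "ops-shift-supervisor@example.com",
    "adjust_decision_thresholds": "safety-engineer@example.com",
}

def oversight_gaps(capabilities: dict[str, str]) -> list[str]:
    """Capabilities with no named owner are gaps in your oversight story."""
    return [cap for cap, owner in capabilities.items() if not owner]
```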

3. Education and vocational training

Annex III point 3 covers systems that determine access or admission to education and training, evaluate learning outcomes, assess the appropriate level of education a person will receive, or monitor and detect prohibited behaviour during tests.

Vocational training includes corporate academies when they gate certifications that affect employment. A learning platform that only recommends courses and cannot alter job progression has a weaker Annex III hook than one that blocks promotion until a model says you passed a module.

SME-scale example: a training vendor’s platform that automatically scores professional certifications and gates who may sit the next exam.

Documentation burden: strong overlap with GDPR fairness and Article 22 automated decision rules; add Article 14 human oversight evidence for high-risk design.

If students or employees can appeal model errors through a human with full file access, document that path — it supports both GDPR fairness arguments and Article 14 oversight design.

4. Employment, workers management, and access to self-employment

Annex III point 4 is the common SME flashpoint: recruitment, filtering, task allocation, promotion or termination decision support, and performance or behaviour evaluation.

The Annex text explicitly references targeted job advertisements, screening and filtering applications, and evaluating candidates. The trap is “assistive” wording in sales collateral. If the system ranks candidates or recommends hire/no-hire with only cursory human review, the human is rubber-stamping rather than deciding — Article 14 and Article 26(2) require oversight with real authority and competence.

SME-scale examples:

  • CV screening that ranks applicants (point 4(a)).
  • Shift scheduling that materially affects pay or working conditions through automated task allocation (point 4(b)).
  • Internal “copilots” that score sales staff monthly where scores feed into bonuses (point 4(b) monitoring and evaluation of performance).

Access to self-employment sits in the point 4 heading and matters for platforms matching freelancers to gigs when the AI determines eligibility or priority in ways that resemble recruitment or worker management (points 4(a)–(b)) — classification is fact-specific.

Documentation burden: deployer-heavy. Expect Article 26(7) workplace information, Article 26(11) transparency to individuals, DPIA updates, and works council timing in NL/DE.

5. Access to essential private and public services and benefits

Annex III point 5 includes public-benefit eligibility (point 5(a)), creditworthiness evaluation and credit scoring (point 5(b), except narrow fraud-detection-only cases), risk assessment and pricing for life and health insurance (point 5(c)), and emergency call evaluation and dispatch prioritisation (point 5(d)).

Creditworthiness is explicitly called out; the Annex distinguishes purely fraud-detection use cases from broader scoring. If your model outputs a probability of default or similar for underwriting, assume Annex III unless legal review accepts an Article 6(3) pathway with Article 6(4) documentation.

Insurance paths for life and health risk assessment sit in point 5(c) — do not conflate them with property insurance, which the Annex text does not cover.

SME-scale example: a BNPL fintech’s affordability model that determines whether a consumer receives credit beyond fraud checks.

Documentation burden: financial regulator overlays may already exist; AI Act adds conformity and post-market expectations that are not identical to banking rules.

Emergency services: point 5(d) covers evaluating and classifying emergency calls and dispatching or prioritising emergency first response (police, firefighters, medical aid), plus emergency healthcare patient triage. A private monitoring centre selling AI triage to municipalities is likely the provider; the municipality is the deployer with Article 26 duties to its workforce and the public.

6. Law enforcement

Annex III point 6 lists law-enforcement uses (polygraph-style tools, evaluating the reliability of evidence, assessing the risk of offending or re-offending, profiling in crime analytics). Private SMEs rarely deploy these directly; selling into agencies still needs supply-chain diligence.

If you sell into police forces, expect Article 26(10)-style biometric rules for certain deployments and Article 27 FRIA questions for public bodies. Your customer’s legal basis chain is not your AI Act excuse: provider obligations remain.

7. Migration, asylum, and border control

Annex III point 7 is largely public-sector. Private vendors may still be providers to governments — that is a provider compliance programme, not a deployer SME checklist.

Vendor diligence should map which biometric functions exist, whether Article 5 prohibitions are implicated, and how human review is implemented at borders — a sensitive area for fundamental rights.

8. Administration of justice and democratic processes

Annex III point 8 covers systems that research or interpret facts and law for judicial application, influence elections, or support political campaigning. Private SMEs might touch legal-tech tools; classification must be careful about whether the tool assists a judge versus supports internal corporate counsel (the latter is usually outside point 8).

Election-related tools deserve extra caution: even ancillary analytics can trigger point 8(b) if intended to influence elections, referenda, or voting behaviour. Legal review should not rely on a product manager’s feature name.

The Article 6(3) exception (read this carefully)

Article 6(3) states that an Annex III system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing decision outcomes. The provision then lists four alternative narrow cases (Article 6(3)(a)–(d)): narrow procedural tasks; improving prior human work; detecting decision-making patterns without replacing human assessment; preparatory tasks.

Each carve-out is narrower than vendors imply:

  • (a) Narrow procedural task: think formatting, routing, or scheduling without materially affecting outcomes — not “narrow” because your UI is small.

  • (b) Improving the result of a previously completed human activity: the human outcome exists before the AI runs; the AI refines it. If the AI selects which applicants proceed, that is not “previously completed.”

  • (c) Detecting patterns / deviations: allowed when the system is not meant to replace or influence the prior human assessment without proper human review. If your HR team never reviews flagged outliers, you are not living inside (c).

  • (d) Preparatory task: research or summarisation that feeds a later human decision can qualify — but if the summary frames the decision with selective emphasis, influence questions return.

The second subparagraph is blunt: if the system performs profiling within the meaning of GDPR Article 4(4), it must be treated as high-risk regardless of the four cases.

Provider documentation: Article 6(4) requires a provider who claims non-high-risk to document the assessment before placing on the market or putting into service and to comply with Article 49(2) registration obligations for that declaration pathway.

Honest warning: the Commission must publish guidelines and examples (Article 6(5)). Until then, conservative counsel will still flag many borderline HR tools as high-risk. Your job is not to win a debate — it is to record why a carve-out applies if you rely on one.

Ambiguity alert: “Material influence” on outcomes is not defined with numeric thresholds. Good practice is to document which decisions can still be made without the AI output and what happens if the AI is switched off.

Common grey areas

GitHub Copilot in employment: the coding assistant is not magically “HR Annex III” because developers have jobs. If outputs do not determine recruitment, promotion, or performance scoring, the Annex III hook is usually absent. If you fine-tune or wrap the tool to score engineers against each other for bonuses, you have moved into point 4(c) evaluation territory.

Resume-screening tools: the provider must meet Chapter III for the high-risk system; the deployer must still execute Article 26. The classic procurement mistake is assuming the vendor’s CE marking ends your duties.

HR chatbots: if the bot only schedules interviews, it may fall within Article 6(3)(a), provided it poses no significant risk and performs no profiling. If the bot ranks candidates or extracts traits for filtering, you are squarely in high-risk territory.

Internal IT service desks that triage tickets with AI rarely touch Annex III unless the triage determines employment outcomes (for example, HR case prioritisation affecting investigations). Document the downstream workflow — not the brand on the box.

Marketing personalisation is usually not Annex III unless it intersects democratic processes (point 8) or biometric categorisation (point 1). Still run GDPR checks for profiling, and GDPR Article 22 where automated decisions have legal or similarly significant effects.

| Common SME tool | Most likely classification | Why |
| --- | --- | --- |
| Generic email drafting assistant | Minimal / non–high-risk | No Annex III intended purpose absent special use |
| ATS ranking + automated rejections | High-risk, Annex III point 4 | Influences access to work |
| Customer support bot (first-line Q&A) | Often Article 50 transparency | May still be high-risk if it determines access to services |
| Credit scoring beyond fraud | High-risk, Annex III point 5 | Creditworthiness scoring path |
| Workplace emotion inference from video | Prohibited or high-risk biometric | Article 5 and point 1 interactions — legal review mandatory |

A classification decision tree

Use this as an internal worksheet:

  1. Article 5 screen
    If the system enables a prohibited practice (subject to exceptions), stop — you must not deploy.

  2. Annex I product-safety route (Article 6(1))
    If the AI is a safety component of machinery, toys, medical devices, or another product covered by the Union harmonisation legislation in Annex I and required to undergo third-party conformity assessment, classify via Article 6(1).

  3. Annex III match
    If intended purpose matches a point, assume high-risk under Article 6(2) unless step 4 applies.

  4. Article 6(3) significant-risk test + carve-out
    Document why harm is not significant and which of (a)–(d) applies; confirm no profiling within the meaning of GDPR Article 4(4).

  5. Article 50
    If not high-risk, check Article 50 transparency for listed interactions (for example, chatbots).

  6. Minimal risk
    Voluntary codes only.

ASCII outline:

Article 5 prohibited? ──yes──► Do not deploy
        │no
Article 6(1) Annex I? ──yes──► High-risk (product safety)
        │no
Annex III match? ──no──► Check Article 50 / minimal
        │yes
Article 6(3) + no profiling? ──yes──► Document per Art 6(4)
        │no
High-risk (Annex III) ──► Chapter III + deployer Art 26
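If you automate the worksheet, keep the ordering explicit. A minimal sketch in Python; the boolean inputs are the output of legal and technical review, not something the code decides for you:

```python
def classify(system: dict[str, bool]) -> str:
    """First-pass AI Act classification following the decision path above."""
    if system["article_5_prohibited"]:
        return "Prohibited - do not deploy"
    if system["annex_i_safety_component"]:
        return "High-risk via Article 6(1) (product-safety route)"
    if not system["annex_iii_match"]:
        return "Check Article 50 transparency, else minimal risk"
    if system["gdpr_profiling"]:
        return "High-risk - profiling overrides the Article 6(3) carve-outs"
    if system["article_6_3_carve_out_applies"]:
        return "Not high-risk - document per Article 6(4), register per Article 49(2)"
    return "High-risk (Annex III) - Chapter III plus deployer Article 26"
```

Run it over one dict per catalogued system and you get a first-pass classification map; the profiling check sits before the carve-outs because the second subparagraph of Article 6(3) overrides them.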

What to do when you've identified a high-risk system

  1. Assign a single business owner and a technical owner.
  2. Pull instructions for use and logging capabilities from the provider.
  3. Draft your Article 26 evidence pack before you argue about Annex IV aesthetics.
  4. Integrate procurement: Article 25 value-chain responsibilities matter in disputes.

Annex IV is mostly the provider’s job — but you feel it

Annex IV lists sections of technical documentation for high-risk systems (general description, development design, monitoring, etc.). Deployers rarely author Annex IV, yet Article 26(1) requires operating consistently with instructions for use produced under that regime. When a regulator asks questions, your first phone call may still be internal: show how you constrained inputs, who approved roll-out, and which logs prove monitoring occurred.
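In practice, “logs that prove monitoring occurred” usually means an append-only record of each AI-assisted decision and its human review. A sketch, assuming JSON-lines storage and illustrative field names:

```python
import datetime
import json

def decision_log_entry(system_id: str, ai_output: str,
                       reviewer: str, overridden: bool) -> str:
    """One append-only JSON line evidencing human review of an AI output."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system_id,
        "ai_output": ai_output,
        "human_reviewer": reviewer,
        "human_overrode_ai": overridden,
    })
```

Align the actual fields and retention period with the provider’s instructions for use; the schema above is a starting point, not a regulatory template.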

Deep dive next: Deployer obligations under Article 26.

Scan your portfolio

If you want structured scoring without commissioning a month-long project, run the scanner and export a first-pass classification map. Pair it with legal review on any system touching HR, credit, or insurance — those three categories generate the majority of Annex III exposure in SMEs.

Closing honesty: where guidance still lags

The Regulation is clear on profiling and on the four carve-outs, but borderline “material influence” cases will stay disputed until the Commission’s Article 6(5) guidelines and examples arrive. Your risk committee should treat classification memos as versioned artefacts: when facts change — new integration, new geography, new model — the memo updates.

Also separate security risk from Annex III risk. A customer-support bot can be low on cyber exposure yet high on fundamental rights if it determines access to essential services under point 5. Conversely, a narrowly scoped FAQ bot may only trigger Article 50 transparency.

If you maintain a single enterprise catalogue in Dutch and English, store the intended purpose field in both languages to avoid teams talking past each other — “recruitment assistant” in one business unit may be “workforce analytics” in another.
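A minimal sketch of such a bilingual catalogue entry (the product name and wording are hypothetical):

```python
# Hypothetical catalogue entry: keep the intended purpose in both languages
# so Dutch- and English-speaking teams classify the same system the same way.
catalogue_entry = {
    "system_name": "TalentRank",
    "intended_purpose_en": "Rank external applicants for interview shortlisting",
    "intended_purpose_nl": "Rangschikt externe sollicitanten voor de shortlist",
    "annex_iii_candidates": ["4(a)"],
}
```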