
Insights · 12 min read

Deployer Obligations Under Article 26: Complete Checklist

Most SMEs are deployers, not providers — and the Article 26 obligations are narrower but sharper than the provider list. Here's the complete checklist, with what evidence an auditor will ask for.

By Mehmet Köse · Founder, Synapsrix

Most European SMEs that “use AI” are deployers: they put high-risk systems into service under their authority without being the provider that trained the model. Article 26 is therefore the article you operationalise in HR, finance, and customer operations — not Annex IV drafting, which sits primarily with providers.

This article walks through Article 26(1)–(12) as set out in the official text of Regulation (EU) 2024/1689, lists the evidence a Dutch market surveillance authority is likely to request, and ends with a 26-item checklist you can paste into your GRC tool. Recital 72 explains why transparency and instructions for use matter for deployers; keep your provider's instruction pack next to this article.

If you searched for “AI Act deployer obligations” or “Article 26 AI Act”, this is the operational translation — citations point back to the OJ text, not paraphrases from slide decks.

Deployer vs provider — why the distinction matters more than the label

Microsoft is the provider for Microsoft 365 Copilot in the standard commercial model: it places the system on the market and puts it into service. Your company is the deployer when you enable the capability for users, configure data access, and integrate outputs into workflows. You did not train the foundation model, but you do decide where it runs, who may rely on it, and which personal data enters the prompt.

That separation is why Article 26 exists as a standalone list. It also explains why procurement clauses that say “vendor handles everything” are incomplete: several duties are by design deployer-side (workplace information, deployer-side logs, your DPIA).

Article 25 clarifies how obligations allocate across the value chain — importers and distributors carry their own duties, but deployers remain accountable for deployment facts your vendor cannot see: internal policies, role-based access, and which business decisions attach to model outputs.

Article 26 in full, walked paragraph-by-paragraph

Below, paragraph numbers match the official English version of Article 26 in Regulation (EU) 2024/1689.

Article 26(1) — Use per instructions (technical and organisational measures)

Deployers must take appropriate technical and organisational measures to ensure they use high-risk systems in accordance with the instructions for use accompanying them, pursuant to Article 26(3) and (6) (the official text cross-refers internally to the organisational-freedom clause and the logging clause).

Evidence: signed configuration standard, change control showing instruction compliance, training records for administrators.

Article 26(2) — Human oversight assignment

Deployers shall assign human oversight to natural persons with the necessary competence, training, authority, and support.

Evidence: role description, reporting line, competence matrix, escalation chart.

Design alignment: read Article 14 alongside Article 26(2) — providers must implement human-machine interface measures; deployers must staff those interfaces with people who can actually override unsafe outputs. If your procedure says “manager may override” but the UI hides the override, your evidence is incoherent.

Article 26(3) — Organisational freedom

Paragraph 3 clarifies that Article 26(1) and (2) do not remove other deployer obligations under Union or national law and preserves freedom to organise resources — not an exemption, but a scope note.

Evidence: short legal memo in file acknowledging collective labour law overlays.

Article 26(4) — Input data relevance and representativeness

Where the deployer controls input data, it shall ensure inputs are relevant and sufficiently representative for the intended purpose.

Evidence: data dictionary, sampling methodology, bias review notes.

If you do not control inputs — for example, a vendor-hosted black box — document that fact and shift evidence to Article 26(1) compliance with instructions and vendor assurances, while still monitoring output quality under Article 26(5).

Article 26(5) — Monitoring, risk, serious incidents

Deployers shall monitor operation in line with the instructions for use and, where relevant, inform providers in accordance with Article 72 (the post-market monitoring context). If use in line with the instructions may still present a risk within the meaning of Article 79(1), they must inform the provider or distributor and the market surveillance authority, and suspend use where appropriate. Serious incidents trigger immediate information to the provider and the subsequent parties in the chain; if the provider cannot be reached, Article 73 applies mutatis mutandis to deployer reporting.

Article 72 is the provider’s post-market monitoring framework; Article 73 is the serious incident reporting mechanism. Deployers should not conflate them — but Article 26(5) explicitly links monitoring to Article 72 for provider information flows.

Evidence: monitoring runbooks, ticket templates, serious incident decision tree, Article 73 clock tracker.

Operationalise suspension: if you cannot suspend because the business refuses, document the risk acceptance and escalate — Article 26(5) expects suspension when the risk test is met.
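The information chain in Article 26(5) is easier to audit when it is encoded as explicit triage logic. The sketch below is illustrative only: the field names, target names, and function are our own, and the legal tests themselves must be taken from the OJ text.

```python
from dataclasses import dataclass

# Hypothetical triage record; field names are illustrative, not from the Regulation.
@dataclass
class IncidentAssessment:
    follows_instructions: bool  # system was used per the instructions for use
    presents_risk: bool         # risk in the sense of the Article 79(1) test
    is_serious: bool            # "serious incident" per the Regulation's definition

def escalation_targets(a: IncidentAssessment) -> list[str]:
    """Sketch of the Article 26(5) information chain; confirm against the OJ text."""
    targets = []
    if a.presents_risk:
        # Risk despite instruction-compliant use: inform provider/distributor
        # and the market surveillance authority, and review suspension.
        targets += ["provider_or_distributor",
                    "market_surveillance_authority",
                    "suspend_use_review"]
    if a.is_serious:
        # Serious incident: inform the provider immediately; if it cannot be
        # reached, Article 73 applies mutatis mutandis to the deployer.
        targets += ["provider_immediately",
                    "article_73_fallback_if_provider_unreachable"]
    return targets
```

Encoding the branches this way makes the "suspend where appropriate" decision a reviewable artefact rather than a verbal judgement call.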

Article 26(6) — Log retention (six months minimum)

Deployers shall retain automatically generated logs under their control for a period appropriate to the intended purpose, at least six months, unless law specifies otherwise.

Evidence: retention policy excerpt, storage location, access controls.

Clarify which logs are “automatically generated” by the high-risk system versus your application logs — many enterprises must merge evidence from both to reconstruct an incident. If logs contain personal data, align retention with GDPR storage limitation — six months is a floor under the AI Act, not necessarily your ceiling if another law requires longer.

Article 26(7) — Workplace deployment information

Before putting into service or using a high-risk AI system at the workplace, employer deployers shall inform workers’ representatives and affected workers in line with Union and national labour rules.

Evidence: notification letters, works council minutes (where applicable).

Netherlands / Germany: works council timing can delay rollouts — treat that as lawful friction, not “IT delay.” Article 26(7) is binding; your programme plan should include consultation windows.

Article 26(8) — Public authority registration check

Public-authority deployers must comply with Article 49 registration obligations; if a system is not registered in the EU database under Article 71, they must not use it and must inform provider/distributor.

Evidence: registration screenshots, procurement clause for public-sector buyers.

Private SMEs often skip this paragraph — verify entity status after reorganisations; a semi-public body may cross thresholds unexpectedly.

Article 26(9) — DPIA alignment with Article 13 information

Deployers shall use Article 13 information to comply with Article 35 GDPR DPIAs (or Article 27 Directive (EU) 2016/680 where applicable).

Evidence: DPIA annex referencing Article 13 pack, update history.

Article 26(10) — Law enforcement post-remote biometric identification

Special rules for post-remote biometric identification in criminal investigations: authorisation requirements, limits on use, documentation in police files, annual reports — only relevant for specific law-enforcement deployers.

Evidence: legal review, national procedural checklist.

Article 26(11) — Informing natural persons subject to use

Without prejudice to Article 50, deployers of high-risk Annex III systems that make decisions or assist decisions relating to natural persons shall inform those persons they are subject to the system.

Evidence: UX copy, email templates, audit of who was informed when.

Coordinate with Article 86 where applicable — individuals may have explanation rights for certain outputs; Article 26(11) is the baseline transparency requirement for being subject to the system.

Article 26(12) — Cooperation

Deployers shall cooperate with competent authorities implementing the Regulation.

Evidence: named contact, authority correspondence log.

Cooperation includes prompt production of logs and procedures — treat regulator requests like DPA audits, with legal and technical leads paired.

What evidence the AP will want

The Dutch Autoriteit Persoonsgegevens (AP) will not ask for generic “AI policies.” Expect artefacts tied to each paragraph: assignments, logs, notifications, DPIA links, and incident records. Align your ISMS evidence to Article 26(6) retention and Article 26(5) monitoring.

For serious incidents, your file should show who decided the incident was “serious,” when the provider was notified, and when market surveillance was contacted — Article 73 defines provider obligations precisely; deployers should mirror the information chain under Article 26(5).

If you operate in financial services, note Article 26(5) and Article 26(6) second subparagraphs: internal governance rules under sectoral law can deem monitoring or logging obligations fulfilled when aligned — you still need traceability to show which instrument maps to which AI system.

Netherlands-specific: who receives reports?

National allocation for market surveillance is still converging; AP is the pragmatic default contact for Dutch organisations when personal data and AI supervision intersect. Do not treat this article as legal advice on competent authority mapping — confirm with counsel when your sector has a sectoral regulator (for example, health or finance).

Integration with existing processes

Existing programme → how to reuse without duplication:

  • ISO 27001: change management and incident tickets → Article 26(5)
  • GDPR DPIA: extend with Article 13 inputs → Article 26(9)
  • Works council process: same cadence as Article 26(7) notifications
  • SOC monitoring: map serious incident triggers to Article 73
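One way to keep that reuse mapping auditable is to store it as data in your GRC tooling. The identifiers below are placeholders for your own control names, not real system references.

```python
# Illustrative mapping of existing control evidence to AI Act duties.
# Keys are placeholder ISMS control identifiers; values are the duties they serve.
EVIDENCE_REUSE = {
    "iso27001.change_management": ["Art. 26(5)"],
    "iso27001.incident_tickets":  ["Art. 26(5)", "Art. 73"],
    "gdpr.dpia":                  ["Art. 26(9)"],
    "works_council.process":      ["Art. 26(7)"],
    "soc.monitoring":             ["Art. 26(5)", "Art. 73"],
}

def duties_covered() -> set[str]:
    """All AI Act duties currently backed by at least one existing control."""
    return {duty for duties in EVIDENCE_REUSE.values() for duty in duties}
```

Diffing `duties_covered()` against the full Article 26 list shows at a glance which paragraphs still lack reused evidence.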

RACI that actually works

Assign one “AI system owner” per high-risk deployment, not a committee. Committees review; owners decide. The owner should be accountable for both the business outcome and risk acceptance, with DPO consultation on Article 26(9) and Article 27 where triggered.

HR-specific notes

For performance management tools, pair Article 26(7) notifications with GDPR transparency — employees should not receive three contradictory emails about the same rollout. Use a single comms pack with legal clearance.

Procurement clauses to verify

  • Instructions for use updates when models change
  • Incident notification to the deployer within contractual timelines, stricter than the statutory ones where needed
  • Log export capabilities for Article 26(6) retention
  • Subprocessor transparency for foundation-model chains (Article 25 alignment)

When Article 26 + Article 27 together trigger a FRIA

Article 27 requires fundamental rights impact assessments (FRIAs) for public bodies and certain private deployers of high-risk systems under Annex III points 5(b) and 5(c) (essential services: creditworthiness assessment and life/health insurance risk pricing). If you are in scope, Article 27(4) lets the FRIA complement an existing DPIA where that DPIA already covers an obligation: update, do not duplicate.

Article 27(1) lists minimum content: the deployer’s processes in which the system will be used, the period and frequency of use, the categories of natural persons and groups likely to be affected, the specific risks of harm, the human oversight measures, and the measures to be taken where risks materialise. That is not a one-page checkbox; it is a structured review that should cross-reference your Article 26(4) data controls and Article 26(11) transparency plan.

Private-sector insurance or banking teams should pay special attention to point 5 FRIA triggers — your product legal team may already produce conduct risk analyses; align them with Article 27 headings to avoid parallel silos.

The incident reporting timeline under Article 73

Article 73 sets serious incident reporting obligations for providers; Article 26(5) makes deployers inform providers and, where provider contact fails, applies Article 73 mutatis mutandis to deployers. Timelines in Article 73 include 15-day reporting in the standard case and shorter windows for fatalities and certain widespread incidents — read the provision verbatim in your runbook rather than paraphrasing from memory.

Do not quote deadlines from blog summaries — paste the current Article 73 text into your internal wiki and version it when the Commission publishes consolidated guidance.

For personal-data breaches, still run GDPR Articles 33–34 where applicable — the clocks are different.

Who is “aware” for reporting purposes?

Operational clarity matters: awareness should attach when your AI owner or security team has actionable information, not when the board minutes are signed. Define out-of-hours escalation for Article 26(5) serious incidents so that a Friday evening model failure does not wait until Monday for provider contact without documented rationale.

Practical mapping for security operations centres

Your SOC may already track availability and integrity incidents. Article 73 “serious incident” is not identical to personal data breach. Train triage staff to ask: did model behaviour cause a safety or fundamental-rights harm pathway? If yes, escalate to the AI owner even when no personal data exfiltration occurred.

Checklist you can actually use

The checklist below maps Article 26 and adjacent duties. It is not exhaustive for providers — if you are also placing systems on the market, add Articles 16–19 tasks.

  • 26(1) Configuration standard matches instructions for use
  • 26(2) Named human overseers with competence matrix
  • 26(3) Labour-law overlays documented
  • 26(4) Input data dictionary + representativeness review
  • 26(5) Monitoring runbook + Article 72/73 escalation
  • 26(6) Log retention ≥6 months (or lawful exception)
  • 26(7) Workplace notifications completed
  • 26(8) Public-sector registration check performed (if applicable)
  • 26(9) DPIA updated with Article 13 inputs
  • 26(10) Law-enforcement biometric rules reviewed (if applicable)
  • 26(11) Individual notices for Annex III decision support
  • 26(12) Authority cooperation contact named
  • Article 27 FRIA completed (if triggered)
  • Article 50 transparency satisfied where relevant
  • Article 73 serious incident template aligned with Article 26(5)
  • GDPR 33–34 breach playbook cross-linked
  • Works council evidence filed (NL/DE)
  • Vendor instruction pack version-controlled
  • Model change review cadence defined
  • Suspension criteria documented for risky outputs
  • Training records for admins and overseers
  • Access controls to logs documented
  • Third-country transfers reviewed for prompts (GDPR)
  • Equality review for HR systems (FRIA/DPIA)
  • Board reporting on high-risk deployments
  • Annual review date scheduled
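For GRC import, the same checklist can be carried as structured data. The rows below are an illustrative subset of the list above; extend them with the remaining items before use.

```python
# Machine-readable rendering of the checklist above (illustrative subset).
# "done" is a placeholder status field for your GRC tool.
CHECKLIST = [
    {"id": "26(1)", "item": "Configuration standard matches instructions for use", "done": False},
    {"id": "26(2)", "item": "Named human overseers with competence matrix", "done": False},
    {"id": "26(6)", "item": "Log retention >= 6 months (or lawful exception)", "done": False},
]

def open_items(checklist: list[dict]) -> list[str]:
    """IDs of checklist rows not yet evidenced."""
    return [row["id"] for row in checklist if not row["done"]]
```

Keeping the list as data lets the annual review diff this year's status against last year's export.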

Downloadable version

If you need this list as a standalone PDF, export it from your workspace after signup or paste it into your GRC tool; the plain-text list above is the canonical checklist for version control.

Start the documentation

Use the scanner to confirm high-risk classification, then open pricing if you need Growth-tier exports that cover Annex IV-style documentation bundles in your workspace. Honest limitation: full technical documentation generation is provider-heavy; deployers should still own Article 26 evidence first.

The signup flow can produce PDF exports for governance packs — check your plan’s export limits before you promise the board a pack date.

Auditor interaction model

When external auditors test Article 26, they should sample live configurations against instructions for use, not policy PDFs alone. Prepare three exemplar tickets: a normal oversight intervention, a near-miss escalation, and a negative test proving suspension can occur. Auditors trained only in ISO may ask for ISO evidence — map each request to Article 26 paragraphs to avoid duplicate controls.

Retention of evidence

Store versioned copies of the instruction PDFs as vendors update them, often quarterly. If you cannot prove which instruction version applied on the go-live date, your Article 26(1) story weakens even if you are compliant today.
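A lightweight way to prove which instruction pack applied at go-live is to hash each version on receipt. This is a sketch; the function name is our own and the storage backend is up to you.

```python
import hashlib
from datetime import date

def record_instruction_version(pdf_bytes: bytes, effective: date) -> dict:
    """Fingerprint a vendor instruction PDF so the version in force on any
    given date can be proven later. Store the returned record immutably."""
    return {
        "sha256": hashlib.sha256(pdf_bytes).hexdigest(),
        "effective_from": effective.isoformat(),
    }
```

Re-hashing the archived PDF at audit time and comparing against the stored record closes the loop.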

Closing reminder on Recital 72

Recital 72 stresses that high-risk systems should be accompanied by meaningful instructions so deployers can evaluate functionality and limits. If your vendor’s pack is marketing fluff, push for Annex A-style technical annexes or refuse go-live for high-risk workflows until documentation quality improves — Article 26(1) is difficult to meet with glossy one-pagers.