8 min read · Gateco Team

93 Days to EU AI Act Enforcement: The Practical Annex III Mapping for RAG Pipelines

August 2, 2026 is 93 days away. That is the date when Annex III of the EU AI Act — the high-risk AI systems chapter — becomes enforceable. If your AI product makes or influences decisions in employment, credit scoring, insurance, healthcare, education, or law enforcement, you are in scope. The maximum penalty for non-compliance is €15M or 3% of global annual turnover, whichever is higher.

Before getting into the mapping, a few scoping questions matter. First: extraterritorial reach. The AI Act follows the GDPR model — if your system affects people located in the EU, the Act applies regardless of where your company is incorporated. Second: the current legislative state. As of May 2026, the Digital Omnibus Act proposal has not been formally enacted. Treat August 2, 2026 as binding until a new date is officially published.

Which RAG pipelines are in scope?

Not every RAG deployment is high-risk. The trigger is use-case, not technology. If your AI assistant helps HR teams screen candidates, that is Annex III (employment). If it helps loan officers assess creditworthiness, that is Annex III (essential services). If it assists medical teams with clinical research, that is Annex III (healthcare). If your RAG pipeline is an internal productivity tool or a customer service chatbot with no decision-making authority, it is likely out of scope for the high-risk chapter — though the general AI literacy and transparency obligations may still apply.

Article 9 — Risk management system

Article 9 requires an ongoing risk management system throughout the AI system's lifecycle. For a RAG pipeline, the practical implementation is policy versioning: every change to who can access what should be tracked, reviewable, and reversible. Access Simulator dry-runs let teams validate policy changes before activating them. Policy version history with diff view gives compliance teams the documentation trail Article 9 requires.
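The versioning-plus-dry-run pattern can be sketched in a few lines. This is an illustrative model, not Gateco's actual API: `PolicyStore`, `PolicyVersion`, and `dry_run` are hypothetical names, and the rule shape (a per-role classification ceiling) is an assumption.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

@dataclass
class PolicyVersion:
    version: int
    rules: dict       # e.g. {"hr-screeners": {"max_classification": "internal"}}
    author: str
    created_at: str
    active: bool = False

class PolicyStore:
    """Append-only version history: every change is tracked and diffable."""
    def __init__(self):
        self.versions: list[PolicyVersion] = []

    def propose(self, rules: dict, author: str) -> PolicyVersion:
        v = PolicyVersion(
            version=len(self.versions) + 1,
            rules=rules,
            author=author,
            created_at=datetime.now(timezone.utc).isoformat(),
        )
        self.versions.append(v)
        return v

    def diff(self, a: int, b: int) -> dict:
        """Rules that changed between version a and version b."""
        ra, rb = self.versions[a - 1].rules, self.versions[b - 1].rules
        return {k: (ra.get(k), rb.get(k))
                for k in set(ra) | set(rb) if ra.get(k) != rb.get(k)}

def dry_run(candidate: PolicyVersion, recorded_requests: list[dict]) -> list[dict]:
    """Replay past requests against a candidate policy without activating it."""
    results = []
    for req in recorded_requests:
        ceiling = candidate.rules.get(req["role"], {}).get("max_classification", "public")
        allowed = LEVELS[req["classification"]] <= LEVELS[ceiling]
        results.append({**req, "would_allow": allowed})
    return results
```

Replaying recent production traffic through `dry_run` before activation is what turns "policy change" into a reviewable, reversible event rather than a live experiment.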

Article 10 — Data governance

Article 10 requires governance practices covering data sources, collection, processing, and classification. For retrieval-augmented systems, classification labels on every resource are the practical implementation. When a chunk is retrieved, the enforcing layer should know its classification (public / internal / confidential / restricted) and apply the correct ABAC ceiling. Classification that only exists in a spreadsheet but is not enforced at query time satisfies the documentation requirement but fails the spirit of Article 10.
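"Enforced at query time" means the retrieval layer filters on classification before anything reaches the model. A minimal sketch, assuming the four-level lattice above (function and field names are illustrative):

```python
# Ordered classification lattice: higher number = more sensitive.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def enforce_ceiling(chunks: list[dict], principal_ceiling: str) -> list[dict]:
    """Drop retrieved chunks whose classification exceeds the principal's ABAC ceiling."""
    limit = LEVELS[principal_ceiling]
    return [c for c in chunks if LEVELS[c["classification"]] <= limit]

# A retriever returns candidate chunks; the ceiling is applied post-retrieval,
# before the chunks are handed to the generation step.
retrieved = [
    {"id": "doc-1", "classification": "public"},
    {"id": "doc-2", "classification": "restricted"},
    {"id": "doc-3", "classification": "internal"},
]
visible = enforce_ceiling(retrieved, "internal")  # doc-2 is filtered out
```

The point is the placement: classification lives on the resource and is checked on every retrieval, not maintained in a spreadsheet that the pipeline never consults.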

Article 12 — Record-keeping

This is the most concrete requirement for RAG pipelines: automatically log events to allow post-hoc monitoring and traceability. A generic application log that shows HTTP requests is not sufficient. You need retrieval-level logging: principal ID, resource ID, the policy that governed the decision, the decision itself (allow/deny), timestamp, and search mode. In practice that means 25 event types covering every retrieval decision, a 90-day default retention period, and export capability (CSV/JSON) for audit deliverables.
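The fields listed above can be sketched as one structured record per retrieval decision, with CSV and JSON export for audit deliverables. `retrieval_event`, `export_csv`, and `export_json` are hypothetical names for illustration:

```python
import csv
import io
import json
from datetime import datetime, timezone

def retrieval_event(principal_id: str, resource_id: str, policy_id: str,
                    decision: str, search_mode: str) -> dict:
    """One structured event per retrieval decision — the Article 12 unit of record."""
    return {
        "principal_id": principal_id,
        "resource_id": resource_id,
        "policy_id": policy_id,
        "decision": decision,        # "allow" | "deny"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "search_mode": search_mode,  # e.g. "vector" | "keyword" | "hybrid"
    }

def export_csv(events: list[dict]) -> str:
    """Flatten events to CSV for an audit deliverable."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(events[0].keys()))
    writer.writeheader()
    writer.writerows(events)
    return buf.getvalue()

def export_json(events: list[dict]) -> str:
    return json.dumps(events, indent=2)
```

Whatever the storage backend, the test is the same: can an auditor reconstruct, for any past query, who asked, what was returned or withheld, and under which policy version?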

Articles 14 and 15 — Human oversight and robustness

Article 14 requires that natural persons can oversee, intervene in, and halt the AI system. A policy approval workflow — where policy changes go through a human review step before activation — is the RAG-layer implementation. It should be possible to deactivate any policy instantly, with denial reasons always surfaced in the audit log. Article 15 (accuracy, robustness, cybersecurity) maps to fail-closed behavior: a policy evaluation error denies rather than allows, circuit breakers prevent cascade failures, and latency SLOs ensure the authorization layer does not become the bottleneck.
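The fail-closed rule is small enough to show directly. A minimal sketch, assuming the policy is an arbitrary callable and the audit log is an append-only sink (names are illustrative):

```python
def evaluate_fail_closed(policy_fn, request: dict, audit_log: list) -> bool:
    """Fail closed: any evaluation error becomes a deny, and the reason is logged."""
    try:
        allowed = bool(policy_fn(request))
        reason = "allowed by policy" if allowed else "denied by policy"
    except Exception as exc:
        # The unsafe default would be to allow on error; Article 15 robustness
        # maps to the opposite: deny, and record why.
        allowed, reason = False, f"evaluation error: {exc}"
    audit_log.append({
        "request": request,
        "decision": "allow" if allowed else "deny",
        "reason": reason,
    })
    return allowed
```

The design choice is that the error path and the deny path converge on the same outcome, so a crashed policy engine can never widen access — it can only narrow it until a human intervenes.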

The gap no single tool fills

The AI Act requires a system-level approach. Gateco addresses the technical access control and audit trail requirements (Articles 9, 10, 12, 14, 15) for the retrieval layer. You will still need model documentation, a conformity assessment process, and registration in the EU AI database. Think of retrieval-layer compliance as one component of a broader AI governance programme — the one that answers the question "who accessed what knowledge, under what policies, and when?"

The enforcement deadline is 93 days out. If you are building a high-risk AI system and do not yet have retrieval-level access controls and an audit trail, now is the time to address it. The [free 1-hour compliance audit](/contact?source=ai-act-audit) is a fast way to map your specific pipeline to the obligations that apply to it.


Ready to secure your AI retrieval?

Start with the free tier — 100 retrievals/month, no credit card required.