EU AI Act Compliance Guide for Spanish Companies 2026
Penalties of up to EUR 35 million. Deadline: August 2, 2026. Spain’s regulator is already active.
Regulation (EU) 2024/1689 — the EU AI Act — is the world’s first comprehensive legal framework for artificial intelligence. It entered into force on August 1, 2024. The prohibited practices provisions have been enforceable since February 2, 2025. The core compliance obligations for high-risk AI systems apply from August 2, 2026. Companies that are not ready face fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
Spain occupies a distinctive position in this landscape. The country was the first EU member state to create a dedicated AI supervisory authority: AESIA (Agencia Española de Supervisión de Inteligencia Artificial), established by Real Decreto 729/2023 and headquartered in A Coruña. This institutional head start means Spanish companies face a more mature enforcement environment than their counterparts in most other EU jurisdictions.
This guide is directed at mid-market companies using AI in employment screening, credit decisions, biometric systems, and customer service automation — precisely the sectors most exposed to high-risk classification under the regulation. The analysis that follows covers the legal framework, the risk classification system, every compliance deadline through 2027, the obligations for high-risk systems, GPAI implications, and a concrete 6-12 month readiness roadmap.
What is the EU AI Act?
Regulation (EU) 2024/1689 on Artificial Intelligence — commonly referred to as the EU AI Act — was published in the Official Journal of the European Union on July 12, 2024, and entered into force on August 1, 2024. It is not a directive requiring national transposition: it is a directly applicable regulation, binding in its entirety in all EU member states without additional legislative steps.
The regulation applies to:
- Providers: organisations that develop or have developed an AI system with the intention of placing it on the market or putting it into service under their own name.
- Deployers: organisations that use an AI system under their own authority, except for personal non-professional use.
- Importers and distributors: organisations that place AI systems from outside the EU on the EU market, or that make AI systems available on the EU market.
The regulation has significant extraterritorial reach. Providers established outside the EU are covered if their system’s output is intended to be used in the EU. A US-based SaaS company selling an AI-powered HR tool to a Spanish company falls within the regulation’s scope: the Spanish company as deployer, and the US company as a provider subject to EU rules.
For most Spanish mid-market companies building AI tools for internal use, the practical consequence is that they are simultaneously providers and deployers — a posture that attracts the full weight of compliance obligations.
The 4 risk tiers
The regulation establishes a four-tier risk classification that determines the obligations applicable to any AI system.
| Tier | Examples | Obligations | Applicable to | Applicable from |
|---|---|---|---|---|
| Unacceptable risk (prohibited) | Social scoring; subliminal manipulation; real-time remote biometric identification in publicly accessible spaces (with narrow exceptions); AI exploiting vulnerabilities of specific groups | Absolute prohibition — system cannot be placed on the market or put into service | Providers and deployers | February 2, 2025 |
| High risk | CV screening tools; credit scoring; access to public benefits; biometric identification; critical infrastructure safety; medical device AI; law enforcement AI | Risk management system, data governance, technical documentation, logging, transparency, human oversight, conformity assessment, CE marking, post-market monitoring | Primarily providers; deployers have significant obligations under Art. 26 | August 2, 2026 (Annex III); August 2, 2027 (Annex I) |
| Limited risk (transparency) | Customer-facing chatbots; deepfake-generated content; emotion recognition systems informing commercial decisions | Obligation to inform users they are interacting with AI; label synthetic content as AI-generated | Providers and deployers | August 2, 2026 |
| Minimal risk | Spam filters; product recommendation engines; route optimisation for logistics | No specific regulatory obligations; voluntary codes of conduct encouraged | — | — |
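To make the classification exercise concrete, the sketch below shows how a first-pass triage might be recorded in code during an AI inventory. The tier mapping is an illustrative assumption for internal triage only; the wording of Annex III and legal review determine the actual classification.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"  # banned since February 2, 2025
    HIGH = "high"                # Annex III obligations from August 2, 2026
    LIMITED = "limited"          # transparency duties only
    MINIMAL = "minimal"          # no specific obligations

# Illustrative first-pass mapping; a triage aid, not a legal determination.
FIRST_PASS = {
    "cv_screening": RiskTier.HIGH,          # Annex III: employment
    "credit_scoring": RiskTier.HIGH,        # Annex III: essential services
    "customer_chatbot": RiskTier.LIMITED,   # transparency obligations
    "spam_filter": RiskTier.MINIMAL,
    "social_scoring": RiskTier.PROHIBITED,  # Art. 5 prohibited practice
}

def triage(use_case: str) -> RiskTier:
    """Unknown use cases default to HIGH so they are escalated
    for legal review rather than silently ignored."""
    return FIRST_PASS.get(use_case, RiskTier.HIGH)

print(triage("cv_screening"))  # RiskTier.HIGH
```

Defaulting unknown systems to high risk is a deliberately conservative design choice: under-classification is the expensive error.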
Key dates for Spanish companies
Compliance under the EU AI Act is phased across three years. These are the legally binding milestones:
August 1, 2024 — Entry into force. The regulation is formally in force across the EU; its obligations apply in phases from this date.
February 2, 2025 — Prohibited practices ban. AI systems classified as unacceptable risk are illegal from this date. Any company operating social scoring systems, subliminal manipulation tools, real-time mass facial recognition in public spaces, or AI that exploits cognitive vulnerabilities of specific groups must have ceased operating those systems.
August 2, 2025 — GPAI model obligations. Providers of general-purpose AI models (foundation models such as GPT, Llama, Claude, Mistral, and equivalent systems) must produce technical documentation (Annex XI), adopt a copyright compliance policy per Directive 2019/790, and publish a summary of training content.
August 2, 2026 — Core obligations apply. Risk classification for all AI systems in scope. Full obligations for high-risk systems listed in Annex III (employment, credit, biometrics, critical infrastructure, essential services, law enforcement, migration, justice, democratic processes). Transparency obligations for limited-risk systems. Deployer obligations under Art. 26 enforceable.
August 2, 2027 — Annex I high-risk systems. Extended deadline for AI systems that are safety components of products governed by existing EU product safety legislation (Annex I): medical devices, in vitro diagnostics, civil aviation, motor vehicles, marine equipment, railway, agricultural machinery, toys, pressure equipment, gas appliances, elevators.
Am I a provider, deployer, importer, or distributor?
The compliance obligations that apply to any organisation depend fundamentally on its role in the AI value chain. Getting this classification wrong is one of the most common errors in AI Act gap analyses.
Provider (Art. 3(3)): An organisation that develops or has developed an AI system and places it on the market or puts it into service under its own name. Obligations include the full suite under Chapter III: risk management system, data governance, technical documentation, logging, transparency, human oversight, accuracy requirements, quality management system, conformity assessment, and registration.
Deployer (Art. 3(4)): An organisation that uses an AI system under its own authority. Deployers of high-risk systems have obligations under Art. 26: implement technical and organisational measures per provider instructions; ensure human oversight; report serious incidents; conduct fundamental rights impact assessments (for certain public sector use); do not use the system for purposes other than its intended use.
Importer (Art. 3(6)): An EU-established organisation that places an AI system from a third country on the EU market. Importers must verify that the provider has conducted conformity assessment and that CE marking is affixed before placing the system on the market.
Distributor (Art. 3(7)): An organisation in the supply chain that makes an AI system available on the EU market. Distributors must verify that the importer or provider has complied with applicable obligations before distributing.
Most Spanish companies using AI systems they did not build internally are deployers. Most companies building AI systems for internal use or for clients are providers. Many are both simultaneously. A company that develops an AI recruitment screening tool used across its own HR department is both provider (it developed the system) and deployer (it uses it). Both sets of obligations apply.
High-risk AI systems — the main compliance burden
Annex III of the regulation lists the categories of AI systems that are classified as high-risk. These are not theoretical edge cases. They describe systems commonly deployed by mid-market companies in Spain today.
Biometric identification and categorisation — Systems that identify or categorise individuals based on biometric data. A Spanish retailer deploying facial recognition for customer identification or emotion analysis in physical stores operates a high-risk system.
Critical infrastructure — AI systems used in safety management of critical digital, road, rail, water, gas, heating, or electricity infrastructure. An energy company using AI to predict transformer failures is high-risk.
Education and vocational training — Systems that determine access to educational institutions, assess students, monitor exam behaviour, or support learning. An edtech company using AI to grade assignments or screen university applicants is high-risk.
Employment and HR management — AI systems used for recruitment, CV screening, interview analysis, task allocation, monitoring of performance, or termination decisions. This is one of the most commercially significant categories: virtually every AI-powered HR SaaS product falls within it. A Spanish construction company using AI to score CVs and rank candidates for site management positions operates a high-risk system.
Access to essential private and public services — Creditworthiness assessment, insurance premium calculation, claims handling, public benefit eligibility. A fintech lender using AI for credit scoring, or an insurer using AI to set premiums, is operating high-risk AI under Annex III.
Law enforcement — AI used by police for individual risk assessment, polygraphs, prediction of criminal activity, profiling, and evidence analysis. These systems face the most stringent requirements and are primarily government-facing.
Migration, asylum, and border control — Systems assessing risk of irregular migration, verifying the authenticity of travel documents, or assessing applications for visas and asylum. Relevant for public sector operators and some private immigration consultancies.
Administration of justice — AI used by courts or alternative dispute resolution bodies to research facts and law, or to apply law. Relevant for legaltech companies whose tools are deployed by judicial authorities.
Democratic processes — AI used to influence elections or public opinion through targeted political advertising or manipulation. Relevant for political campaigning organisations and political advertising platforms.
Obligations for high-risk AI systems — what you actually have to do
If any of the above categories applies to your organisation, the following obligations under Chapter III of the regulation must be met by August 2, 2026, whether you are deploying a new system or continuing to operate an existing one.
- Risk management system (Art. 9): A continuous, iterative process — not a one-time document — for identifying known and reasonably foreseeable risks associated with the AI system. Document identified risks, select and apply mitigation measures, and test residual risk levels. The process must be updated throughout the system’s lifecycle.
- Data governance (Art. 10): Training, validation, and test datasets must be subject to documented governance practices: data design choices, collection methods, relevance and representativeness assessment, identification of known gaps or shortfalls, and examination for biases that could affect fundamental rights (a minimal representativeness check is sketched after this list). Special category data (racial or ethnic origin, political opinions, biometric data) requires specific justification for inclusion.
- Technical documentation (Art. 11 + Annex IV): Comprehensive documentation produced before the system is placed on the market. Covers: general description of the system; system design; monitoring, functioning, and control of the system; information on the training methodology, data, and performance; a detailed description of the risk management process; and all changes made over the system’s lifecycle. Documentation must be retained for ten years after the system is placed on the market or put into service.
- Automatic record-keeping / logging (Art. 12): High-risk AI systems must automatically log events relevant to identifying risks during normal operation, including the data input that triggered the result, the output of the system, and the date, time, and duration of each use. Logs must be retained for a minimum of six months (a minimal logging sketch also follows this list).
- Transparency to users (Art. 13): Deployers must be provided with instructions for use that are clear, complete, and accurate. These must include: the identity of the provider; the system’s capabilities and limitations; accuracy levels and performance metrics; foreseeable risks; technical measures for human oversight; and the expected lifetime and maintenance requirements.
- Human oversight (Art. 14): The system must be designed and built to allow human oversight. This includes: the ability for oversight personnel to understand the system’s capabilities and limitations; the ability to override, stop, or interrupt the system; and the ability to identify anomalies, dysfunctions, and unexpected performance. The regulation does not require a human to approve every decision, but the mechanism for intervention must exist and be documented.
- Accuracy, robustness, and cybersecurity (Art. 15): Appropriate levels of accuracy for the intended purpose, documented and communicated to deployers. Robustness against errors, faults, and inconsistencies during normal operation and foreseeable misuse. Cybersecurity measures proportional to the risk, including resilience against attempts by third parties to alter the system’s behaviour.
- Quality management system (Art. 17): A documented quality management system covering: policies, processes, and instructions for ensuring compliance; procedures for technical documentation and record-keeping; design verification and validation; and post-market monitoring.
- Conformity assessment (Art. 43): For most Annex III high-risk systems, providers can conduct a self-assessment against the regulatory requirements. For biometric systems, assessment by a notified third-party body is required where harmonised standards have not been applied in full. The assessment must be documented and updated whenever the system is substantially modified.
- EU declaration of conformity and CE marking: After passing conformity assessment, the provider must draw up an EU declaration of conformity under Art. 47 and affix CE marking. The declaration must be kept for ten years.
- Post-market monitoring (Art. 72): Providers must have a post-market monitoring plan in place. After deployment, collect and review data on system performance, incidents, and misuse. Report serious incidents to the national market surveillance authority (AESIA in Spain).
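Two of these obligations lend themselves to short illustrations. First, the Art. 10 representativeness examination. The sketch below compares group shares in a training set against a reference population; the group labels, reference shares, and the 20% relative-gap threshold are all assumptions for illustration, not values taken from the regulation.

```python
from collections import Counter

def representation_gaps(labels: list[str],
                        reference: dict[str, float],
                        max_rel_gap: float = 0.20) -> dict[str, float]:
    """Flag groups whose share in the training data deviates from a
    reference population by more than max_rel_gap (relative)."""
    if not labels:
        return {}
    counts = Counter(labels)
    total = len(labels)
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        rel_gap = abs(observed - expected) / expected
        if rel_gap > max_rel_gap:
            gaps[group] = round(rel_gap, 2)  # document and remediate per Art. 10
    return gaps

train_labels = ["F"] * 100 + ["M"] * 900
print(representation_gaps(train_labels, {"F": 0.25, "M": 0.75}))
# {'F': 0.6} -> group underrepresented; record the gap and the mitigation
```

Second, the Art. 12 logging duty. The field names and JSON-lines storage below are assumptions; the regulation specifies what to capture (the triggering input, the output, and the date, time, and duration of use), not the format.

```python
import hashlib
import json
import time
import uuid
from datetime import datetime, timezone

def log_event(log_path: str, model_input: str, model_output: str,
              started_at: float) -> None:
    """Append one Art. 12-style event record as a JSON line."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash the input so the log references the triggering data
        # without copying personal data into the log store.
        "input_sha256": hashlib.sha256(model_input.encode()).hexdigest(),
        "output": model_output,
        "duration_ms": round((time.monotonic() - started_at) * 1000, 1),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

start = time.monotonic()
# ... invoke the AI system here ...
log_event("ai_events.jsonl", "candidate CV text", "score=0.82", start)
```

The six-month minimum retention would then live in the log store’s lifecycle policy rather than in application code.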
General-purpose AI (GPAI) — what changed in August 2025
From August 2, 2025, providers of GPAI models are subject to a distinct set of obligations under Chapter V of the regulation.
A GPAI model is a model trained on a large amount of data that displays significant generality and can perform a wide range of distinct tasks. The classification encompasses foundation models including GPT-4, Claude, Gemini, Llama, Mistral, and comparable systems.
All GPAI providers must:
- Maintain technical documentation per Annex XI covering model training methodology, architecture, capabilities, and limitations
- Adopt and document a copyright compliance policy in line with Directive 2019/790 on copyright
- Publish a sufficiently detailed summary of training content to allow rights holders to exercise their rights
GPAI models posing systemic risk — defined as models trained using more than 10^25 floating-point operations (FLOPs) of compute, or models designated by the Commission — face additional obligations: adversarial testing (red teaming), serious incident reporting, cybersecurity measures, and energy efficiency reporting.
What this means for Spanish companies that deploy GPAI but do not build it:
Most Spanish mid-market companies are GPAI deployers, not providers. If you use the OpenAI API, Anthropic’s Claude API, or Google Gemini in your products, the GPAI provider obligations fall on those companies, not on you. However:
- If you fine-tune a GPAI model on your data, you take on provider obligations for the resulting modified model.
- If you deploy a GPAI model in a high-risk context (HR screening, credit scoring, etc.), you remain responsible for all high-risk system obligations for the complete system under Art. 25 — your downstream system is subject to the full conformity assessment process.
- The GPAI provider’s compliance does not substitute for or reduce your own compliance obligations as a deployer.
A voluntary Code of Practice for GPAI providers was initiated by the Commission in 2025. Adherence to it is not legally required, but is treated as a compliance signal by national authorities.
AESIA — the Spanish enforcement authority
AESIA (Agencia Española de Supervisión de Inteligencia Artificial) was created by Real Decreto 729/2023, making Spain the first EU member state to establish a national AI supervisory authority — before the AI Act itself entered into force. The agency is headquartered in A Coruña, Galicia, and operates under the Ministry of Digital Transformation and Civil Service.
AESIA’s functions include:
- Market surveillance: investigating complaints, conducting ex officio inspections, requiring access to documentation and AI systems, and auditing compliance with the regulation.
- Sanctioning: issuing administrative sanctions up to the regulatory maximum fines for prohibited practices, high-risk violations, and the supply of incorrect, incomplete, or misleading information to authorities.
- Regulatory sandboxes: operating the national AI regulatory testing environment established under Art. 57, allowing companies to develop and test AI systems under supervised conditions before market placement.
- EU coordination: participating in the European AI Board (Art. 65), coordinating enforcement with other national authorities, and contributing to the development of harmonised standards.
AESIA’s sectoral reach is complemented by existing regulators: the Banco de España and CNMV retain jurisdiction over AI in financial services; the AEMPS (Agencia Española de Medicamentos y Productos Sanitarios) for medical AI; the AEAT for tax-related AI systems.
Interaction with GDPR, DSA, DMA, and NIS2
The EU AI Act layers on top of existing EU frameworks rather than replacing them. The following table summarises the primary intersections relevant to Spanish mid-market companies:
| Framework | Primary Scope | Interaction with AI Act |
|---|---|---|
| GDPR (Regulation 2016/679) | Processing of personal data | AI systems processing personal data must comply with both GDPR (lawful basis, data minimisation, DPIA for high-risk processing) and the AI Act (data governance, logging, documentation). Automated decision-making under Art. 22 GDPR overlaps with AI Act transparency and human oversight obligations. |
| DSA (Regulation 2022/2065) | Online intermediaries and platforms | Recommender systems of very large online platforms (VLOP) face obligations under both DSA (algorithmic transparency, risk assessments) and AI Act (if classified as high-risk). |
| DMA (Regulation 2022/1925) | Designated digital gatekeepers | Gatekeepers using AI in self-preferencing, ranking, or interoperability must align obligations under both DMA and AI Act. Affects few Spanish companies directly but relevant for pan-EU platforms. |
| NIS2 (Directive 2022/2555) | Cybersecurity of essential and important entities | AI systems used in critical infrastructure (energy, transport, health, finance) must meet NIS2 cybersecurity requirements. The AI Act’s Art. 15 cybersecurity obligations for high-risk systems run in parallel. Both frameworks must be satisfied. |
| Sectoral legislation (MiCA, DORA, MDR) | Financial services, medical devices | AI Act forms an additional layer alongside sector-specific legislation. A credit scoring AI must comply with the AI Act, GDPR, and the applicable credit and consumer finance directives simultaneously. |
Practical compliance roadmap for a Spanish mid-market company (6-12 months)
The following roadmap assumes a company with two to five AI systems in scope, no existing AI compliance infrastructure, and a target of operational readiness by August 2, 2026.
Months 1-2: AI inventory and risk classification
Identify every AI system in the organisation, including third-party SaaS products and API integrations. Document for each: purpose, data inputs, outputs, who uses it, and how outputs affect people. Classify each system against the four risk tiers. Prioritise systems that interact with employment, financial, or access-to-services decisions. Estimated cost: EUR 5,000–15,000 if conducted with external legal/technical support.
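A structured record per system keeps the inventory auditable from day one. The dataclass below is an illustrative schema only; the field names are assumptions, not a mandated format.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One inventory row per AI system (illustrative schema only)."""
    name: str
    purpose: str
    vendor: str                   # "internal" for in-house builds
    role: str                     # "provider", "deployer", or "both"
    data_inputs: list[str] = field(default_factory=list)
    affected_persons: str = ""    # who the outputs affect, and how
    provisional_tier: str = "unclassified"

inventory = [
    AISystemRecord(
        name="cv-ranker",
        purpose="Rank applicants for site management roles",
        vendor="internal",
        role="both",              # built in-house and used in-house
        data_inputs=["CV text", "application form"],
        affected_persons="job applicants (employment decisions)",
        provisional_tier="high",  # Annex III: employment
    ),
]
print(len(inventory), "system(s) inventoried")
```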
Months 2-3: Legal and gap analysis
For each high-risk system identified, assess the gap between current state and regulatory requirements across all obligations (Art. 9–15, 17, 43, 47, 72). Produce a gap report with a prioritised remediation list. Identify whether the organisation acts as provider, deployer, or both for each system. Estimated cost: EUR 8,000–20,000.
Months 3-7: Controls implementation
Implement the controls identified in the gap analysis. The most resource-intensive are:
- Drafting and operationalising the risk management process (Art. 9): appoint an owner, define review frequency, build the risk register.
- Producing technical documentation (Annex IV): engage the development team to document the system comprehensively.
- Implementing logging and retention (Art. 12): validate that the technical infrastructure generates and stores the required logs.
- Drafting instructions for use (Art. 13): produce deployer-facing documentation.
- Building or formalising human oversight mechanisms (Art. 14): define escalation paths and override procedures (an override-gate sketch follows the cost estimate below).
Estimated cost: EUR 12,000–35,000 depending on number of systems and technical debt.
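Picking up the Art. 14 item above, here is a minimal sketch of an override gate: the system proposes, a human decides, and any deviation is preserved for the audit trail. All names, fields, and values are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    ai_score: float
    ai_recommendation: str   # e.g. "advance" or "reject"
    final_outcome: str = ""  # set only by a human reviewer
    reviewer: str = ""
    override: bool = False

def human_review(d: Decision, reviewer: str, outcome: str) -> Decision:
    """Record the human decision; a deviation from the AI
    recommendation is flagged as an override, not discarded."""
    d.reviewer = reviewer
    d.final_outcome = outcome
    d.override = (outcome != d.ai_recommendation)
    return d

d = Decision("cand-042", ai_score=0.31, ai_recommendation="reject")
d = human_review(d, reviewer="hr.lead@example.com", outcome="advance")
assert d.override  # the human deviation is part of the audit record
```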
Months 5-8: Appointment of a responsible person and governance structure
Designate a responsible person for AI Act compliance. This need not be a dedicated full-time role in an SME — the CTO or a senior compliance officer can carry it — but the responsibility must be explicit, documented, and resourced. Establish a lightweight AI governance committee with representation from legal, technology, and operations.
Months 6-9: Staff training
Train relevant staff on AI Act requirements. Priority audiences: development teams building AI systems; HR, credit, and customer service teams that deploy high-risk AI; legal and compliance. Training should cover: how to classify AI systems, what documentation is required, how to report incidents, and what human oversight means in practice.
Months 8-10: Conformity assessment and documentation
Conduct conformity assessment for each high-risk system. For Annex III systems, this is typically self-assessment by the provider. Prepare the EU declaration of conformity and affix CE marking. Finalise all technical documentation and ensure it is stored with appropriate access controls and retention periods.
Months 10-12: Audit readiness and post-market monitoring
Conduct an internal mock audit against the regulatory requirements. Establish the post-market monitoring mechanism: define KPIs, set review frequency, assign responsibility for incident reporting. Register high-risk AI systems in the EU database where required.
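As one illustration of what the monitoring mechanism can look like in code, the sketch below compares a rolling accuracy KPI against the documented baseline. The threshold, window size, and escalation hook are assumptions, not regulatory values.

```python
def within_tolerance(recent_outcomes: list[bool],
                     baseline_accuracy: float,
                     tolerance: float = 0.05) -> bool:
    """Return True while the rolling KPI stays within tolerance of the
    documented baseline; False should trigger the review workflow."""
    if not recent_outcomes:
        return True
    accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return accuracy >= baseline_accuracy - tolerance

window = [True] * 78 + [False] * 22  # 78% correct in the review window
if not within_tolerance(window, baseline_accuracy=0.90):
    print("KPI breach: escalate to the AI governance committee")
```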
Ongoing annual budget: EUR 10,000–25,000 for documentation updates, monitoring, incident reporting, and regulatory tracking.
Common misconceptions
“Our AI is minimal risk — we have no compliance obligations.” Minimal risk systems have no specific obligations under the AI Act. However, classification is not self-evident. Many organisations misclassify systems. A chatbot that determines the outcome of a complaint or affects access to services is not minimal risk. Perform the classification exercise rigorously before assuming low-risk status.
“ChatGPT Enterprise or Copilot covers our compliance.” OpenAI’s and Microsoft’s enterprise agreements cover their GDPR data processing obligations as processors. They do not transfer AI Act provider obligations related to your downstream systems. If you use GPT-4 to power an internal CV screening tool, you are the provider of that screening tool, and all high-risk obligations apply to you.
“GDPR compliance is sufficient for AI Act compliance.” GDPR and the AI Act are complementary but distinct. GDPR addresses data protection. The AI Act addresses the development, marketing, and use of AI systems. Risk management systems, conformity assessments, CE marking, and post-market monitoring have no equivalent in GDPR. Compliance with one does not imply compliance with the other.
“We’re an SME — the fines won’t apply to us at scale.” The penalty maximums are the higher of an absolute figure or a percentage of global turnover. For an SME with EUR 5M turnover, a 3% fine equals EUR 150,000 — a significant liability. More importantly, the disruption to business operations and the reputational damage from non-compliance are often more material than the fine itself.
“We’re not in a regulated sector, so we’re lower risk.” The AI Act’s risk classification is use-case based, not sector-based. An unregulated logistics company using AI to allocate deliveries to drivers could be operating a system that affects employment conditions — a high-risk use case under Annex III — even if logistics is not a regulated industry.
“We’ll wait until closer to the deadline.” Technical documentation for complex systems cannot be produced retroactively in a credible way. Risk management systems must demonstrate a continuous process, not a point-in-time snapshot. Building the governance infrastructure takes months, not weeks. Companies starting in Q4 2025 will face a difficult race to August 2026.
“Only companies that build AI are affected.” Deployers of high-risk AI systems have substantial obligations under Art. 26. Companies that purchase or licence AI systems for use in employment, credit, or essential services cannot discharge their compliance obligations by pointing to the provider. Deployer obligations include fundamental rights impact assessments, ensuring human oversight, reporting serious incidents, and restricting use to the system’s intended purpose.
“Our AI is just an internal tool, so it’s not ‘placed on the market’.” Art. 3(4) defines ‘deployer’ to cover any use of an AI system under the deployer’s own authority, except for personal non-professional activity. Internal use does not remove the application of the regulation to the deployer. It does mean the organisation does not need to satisfy ‘placing on the market’ requirements, but all deployer obligations under Art. 26 still apply.
How abemon can help
Navigating the EU AI Act requires a combination of legal interpretation, technical implementation, and process design. abemon’s consulting practice offers structured AI governance engagements including AI system inventories, risk classification, gap analyses, and documentation programmes. Our AI and machine learning team handles the technical side: implementing logging architectures, documentation pipelines, and human oversight controls in production systems.
For companies that have not yet started their AI Act compliance programme, a structured readiness assessment is the practical first step. Contact us to discuss your organisation’s exposure and define a prioritised roadmap.
For broader context on the AI regulatory environment in Spain, see our 2025 quantitative report on the state of AI in Spain and our enterprise AI governance framework guide. For production deployment considerations, see AI agents in production: lessons learned.
Frequently asked questions
- When does the EU AI Act take full effect?
- Regulation (EU) 2024/1689 entered into force on August 1, 2024. Prohibited practices have been banned since February 2, 2025. GPAI model obligations apply from August 2, 2025. The bulk of provisions — risk classification, high-risk system requirements, conformity assessments — become enforceable on August 2, 2026. High-risk systems under Annex I (covering product safety legislation such as medical devices and machinery) face an extended deadline of August 2, 2027.
- What are the penalties for non-compliance?
- Penalties are tiered by severity. Using prohibited AI practices (unacceptable risk) can result in fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. Non-compliance with high-risk system obligations carries fines of up to EUR 15 million or 3% of turnover. Supplying incorrect, incomplete, or misleading information to authorities is penalised by up to EUR 7.5 million or 1%. SME-specific proportionality provisions cap fines at the lower of the two amounts, but even that can be material for a 50-person company.
- Who enforces the EU AI Act in Spain?
- The competent national authority is AESIA (Agencia Española de Supervisión de Inteligencia Artificial), created by Real Decreto 729/2023 and headquartered in A Coruña. Spain was the first EU member state to establish a dedicated AI supervisory body. AESIA holds market surveillance powers, investigates complaints, issues administrative sanctions, and participates in the European AI Board. Sectoral regulators (Banco de España, CNMV, AEMPS) retain jurisdiction over AI in their respective industries.
- Is my AI system high-risk?
- High-risk AI systems are defined in Annex III of Regulation (EU) 2024/1689. They include systems used in biometric identification, critical infrastructure safety, education and vocational training, employment and HR management (including CV screening), access to essential private and public services (including credit scoring), law enforcement, migration and border control, administration of justice, and democratic processes. If your company uses AI to screen job applicants, score credit applications, allocate insurance premiums, or prioritise access to public benefits, the system is almost certainly high-risk under Annex III.
- Does GDPR compliance cover EU AI Act compliance?
- No. GDPR and the EU AI Act are complementary but distinct frameworks with different scopes. GDPR governs the lawful processing of personal data. The AI Act adds a layer of requirements specifically for AI systems: risk management systems (Art. 9), technical documentation (Art. 11), automatic logging (Art. 12), conformity assessments (Art. 43), CE marking, and post-market monitoring (Art. 72). A company can be fully GDPR-compliant and still violate the AI Act if it lacks the mandatory technical documentation or has no risk management process.
- What is a GPAI system and does it affect me if I don't build one?
- GPAI (General Purpose AI) models are foundation models trained for a wide range of tasks — GPT-4, Llama, Mistral, Claude, and similar systems. From August 2, 2025, providers of GPAI models must produce technical documentation, maintain a copyright compliance policy, and publish a summary of training data. Most Spanish companies are GPAI deployers, not providers: if you consume GPAI APIs or build applications on top of them, the GPAI provider obligations fall on the model developer. However, if you deploy those models in high-risk contexts, you remain responsible for the compliance of your complete system under Art. 25 and the high-risk obligations.
- What technical documentation must I maintain?
- For high-risk AI systems, Art. 11 and Annex IV require documentation covering: general description of the system and its intended purpose; design logic, data requirements, and development methodology; information on training, validation and test data; performance metrics, known limitations, and operating conditions; risk mitigation measures; post-market monitoring plan; and a copy of the EU declaration of conformity. This documentation must be kept up to date throughout the system's lifecycle and made available to AESIA on request. Logs under Art. 12 must be retained for at least six months, though sectoral rules may require longer.
- What does AI Act compliance cost approximately?
- Costs vary significantly by company size and the number of AI systems deployed. A mid-market company with two to five high-risk AI systems should budget between EUR 30,000 and EUR 80,000 for an initial compliance programme: AI inventory and classification (EUR 5,000–15,000), gap analysis and legal assessment (EUR 8,000–20,000), documentation and technical controls implementation (EUR 12,000–35,000), staff training and governance setup (EUR 5,000–10,000). Ongoing compliance (monitoring, documentation updates, audit readiness) runs approximately EUR 10,000–25,000 per year. Companies with no existing compliance infrastructure, or with AI in highly regulated sectors such as finance or healthcare, should expect the upper end of these ranges.
Sources
- EUR-Lex — Regulation (EU) 2024/1689 — EU Artificial Intelligence Act (full text)
- BOE — Real Decreto 729/2023 — Creación de AESIA (Agencia Española de Supervisión de IA)
- AESIA — Agencia Española de Supervisión de Inteligencia Artificial
- European Commission — European AI Office — EU AI Act implementation and guidelines
- EUR-Lex — Regulation (EU) 2016/679 — GDPR (General Data Protection Regulation)
- European Commission — European AI Board — Guidance and technical standards under the AI Act
- ENISA — Artificial Intelligence Cybersecurity Challenges
- Banco de España — Supervisión de IA en el sector financiero
