Directly effective supra-national legislation, applying in exactly the same way in each EU Member State, is the dream: an EU Regulation should (in theory) provide the same protections in the same way at the same time to all EU citizens. As ever, theory and reality rarely align, and the EU AI Act is proving a case in point. While the core text, Regulation (EU) 2024/1689, is rolling out on the same timetable in all Member States, there are many steps that Member States or their designated regulatory bodies must undertake along the way.

Against that backdrop, Member States are racing to build oversight architecture, publish guidance, and enforce new obligations. Some are ahead of the curve, while others are at risk of missing stipulated deadlines. In this snapshot, members of DLA Piper’s global AI practice group provide an update on the latest status in Germany, France, Spain, Italy, the Netherlands, Belgium, and Ireland: what’s done, what’s delayed, what’s coming, and what it all means more broadly.

Germany 

Germany’s national implementation of the EU AI Act remains underway. The Federal Ministry for Digital Transformation and Government Modernisation’s (Bundesministerium für Digitales und Staatsmodernisierung) current draft implementation act is dated 11 September 2025 (available here). Following the latest consultation with the Federal States (Länder) and associations, the federal government will introduce the draft act into the legislative process. The ministry has emphasised that it intends to move forward with the legislative process quickly, given that Germany has missed the 2 August 2025 implementation deadline.  

The centrepiece of the draft implementation act, the “KI-Marktüberwachungs- und Innovationsförderungsgesetz” (KI-MIG), designates the Federal Network Agency (Bundesnetzagentur, BNetzA) as Germany’s main market surveillance authority. However, sector-specific authorities would retain competence in their respective fields; in particular, the Federal Financial Supervisory Authority (Bundesanstalt für Finanzdienstleistungsaufsicht, BaFin) is intended to serve as the competent market surveillance authority for high-risk AI systems directly linked to regulated financial activities.

Within the BNetzA, a “Koordinierungs- und Kompetenzzentrum” (KoKIVO) is to be established to support other competent authorities with their tasks under the EU AI Act. The BNetzA is also intended to establish the AI regulatory sandbox required by the EU AI Act. Overall, based on the current draft of the implementation act, it is fair to say that the BNetzA will play a key role in enforcing the EU AI Act and ensuring its success in Germany.

While the BNetzA is not yet officially competent, it has already launched the “KI-Service Desk” (see here) to support organisations in complying with the EU AI Act.  

Although the draft implementation act explicitly reflects the decision not to designate the Federal Commissioner for Data Protection and Freedom of Information (Bundesbeauftragte für den Datenschutz und die Informationsfreiheit, BfDI) as the market surveillance authority, it is worth noting that the BfDI (along with other German data protection authorities) has published multiple guidance documents on the interplay between the EU AI Act and the GDPR.

What happens next? The outcome of the legislative process remains to be seen, but no changes are expected regarding the BNetzA’s designation as the main market surveillance authority. We therefore recommend regularly checking the BNetzA’s KI-Service Desk, as further guidance is expected.

France 

France is not planning to substantively modify its national laws to adapt them to the EU AI Act. It has, however, drafted a national law designating enforcement and oversight powers under the EU AI Act. The French model is decentralised, with multiple market surveillance authorities split by sector and by type of AI system addressed in the EU AI Act. For instance, the prohibition on AI systems for emotion recognition in the workplace and education institutions will be enforced by the French data protection supervisory authority (the CNIL), while high-risk AI systems related to medical devices will be overseen by the ANSM, France’s National Agency for the Safety of Medicines and Health Products.

The framework is complex but looks to uphold the intent of the legislation, which is similar in nature to product-based regulation. To help companies navigate this complexity, the DGE has published a diagram (see here). The DGCCRF (Direction générale de la concurrence, de la consommation et de la répression des fraudes), notably in charge of fair market practices, consumer protection, and product safety across France, will be responsible for coordinating the market surveillance authorities and will, in this capacity, serve as the single point of contact pursuant to Article 70(2) of the EU AI Act.

In parallel, the DGE (a French government agency under the Ministry of the Economy) will continue to support the strategy around the implementation of this text and represent France within the AI Office. Finally, an advanced pooling of expertise and technical tools in AI and cybersecurity is being implemented by the Digital Regulation Expertise Center (PEReN) and the National Cybersecurity Agency (ANSSI) to support authorities in their missions to monitor the compliance of AI systems. 

While the legislative text is in progress, initiatives like CNIL’s AI sandbox for public services are already producing concrete guidance, recommendations, and examples.  

At present, the DGE has adopted a strategy focused on guiding companies towards compliance by prioritising educational initiatives over punitive measures. In this context, the DGE is actively engaged with French professional associations and working groups to address and respond to the concerns raised by companies.

Spain 

Spain is among the frontrunners in moving from plan to implementation. It has adopted Royal Decree 729/2023, approving the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) and appointing AESIA as the national competent authority. Spain is also moving forward with its national regulatory sandbox initiative, providing legal guidance to twelve AI providers operating high-risk AI systems.

In March 2025, the Spanish Government adopted a draft national AI law designed to align national regulation with the EU AI Act. The legislation aims to foster sustainable technological development by promoting innovation while addressing the risks associated with AI deployment. Although the draft law is still pending final approval, it underscores the government’s commitment to addressing the growing relevance of AI. The draft aligns with many of the EU AI Act’s provisions (prohibitions, transparency, high-risk AI).

In parallel with the progressive implementation of the EU AI Act, the Spanish Data Protection Authority (AEPD) continues to play a central role, especially in content labelling and algorithmic accountability, and in respect of AI systems that process biometric personal data. It has also publicly affirmed its intent to sanction any improper processing of data by AI systems. Additionally, other regulatory bodies, such as the Spanish National Commission on Markets and Competition (CNMC) and the Central Electoral Commission, are set to support AESIA in its supervisory functions, adopting a decentralised approach to enforcement.

Spain is behind on the 2 August 2025 governance deadline, but its legislative momentum and clarity on certain obligations give companies relatively good predictability. Further clarity is anticipated following the formal approval of the national AI law and the issuance of accompanying guidelines by AESIA. The sentiment in Spain is positive: industry appears ready to adapt, there is public support for labelling and responsible AI, and the current challenges relate more to delays in implementation than to resistance to the substance.

Italy 

Italy has delivered a key milestone by enacting a full national AI law (effective 10 October 2025), ahead of many other territories. The law covers multiple fronts:

  • specific rules and safeguards for the use of AI in healthcare;
  • parental consent for AI usage by minors (under 14);
  • provisions for scientific research;
  • the use of AI in the workplace;
  • transparency and disclosure obligations regarding AI usage in the context of learned professions;
  • copyright protection for works created with the assistance of AI systems, provided they result from the human author’s own intellectual contribution;
  • the administration of justice and public administration;
  • criminal penalties for harmful AI deepfakes and related misuse; and
  • the establishment of a sizable public investment fund (roughly EUR 1 billion) for AI, telecoms, and cybersecurity.

Oversight responsibilities have been assigned: the Agency for Digital Italy (AgID) and the National Cybersecurity Agency (ACN) take central roles. The former will be tasked with promoting innovation and managing AI conformity procedures, while the latter will oversee security, enforcement activities, and sanctions. Sectoral regulators (banking, insurance, etc.) will retain their relevant powers.

Italy’s law is now the domestic legal framework, meaning businesses have clearer rules than in many countries still in the drafting phase. The legislation is designed not to introduce new obligations beyond those established at the EU level and is formally aligned with the EU AI Act, ensuring regulatory consistency across jurisdictions. However, the law still lacks full harmonisation with the European framework and will be supplemented by further decrees in the coming months. In particular, the Italian government is required to issue, within twelve months of the law’s entry into force, one or more legislative decrees establishing a comprehensive framework for the use of data, algorithms, and mathematical methods in AI system training, aligning national legislation with the EU AI Act, and clarifying the rules regarding the development and unlawful use of AI systems.

It’s significant that Italy has leapfrogged many other countries, with many in the tech and business community seeing it as setting a benchmark. Nonetheless, despite the introduction of national rules, significant areas of the regulatory landscape remain uncharted. Companies will need to remain vigilant, closely monitor market developments and regulatory updates, and continuously assess their adoption and governance practices. Navigating this complex and evolving framework will be essential to ensure compliance, manage risks, and make informed strategic decisions.

Netherlands 

Like a number of other European jurisdictions, the Netherlands is expected to apply a sectoral approach to the supervision of AI. The role of coordinating surveillance authority is expected to be shared between the Dutch Authority for Digital Infrastructure (RDI) and the Dutch data protection authority (AP). These market surveillance authorities have not yet been formally appointed.

In addition, the AP has been particularly active in preparing materials to support local application of the EU AI Act (for example, the consultations regarding prohibited AI practices in the spring and its periodic report on AI & Algorithms in the Netherlands). The AP continues to provide materials covering overlaps between AI and data protection requirements. In July, the AP published materials to support organisations with ensuring meaningful human intervention in algorithmic decision making.  

The Dutch government has also published a guide directed towards businesses that sets out a step-by-step approach to identifying applicable obligations under the EU AI Act.  

A proposal for the first Dutch regulatory sandbox has been published, with a view to launching a sandbox in the Netherlands before the statutory deadline. The proposal was developed in collaboration with various Dutch supervisory authorities and ministries, under the coordination of the Dutch Ministry of Economic Affairs, the RDI and the AP. It covers the desired principles, processes and roles for the design and running of the local sandbox.

Belgium 

Belgium lags somewhat in formal institutional setup. While mapping work is underway (identifying which regulators might take which roles), there is no publicly confirmed single point of contact or final list of competent authorities. However, it is noted in the ‘Declaration of the Government’ (published 31 January 2025) that the Belgian Institute for Postal Services and Telecommunications (BIPT) will be appointed as the main regulator for the AI Act. 

The Belgian Data Protection Authority (APD/GBA) is active in issuing thematic materials and raising awareness. The FPS Economy is also doing explanatory work. Belgium has published the list of authorities in charge of the protection of fundamental rights under Article 77 of the AI Act (see here for details). Fundamental-rights oversight seems better specified than market surveillance or notifying authority roles so far. 

Belgium has missed the 2 August 2025 governance deadline, which means many obligations have arrived at the EU level without Belgium having complete institutional machinery in place. 

The sentiment in Belgium shows strong interest and high awareness, but government preparation has been slow and there is frustration among businesses about lack of clarity. Belgium’s strong GDPR tradition helps, but the practicalities (who to contact, liability, conformity pathways) remain opaque. 

Ireland 

Ireland appears to be among the most advanced in its preparations for implementation of the EU AI Act. Using a distributed regulatory model, the State (under the remit of the Department of Enterprise, Trade and Employment) has officially designated 15 competent authorities spanning the financial, regulatory, consumer, health, utility, and telecom sectors. It has also designated nine fundamental rights authorities pursuant to Article 77 of the EU AI Act. Legislation establishing a National AI Office (to be known as Oifig Náisiúnta na hIntleachta Saorga) is making its way through the legislative process, with the aim of ensuring this central coordinating body is operational by 2 August 2026.

Since the publication of the list of designated competent authorities, businesses know which regulators will oversee which obligations. This enables organisations to begin compliance audits, risk assessments, the formalisation of AI strategy, and organisational changes, with more informed expectations of the AI regulatory landscape in their sector.

What remains includes: establishing the National AI Office; standing up the AI sandbox; producing sector-specific guidance to operationalise obligations around high-risk AI, transparency, and the like; and monitoring enforcement consistency across authorities.

The overall sentiment across sectors and in the public and private sphere is generally positive. Ireland’s clarity is praised by legal and tech commentators, and its approach is seen as balanced: protective, but not excessively onerous. There is also the sense that Ireland may serve as a model for states that are behind. Given the presence in Ireland of the European headquarters of numerous tech multinationals, as is the case in the area of data protection regulation and enforcement, Ireland’s competent authorities are expected to be active and pragmatic when it comes to AI supervision and enforcement.    

Elsewhere in the EU 

Outside these seven, several Member States are showing noteworthy progress, while others are more clearly suffering delays. 

  • Luxembourg has moved ahead of many similar-sized states: the national data protection authority (the CNPD) has been named in a bill that would designate it as the default market surveillance authority, and plans for sectoral authorities are relatively well developed. 
  • Denmark adopted its national implementation law in May 2025, positioning itself as one of the early movers, especially around oversight and enforcement. 
  • Poland has draft legislation proposing a Commission for the Development and Safety of AI (the KRiBSI) as a market surveillance authority and the Minister of Digital Affairs as the notifying authority. 
  • Austria is publishing timelines, guidance, and service-desk style supports; many smaller EU states show similar patterns: strong awareness, but relatively slow formal designations. 
  • Hungary, Czechia, Romania and Bulgaria are in various stages of drafting or consultation, often held up by political or administrative transitions; legal texts often exist in draft (such as Czechia’s AI Act implementing law draft) but are not yet fully aligned or notified. 
  • The European Commission is evidently closely watching states that missed the August 2025 deadlines; infringement proceedings are widely expected if Member States do not designate the required authorities and communicate them to the Commission. Also, harmonised standards (through CEN/CENELEC) are becoming a key dependency for conformity and high-risk AI deployment. 

Comparison table  

| Country | National AI law enacted | Competent authorities designated / communicated to the European Commission (deadline 2 August 2025) | Single Point of Contact (SPOC) / central AI Office (deadline 2 August 2025) | National AI sandbox planned / in operation (deadline 2 August 2026) | Sentiment / key risks |
| --- | --- | --- | --- | --- | --- |
| Germany | No (draft in progress) | Not yet (market surveillance / notifying authority pending) | The BNetzA is expected to become the SPOC | Planned | Striving for a balance between regulation and innovation |
| France | No (no national AI law planned) | No (the bill must still pass through the legislative process) | The DGCCRF (pending the vote of the law) | Planned | Strong will to accompany companies towards compliance |
| Spain | No (draft approved, not fully enacted) | Yes, AESIA as the lead organisation plus 21 supporting authorities designated | AESIA | In operation | High policy clarity; approval of national law pending |
| Italy | Yes (enacted in September 2025 and effective 10 October 2025) | Authorities designated; formal communication pending | AgID designated as notifying authority; ACN designated as market surveillance authority and SPOC | Planned | Seen as a benchmark; watch enforcement in practice |
| Netherlands | No | Some authorities known; formal appointment pending | AP and RDI as coordinators; SPOC to be confirmed | Planned | Good clarity; business-friendly; awaiting final appointments |
| Belgium | No (no draft bill published yet) | Fundamental rights authorities designated; market surveillance authorities pending (BIPT expected to lead) | SPOC not confirmed | Planned | High awareness, but slowness in government action and business uncertainty |
| Ireland | No (draft bill making its way through the legislative process) | Yes, 15 authorities designated | National AI Office to be established by 2 August 2026 | Planned | Positive; presence of EU headquarters of many large technology companies means a high level of engagement and scrutiny expected from regulators |

Takeaways and what’s next 

Member States that have met or are close to meeting the governance phase deadlines (Ireland, Italy) are in stronger compliance positions and provide better certainty for businesses; others risk regulatory mismatch. 

National laws (like Italy’s) accelerate the alignment with EU rules, reduce ambiguity, and may help avoid gaps, but only once supporting decrees, authorities and guidance are in place. 

The competence map (designating who does what in each country) is the biggest immediate piece. Businesses are especially watching which regulator handles high-risk AI, who acts on transparency and synthetic content, and how overlap with existing laws will be managed. 

DLA Piper’s AI Laws of the World guide provides a Q3 2025 snapshot of AI laws and proposed regulations across more than 40 countries (including all 27 EU Member States), giving an overview of key legislative developments, regulations, proposed bills, and guidelines issued by governmental bodies. The guide demonstrates that, while significant variation in regulatory approaches and attitudes remains across regions, some common thematic concerns are shared.

If you have any questions on the EU AI Act, DLA Piper’s AI Laws of the World, or AI law and regulation in general, please do get in touch with the authors or your regular DLA Piper contact. 

Authors:  

  • General: Gareth Stokes, Liam Blackford 
  • France: Jeanne Dauzier, Maria Aouad 
  • Ireland: Claire O’Brien  
  • Italy: Giulio Coraggio, Giacomo Lusardi 
  • Germany: Jan Geert Meents, Lucas Blum 
  • Spain: Elisa Lorenzo Sánchez, Paula Gonzalez de Castejón Llano-Ponte 
  • Netherlands: Floris de Wit, Francesca Pole 
  • Belgium: Kristof De Vulder; Muhammed Demircan