2025.05.21
EU AI Act Enforcement Begins: Prohibited AI Practices Now Binding
- Introduction
The European Union Artificial Intelligence Act (AI Act) entered into force on August 1, 2024, marking a significant milestone in global AI regulation. As the world’s first comprehensive legal framework for artificial intelligence, the Act introduces a phased implementation approach to ensure AI safety, transparency, and accountability.
As of February 2, 2025, the first phase of the AI Act's implementation took effect: the provisions prohibiting AI practices classified as posing an unacceptable risk are now legally binding. These restrictions apply to companies within the EU as well as to businesses outside the EU whose AI systems are placed on the EU market or affect individuals in the EU.
- Scope of Application: Who Must Comply?
The AI Act applies to a wide range of AI providers and users, both within and outside the EU. The regulation explicitly covers:
- AI providers placing AI systems on the EU market, regardless of whether they are based in the EU or a third country.
- Companies deploying AI systems within the EU, meaning that businesses using AI solutions in Europe must ensure compliance.
- Non-EU companies whose AI system’s output is used in the EU, even if the system is operated from outside the EU.
- Importers and distributors of AI systems in the EU, who are responsible for ensuring compliance with the Act.
- Manufacturers integrating AI into their products and making them available in the EU market.
- Authorized representatives of non-EU AI providers, ensuring that foreign companies with EU-based agents adhere to the Act.
- Affected persons located in the EU, ensuring that AI applications impacting European citizens are subject to the Act.
This extraterritorial application follows the approach of the General Data Protection Regulation (GDPR), reinforcing the EU's role in setting global regulatory standards for AI.
- Prohibited AI Practices Now Legally Binding
The AI Act strictly bans certain AI applications that are deemed to pose an unacceptable risk to fundamental rights and public safety. As of February 2, 2025, the following AI systems are prohibited:
- Subliminal, manipulative, or deceptive techniques – AI systems that materially distort human behavior in a way that can cause significant harm.
- Exploitation of vulnerabilities – AI systems designed to take advantage of individuals’ age, disability, or socioeconomic status.
- Social scoring – AI systems that classify or rank individuals based on personal behavior or characteristics, leading to detrimental or unjustified treatment.
- Assessing the risk of criminal offences – AI systems that assess the risk of an individual committing a criminal offence, based solely on profiling or assessing their personality traits and characteristics.
- Creation of facial recognition databases – AI systems that create facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.
- Emotion inference – AI systems that infer emotions in the workplace and educational institutions, with only limited exceptions for medical or safety reasons.
- Inference of sensitive attributes based on biometric categorization – AI systems that infer race, political beliefs, trade union membership, religious or philosophical beliefs, sex life or sexual orientation based on biometric categorization.
- Real-time remote biometric identification in public spaces – The use of AI-powered facial recognition and biometric tracking, with only limited exceptions for law enforcement under judicial authorization.
These bans apply not only to companies operating in the EU but also to non-EU companies that place AI systems on the EU market or whose AI models affect EU citizens. Any violation of these prohibitions can result in significant financial penalties, with fines reaching up to €35 million or 7% of global annual turnover, whichever is higher.
- Upcoming Enforcement Milestones
The AI Act’s implementation follows a phased approach, with additional compliance obligations coming into force over the next two years. The key upcoming deadlines include:
- May 2025 – The EU AI Office will be formally established to oversee compliance and enforcement.
- August 2025 – The EU will finalize regulations for General-Purpose AI (GPAI) models, setting transparency and accountability requirements for large-scale AI systems such as foundation models. However, providers whose GPAI models are already on the market at that time will have until August 2027 to comply.
- August 2026 – Compliance requirements for high-risk AI applications will become mandatory, including risk assessments, transparency obligations, and conformity assessments before deployment.
- August 2027 – Full enforcement of all AI Act provisions, including sector-specific obligations and final regulatory frameworks.
- Steps for Businesses to Ensure Compliance
Given the AI Act’s broad extraterritorial reach, companies—both within and outside the EU—should take immediate steps to assess and mitigate compliance risks.
- Identify AI systems that fall under the regulation – Companies must determine whether their AI models are subject to the Act, based on their market presence or impact on EU individuals.
- Review AI products and services for prohibited practices – Ensuring compliance with the bans that are now legally enforceable is critical.
- Prepare for upcoming high-risk AI obligations – Businesses developing AI for sensitive sectors such as healthcare, finance, law enforcement, and recruitment should begin risk assessments and transparency preparations.
- Monitor EU regulatory guidance – The EU AI Office and national enforcement authorities will issue further clarifications on compliance expectations in the coming months.
- Conclusion
The EU AI Act is now in force, with prohibited AI practices already enforceable. Moreover, its broad application beyond the EU means that companies worldwide must assess their regulatory obligations. AI providers, deployers, and businesses that interact with the European market should take proactive steps now to ensure compliance, avoid regulatory risks, and align with emerging global AI governance standards.
For further legal guidance on how the AI Act may affect your business or assistance in navigating compliance requirements, please reach out.
By: Michael Mroczek, Attorney at Law (Switzerland)
Osamu Tosha, Attorney at Law
- Nozomi Sogo Attorneys at Law
Nozomi Sogo is a full-service law firm with offices in Tokyo and Los Angeles, providing leading Japanese and international businesses with a wide range of legal services, including corporate and M&A, business disputes, international arbitration, crisis management, white-collar criminal defense, antitrust, entertainment, and intellectual property. With speed, integrity, passion, and strong teamwork, we strive to help all of our clients realize “Nozomi,” Japanese for “Hope.”
https://www.nozomisogo.gr.jp/e/