AI Regulations: From the EU AI Act to the U.S. Patchwork
- Elle
- Dec 24, 2025
- 9 min read

Artificial intelligence is advancing faster than most people can keep up with. ChatGPT writes essays. AI generates realistic images and videos. Algorithms decide who gets job interviews, loan approvals, and medical diagnoses. Self-driving cars navigate city streets. Facial recognition systems identify people in crowds.
And governments around the world are trying to figure out: should we regulate this? And if so, how?
The EU says yes, with the world's first comprehensive AI law that went into effect in 2024. The U.S. says... maybe, kind of, it depends, with a patchwork of guidelines, executive orders, and state laws but no overarching federal regulation. China says absolutely, but primarily for reasons of state control and national security. Other countries are somewhere in between, debating what approach makes sense.
This isn't a simple story with obvious good guys and bad guys. There are legitimate arguments on multiple sides about whether AI regulations help or hurt innovation, protect people or stifle progress, prevent harm or create it. Let's break down what AI regulations actually are, where they exist, and why smart people disagree so strongly about whether they matter.
What Are AI Regulations?
AI regulations are rules, laws, or guidelines that govern how artificial intelligence systems can be developed, deployed, and used. They might address things like:
- What AI applications are allowed or banned (like facial recognition in public spaces or AI-powered social scoring systems)
- How AI systems must be tested and documented before deployment
- What information companies must disclose about how their AI works
- Who is liable if an AI system causes harm
- What data can be used to train AI models
- How to prevent bias and discrimination in AI decision-making
- Requirements for human oversight of AI systems in critical applications
These regulations can be comprehensive (covering all AI applications) or sector-specific (focused on particular uses like healthcare, finance, or law enforcement). They can be mandatory laws with penalties for violations, or voluntary frameworks that companies can choose to follow.
The EU AI Act: The First Comprehensive Framework
The European Union's AI Act, which entered into force in August 2024, is the world's first broad regulatory framework specifically for artificial intelligence.
The Act uses a risk-based approach, categorizing AI systems into four tiers (a toy code sketch of the taxonomy follows the list):
Unacceptable Risk (Banned): AI applications deemed too dangerous are prohibited entirely. This includes:
- AI systems that manipulate people's behavior in ways that cause harm
- Social scoring systems that classify people based on behavior or personal characteristics
- Real-time biometric identification in public spaces (with limited exceptions for law enforcement)
- Emotion recognition AI in schools and workplaces
- AI that exploits vulnerabilities of specific groups
High Risk (Heavily Regulated): AI systems used in critical areas face strict requirements. These include AI used in:
- Critical infrastructure (transportation, energy, water)
- Education and employment (determining school admissions or who gets hired)
- Law enforcement and border control
- Healthcare and medical devices
- Financial services
High-risk systems must undergo conformity assessments, maintain detailed documentation, ensure data quality, implement human oversight, and register in an EU database.
Limited Risk (Transparency Requirements): AI systems such as chatbots must disclose that they're AI. Deepfakes and AI-generated content must be labeled as such.
Minimal or No Risk: Most AI applications (like spam filters or video games) face no specific requirements beyond existing laws.
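To make the tier structure concrete, here's a toy Python sketch. The tier names track the Act, but the example applications and their assignments are simplified illustrations of ours, not legal classifications (real categorization requires case-by-case legal analysis):

```python
# Toy encoding of the AI Act's four risk tiers. Illustrative only:
# where a real system lands depends on legal analysis, not a lookup table.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment, documentation, human oversight, EU registration"
    LIMITED = "transparency obligations (disclose AI use, label generated content)"
    MINIMAL = "no AI-specific obligations beyond existing law"


# Hypothetical examples of how applications might map to tiers.
EXAMPLE_APPLICATIONS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "resume-screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for app, tier in EXAMPLE_APPLICATIONS.items():
    print(f"{app}: {tier.name} -> {tier.value}")
```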
The Act also created special rules for general-purpose AI models (like GPT-4 or Claude). High-impact models that might pose systemic risks must undergo thorough evaluations and report serious incidents to the European Commission. Implementation is phased. Some provisions started in February 2025 (bans on unacceptable-risk systems). The full framework becomes enforceable by August 2026, with some exceptions extending to 2027 and 2030.
Violations can result in fines up to €35 million or 7% of global annual turnover, whichever is higher.
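To see how that "whichever is higher" cap plays out, here's a minimal arithmetic sketch (the function name is ours, not from the Act):

```python
# Worked example of the AI Act's penalty cap for the most serious violations:
# the greater of EUR 35 million or 7% of global annual turnover.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# EUR 200M turnover: 7% is EUR 14M, so the EUR 35M floor applies.
print(max_fine_eur(200_000_000))    # 35000000.0
# EUR 1B turnover: 7% is EUR 70M, which exceeds the floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```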
The United States: A Patchwork Approach
The U.S. has no comprehensive federal AI law. Instead, it's using a combination of:
Executive Orders: In 2023, President Biden issued an executive order on "Safe, Secure, and Trustworthy AI" establishing guidelines and requiring federal agencies to develop sector-specific AI policies. In 2025, President Trump issued a new executive order emphasizing American AI dominance and a lighter regulatory touch, revoking Biden's order.
Sector-Specific Regulations: Existing laws governing healthcare, finance, employment, and other sectors already apply to AI used in those areas. For example, if an AI system discriminates in hiring, existing anti-discrimination laws apply.
Voluntary Frameworks: The National Institute of Standards and Technology (NIST) has published AI risk management frameworks that companies can choose to adopt.
State Laws: Individual states are passing their own AI regulations. Colorado enacted the first comprehensive state AI Act in 2024 (effective February 2026), requiring risk assessments and transparency for high-risk automated decision systems. California passed 18 different AI-related bills in 2024, addressing everything from deepfakes to AI in healthcare to required disclosure labels.
The U.S. approach reflects a philosophy of letting innovation flourish while using existing legal frameworks to address harms as they arise, rather than creating comprehensive pre-emptive regulation.
Other Countries' Approaches
United Kingdom: Like the U.S., the UK favors a sector-specific approach, regulating AI through existing frameworks rather than creating comprehensive new AI laws. The government has published principles and guidelines but hasn't enacted binding legislation.
China: Has taken an active regulatory approach focused on state control and national security. China's regulations require pre-approval of algorithms, mandate alignment with state values, and give authorities significant oversight of AI development. The emphasis is on maintaining social stability and government authority alongside fostering AI innovation for economic growth.
Canada: Proposed the Artificial Intelligence and Data Act (AIDA) in 2022 as part of Bill C-27, a risk-based framework similar in spirit to the EU's approach but tailored to Canadian priorities. The bill died on the order paper when Parliament was prorogued in early 2025, leaving Canada without a dedicated federal AI law for now.
Developing Countries: Many nations are watching what the EU, U.S., and China do before committing to their own approaches, recognizing that AI governance will likely converge around certain international standards.
The Case FOR AI Regulations
Supporters of AI regulation make several arguments:
1. Preventing Harm Before It Happens
AI systems can cause real damage. Biased algorithms have denied people loans, jobs, and housing based on race or gender. Faulty medical AI has misdiagnosed patients. Autonomous vehicles have killed pedestrians. Deepfakes have ruined reputations and been used for fraud.
Regulation advocates argue that waiting until people are harmed and then suing companies isn't enough. By the time courts settle a case, thousands more people might be affected. Proactive regulation can prevent systemic harms before they scale.
2. Addressing Power Imbalances
AI development is concentrated in a handful of major tech companies. Individual users have little ability to understand, challenge, or opt out of AI systems that affect their lives. Regulations can level the playing field by requiring transparency and accountability, and by giving people rights regarding AI decisions that affect them.
3. Maintaining Public Trust
If people don't trust AI systems, they won't use them, which could slow beneficial innovation. Clear regulations can build trust by ensuring AI systems meet safety and ethical standards. The argument is that good regulation enables innovation by creating a stable, trusted environment.
4. Preventing Race-to-the-Bottom Dynamics
Without regulation, companies might cut corners on safety to get products to market faster than competitors. Regulation creates a floor that everyone must meet, preventing dangerous shortcuts.
5. Democratic Governance of Powerful Technology
AI is reshaping society. Regulation advocates argue that such powerful technology should be subject to democratic oversight and accountable to elected representatives, not just corporate boards pursuing profit.
6. International Standards and Compatibility
The EU AI Act is likely to influence global AI governance. Companies operating internationally may find it easier to comply with one comprehensive standard rather than navigating dozens of conflicting national rules. Early regulation can shape those global standards.
The Case AGAINST AI Regulations
Critics of AI regulation make equally compelling arguments:
1. Stifling Innovation
AI is developing rapidly. Regulation, by its nature, is slow and backward-looking. Critics worry that heavy regulation will slow down beneficial AI development, causing real harm by delaying medical breakthroughs, safety improvements, and productivity gains.
The concern is especially acute for startups and smaller companies that lack the resources to navigate complex compliance requirements. If only big tech companies can afford AI regulation compliance, the result might be less competition and more concentration of power.
2. Regulatory Capture
Large companies often shape regulations in ways that benefit them and create barriers for competitors. Critics worry that AI regulations will be written by and for incumbent tech giants, entrenching their dominance and making it harder for new entrants to challenge them.
3. Impossible to Get Right
AI technology is changing so fast that any specific regulations will be obsolete before they're fully implemented. What made sense to regulate in 2024 might be irrelevant by 2026. Flexible, principles-based approaches might work better than detailed prescriptive rules.
4. Existing Laws Are Sufficient
If an AI system discriminates, existing anti-discrimination laws apply. If it violates privacy, privacy laws apply. If it causes injury, product liability laws apply. Critics argue we don't need new regulations when existing legal frameworks can address AI-related harms.
5. Global Competitive Disadvantage
If the U.S. or EU heavily regulates AI while China doesn't (or regulates for different purposes), companies in more regulated jurisdictions might fall behind in AI development. This creates national security concerns if adversaries develop superior AI capabilities.
6. Unintended Consequences
Regulations often have effects their creators didn't anticipate. Requirements meant to increase safety might reduce it by forcing companies to use less effective but easier-to-explain methods. Transparency requirements might reveal proprietary information, destroying competitive advantages and reducing incentives to innovate.
7. The Definition Problem
What exactly is "AI"? The term is so broad that meaningful regulation is difficult. A simple if-then rule technically qualifies as AI. So does ChatGPT. Regulating "AI" might be as meaningless as regulating "software" or "algorithms"—it's just too broad a category.
The Middle Ground: Smart Regulation
Many experts advocate for a middle path that acknowledges both the need for some guardrails and the dangers of heavy-handed regulation.
This might involve:
Risk-Based Approaches: Regulate AI applications based on their potential for harm rather than treating all AI the same. This is what the EU AI Act attempts to do.
Outcome-Based Rather Than Process-Based Rules: Focus on what AI systems must achieve (accuracy, non-discrimination, safety) rather than prescribing exactly how to build them.
Regulatory Sandboxes: Allow companies to test innovative AI in controlled environments with regulatory oversight before full deployment. This enables innovation while maintaining safety.
Sunset Provisions: Include automatic expiration dates for regulations, forcing periodic review and updates to keep pace with technology.
International Cooperation: Work toward harmonized international standards so companies don't face conflicting requirements in different jurisdictions.
Transparency Without Revealing Trade Secrets: Require disclosure about what AI does and how it affects people, without forcing companies to reveal proprietary algorithms.
Real-World Impact: What's Happening Now
The debate isn't theoretical. Real consequences are already emerging:
Companies are adapting: Many tech companies are treating the EU AI Act as a de facto global standard, building systems that comply with it even for non-EU markets because it's easier than maintaining separate versions.
Startups are concerned: Smaller companies worry about compliance costs. A 2024 survey found that many AI startups are delaying certain products or avoiding high-risk applications entirely because of regulatory uncertainty.
Enforcement is uncertain: The EU has created new enforcement bodies, but how aggressively they'll enforce the AI Act remains to be seen. The GDPR (EU's privacy law) was criticized for weak enforcement initially.
State laws are proliferating: Colorado's AI Act will likely serve as a template for other states. This could create a compliance nightmare for companies operating nationally.
Innovation continues: Despite regulatory concerns, AI development hasn't slowed. ChatGPT, Claude, and competing models keep improving. New AI applications keep launching. Whether this is despite regulation or because current regulation remains light is debated.
Why This Matters to You
Even if you're not building AI systems, AI regulations affect you because:
AI is making decisions about your life: Whether you realize it or not, algorithms influence what jobs you're offered, what loans you get approved for, what medical treatments are recommended, what content you see online, and more. Regulations determine whether you have rights to understand, challenge, or opt out of these decisions.
Innovation affects what's available: If regulations slow AI development, beneficial applications might arrive later or not at all. If regulations are too weak, unsafe or biased systems might cause harm.
Your data is involved: Many AI systems are trained on public data, potentially including your social media posts, photos, writings, or other content. Regulations determine whether companies need your permission and how they can use that data.
Democratic governance: These regulations represent society's attempt to steer a powerful technology toward beneficial uses and away from harmful ones. Your voice, through voting and public input, shapes those decisions.
The Bottom Line
AI regulations are attempts to govern how artificial intelligence is developed and used. They range from comprehensive frameworks like the EU AI Act to sector-specific rules to voluntary guidelines.
Whether these regulations are good or bad isn't a simple question. There are legitimate concerns about AI causing harm through bias, privacy violations, safety failures, and concentration of power. There are equally legitimate concerns about regulation stifling innovation, creating competitive disadvantages, and being too rigid for fast-moving technology.
The reality is probably that some regulation is necessary (few people argue for zero rules), but getting the details right is incredibly difficult. We're in an experimental phase where different jurisdictions are trying different approaches, and we won't know what works best for years.
What's certain is that AI is becoming more powerful and more integrated into daily life. How we regulate it, whether through comprehensive laws or light-touch frameworks, will shape what AI can do, who benefits from it, who gets hurt by it, and whether innovation thrives or stagnates.
The debate isn't about whether AI matters. It's about what governance approach will maximize benefits while minimizing harms, and reasonable people can disagree about where that balance lies.
Sources
European Commission. (2024). Artificial Intelligence Act. Retrieved from https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
European Parliament. (2025). EU AI Act: first regulation on artificial intelligence. Retrieved from https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Congressional Research Service. (2025). Regulating Artificial Intelligence: U.S. and International Approaches. Retrieved from https://www.congress.gov/crs-product/R48555
Brookings Institution. (2024). The EU and U.S. diverge on AI regulation: A transatlantic comparison. Retrieved from https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/
White & Case. (2025). AI Watch: Global regulatory tracker. Retrieved from https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-european-union
Anecdotes AI. (2025). AI Regulations in 2025: US, EU, UK, Japan, China & More. Retrieved from https://www.anecdotes.ai/learn/ai-regulations-in-2025-us-eu-uk-japan-china-and-more
Smith Anderson Law. (2025). The Future of AI Compliance—Preparing for New Global and State Laws. Retrieved from https://www.smithlaw.com/newsroom/publications/the-future-of-ai-compliance-preparing-for-new-global-and-state-laws