EU AI Act Compliance: What US Companies Need to Know
If your mid-market company sells products or services to European customers, uses AI tools that process data from EU residents, or deploys AI systems whose outputs reach the European Union, the EU AI Act likely applies to you. This is not a distant regulatory concern reserved for Big Tech. The European Union's Artificial Intelligence Act has extraterritorial reach by design, and US companies in the $5M to $100M revenue range are squarely in its crosshairs.
The EU AI Act entered into force on August 1, 2024, with obligations phasing in through 2027. For US mid-market companies, the window to prepare is narrowing. This guide breaks down who the Act applies to, what it requires, and how to build a practical compliance strategy without a dedicated regulatory team.
Does the EU AI Act Apply to US Companies?
Yes. The EU AI Act applies to organizations outside the European Union when the output of their AI systems is used within the EU. This extraterritorial scope is modeled after the GDPR, and it catches more US companies than most realize.
Specifically, the Act applies to you if:
- You place AI systems on the EU market — meaning you sell, distribute, or make available AI-powered products or services to customers in the EU, regardless of where your company is headquartered.
- You deploy AI systems whose output is used in the EU — even if the AI system itself runs on US-based infrastructure, if its outputs (decisions, recommendations, classifications) affect people or processes in the EU, you are in scope.
- You are a provider or deployer of AI systems — the Act distinguishes between providers (those who develop or commission AI systems) and deployers (those who use AI systems in a professional capacity). Both carry obligations.
For a mid-market SaaS company with European customers, a manufacturing firm with EU supply chain partners, or a professional services company serving multinational clients, this is not hypothetical. If you have not yet assessed your exposure, an AI governance program is the logical starting point for mapping your AI systems against these regulatory triggers.
The Risk Classification Framework
The EU AI Act takes a risk-based approach, categorizing AI systems into four tiers. Your compliance obligations depend entirely on which tier your AI systems fall into.
Unacceptable Risk (Prohibited)
These AI practices are banned outright in the EU. Prohibitions took effect on February 2, 2025, and include:
- Social scoring systems used by governments or private entities to evaluate trustworthiness
- Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions)
- AI systems that manipulate human behavior through subliminal techniques causing harm
- Exploitation of vulnerabilities related to age, disability, or social or economic situation
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
- Emotion recognition systems in workplaces and educational institutions
Most mid-market US companies will not be operating prohibited AI systems. But if you use AI-powered employee monitoring tools or behavioral analytics platforms, review them carefully against these categories.
High Risk
This is where the heaviest compliance burden falls, and where most mid-market companies need to pay attention. High-risk AI systems include those used in:
- Employment and worker management — AI used in recruitment, hiring decisions, performance evaluation, task allocation, or termination decisions
- Credit and insurance scoring — AI systems that assess creditworthiness or set insurance premiums
- Critical infrastructure management — AI in energy, water, transportation, or digital infrastructure
- Education and vocational training — AI that determines access to education or evaluates student performance
- Law enforcement and border control — AI used for risk assessments, polygraphs, or evidence evaluation
- Biometric identification and categorization — Remote biometric identification systems (other than those that are prohibited)
If your company uses AI-powered HR tools, automated underwriting systems, or AI-driven infrastructure monitoring with EU touchpoints, these systems likely qualify as high risk. The obligations are substantial: risk management systems, data governance, technical documentation, transparency measures, human oversight, accuracy and robustness requirements, and a conformity assessment before placing the system on the market.
Limited Risk
AI systems with limited risk carry transparency obligations. Users must be informed that they are interacting with an AI system. This applies to:
- Chatbots and conversational AI
- AI-generated content (deepfakes, synthetic media)
- Emotion recognition systems (those not prohibited)
- Biometric categorization systems (those not prohibited)
If your company deploys customer-facing chatbots or uses generative AI to produce content for EU audiences, you need clear disclosure mechanisms.
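As a minimal sketch of what a disclosure mechanism might look like in a chatbot integration (the function name and message wording below are our illustration; the Act requires that users be informed, not any particular phrasing):

```python
# Minimal sketch: surface an AI-interaction disclosure on the first turn
# of a chatbot conversation. The wording and function name are our
# illustration; the Act requires disclosure, not specific phrasing.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "Responses are generated automatically."
)

def wrap_chatbot_reply(reply: str, first_turn: bool) -> str:
    """Prepend the disclosure when the conversation starts."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```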
Minimal Risk
AI systems that pose minimal risk — such as spam filters, AI-enabled video games, or inventory management systems — face no specific obligations under the Act beyond voluntary codes of conduct. However, general-purpose AI models (like large language models) have their own separate set of requirements regardless of risk tier.
General-Purpose AI Model Obligations
The EU AI Act created a dedicated framework for general-purpose AI (GPAI) models, which took effect on August 2, 2025. If your company fine-tunes, deploys, or integrates foundation models or large language models, pay attention.
All GPAI model providers must:
- Maintain up-to-date technical documentation
- Provide information and documentation to downstream providers integrating the model
- Establish policies to comply with EU copyright law
- Publish a sufficiently detailed summary of training data content
GPAI models with systemic risk (those trained with more than 10^25 FLOPs of compute, or designated by the European Commission) carry additional obligations including model evaluations, adversarial testing, cybersecurity protections, and serious incident reporting.
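For illustration, the compute trigger is a straightforward numeric threshold. A rough screening check might look like the following sketch, where the variable names are our own and Commission designation can apply independently of compute:

```python
# Illustrative screening check against the Act's systemic-risk compute
# threshold. Names are our own; Commission designation applies regardless
# of training compute.
SYSTEMIC_RISK_FLOPS = 10**25

def has_systemic_risk(training_flops: float, commission_designated: bool = False) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOPS or commission_designated
```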
Most mid-market companies are deployers rather than providers of GPAI models. But your obligations still include understanding which models you use, how they are integrated into your workflows, and whether your AI vendor risk assessments adequately cover your supply chain exposure. Tracking which AI models power which business processes is exactly the kind of visibility that Radius360 is built to provide — mapping AI systems to regulatory requirements across your technology stack.
Key Compliance Obligations for High-Risk AI Systems
If any of your AI systems fall into the high-risk category, here is what the EU AI Act requires. These obligations apply to standalone (Annex III) high-risk systems starting August 2, 2026, with AI embedded in products covered by existing EU product safety legislation following on August 2, 2027.
Risk Management System
You must establish, implement, and maintain a continuous risk management system throughout the AI system's lifecycle. This includes identifying and analyzing known and foreseeable risks, estimating and evaluating risks that may emerge, and adopting risk mitigation measures. If you have already built an AI risk management program, you are ahead of the curve.
Data Governance
Training, validation, and testing datasets must meet quality criteria. You need documented data governance practices covering data collection processes, data preparation operations, relevant assumptions, prior assessments of data availability and suitability, and bias detection and mitigation.
Technical Documentation and Record-Keeping
High-risk AI systems require detailed technical documentation demonstrating compliance. You must also maintain automatic logging capabilities that record system events, inputs, outputs, and operational parameters for traceability.
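What that logging can look like in practice: a minimal sketch that appends one JSON record per inference, assuming a JSON-lines sink. The field names are illustrative; the Act requires traceability, not this particular schema.

```python
import json
import time
import uuid

def log_inference_event(sink, model_id: str, inputs: dict,
                        outputs: dict, parameters: dict) -> None:
    """Append one traceable inference record as a JSON line.
    Field names are illustrative; the Act requires traceability,
    not this particular schema."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "inputs": inputs,
        "outputs": outputs,
        "parameters": parameters,
    }
    sink.write(json.dumps(record) + "\n")
```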
Transparency and Information to Deployers
High-risk AI systems must be designed to be sufficiently transparent for deployers to interpret outputs and use the system appropriately. Instructions for use must include the provider's identity, system characteristics, performance metrics, known limitations, and human oversight measures.
Human Oversight
High-risk AI systems must be designed to allow effective human oversight. This means humans must be able to understand the system's capabilities and limitations, monitor operation, interpret outputs, and intervene or override the system when necessary.
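A common implementation pattern is a review gate that holds low-confidence or high-impact decisions for a human before they take effect. A sketch, with the threshold and the notion of "high impact" as assumptions your governance team would define:

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative; set via your risk assessment

def route_decision(prediction: str, confidence: float,
                   high_impact: bool, review_queue: list) -> str | None:
    """Return the prediction if it may proceed automatically; otherwise
    queue it for human review and return None. The threshold and the
    definition of 'high impact' are assumptions for your governance
    team to pin down."""
    if high_impact or confidence < CONFIDENCE_THRESHOLD:
        review_queue.append({"prediction": prediction, "confidence": confidence})
        return None
    return prediction
```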
Accuracy, Robustness, and Cybersecurity
High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity. This includes resilience against errors, faults, adversarial attacks, and data poisoning. For cybersecurity leaders, this is where AI governance directly intersects with your existing security program.
EU AI Act Compliance Timeline
The Act phases in over a multi-year period. Here are the dates that matter:
- February 2, 2025 — Prohibitions on unacceptable-risk AI practices take effect
- August 2, 2025 — Obligations for GPAI models take effect; governance structure (AI Office, AI Board) becomes operational
- August 2, 2026 — Most provisions take effect, including obligations for standalone high-risk AI systems listed in Annex III (such as HR tools and credit scoring)
- August 2, 2027 — High-risk AI system obligations take full effect for AI embedded in products regulated under existing EU product safety legislation
For US mid-market companies, the practical timeline is tighter than these dates suggest. Building the governance infrastructure, conducting AI system inventories, performing risk assessments, and implementing technical controls takes 12 to 18 months for most organizations. If you have not started, the time to begin is now.
Penalties for Non-Compliance
The EU AI Act carries significant financial penalties, scaled to the severity of the violation:
- Prohibited AI practices: Up to 35 million euros or 7% of global annual turnover, whichever is higher
- High-risk AI system violations: Up to 15 million euros or 3% of global annual turnover
- Supplying incorrect information to authorities: Up to 7.5 million euros or 1% of global annual turnover
For SMEs and startups, the Act provides proportionate caps, but the penalties remain substantial relative to mid-market revenue. More importantly, non-compliance creates market access risk. If your AI systems cannot meet EU requirements, you may lose the ability to serve EU customers entirely.
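To make the "whichever is higher" structure concrete, here is a quick arithmetic sketch. The caps and percentages come from the Act; the 50M EUR turnover figure is a made-up example, and the SME proportionality rules mentioned above are set aside for simplicity:

```python
def max_penalty(cap_eur: float, pct_of_turnover: float, turnover_eur: float) -> float:
    """Fines under the Act are the higher of a fixed cap and a
    percentage of global annual turnover (SME rules aside)."""
    return max(cap_eur, pct_of_turnover * turnover_eur)

# Example: a company with 50M EUR global turnover deploys a prohibited system.
# max(35_000_000, 0.07 * 50_000_000) -> the 35M fixed cap dominates at
# most mid-market revenue levels.
print(max_penalty(35_000_000, 0.07, 50_000_000))  # 35000000
```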
How the EU AI Act Intersects with US Frameworks
US companies do not operate in a regulatory vacuum when it comes to AI. Several US frameworks align with — but do not perfectly map to — the EU AI Act. Understanding these intersections helps you build a compliance program that satisfies multiple regulatory requirements simultaneously.
NIST AI Risk Management Framework (AI RMF)
The NIST AI RMF is the closest US analog to the EU AI Act's risk management requirements. Its four core functions — Govern, Map, Measure, and Manage — provide a structure for identifying AI risks, assessing their impact, and implementing controls. If you have adopted the NIST AI RMF, you have a strong foundation for EU AI Act compliance, though gaps remain around the Act's specific conformity assessment and registration requirements.
State-Level AI Legislation
Colorado's AI Act (SB 24-205), originally slated to take effect in February 2026 and since delayed to June 30, 2026, imposes obligations on deployers of high-risk AI systems that parallel several EU AI Act requirements, including impact assessments, risk management, and transparency. Illinois, Texas, and other states have introduced or enacted AI-focused legislation. Building a unified regulatory compliance program that addresses both EU and emerging US state requirements is far more efficient than treating each regulation separately.
Federal AI Executive Orders and Agency Actions
While the US federal approach to AI regulation remains fragmented compared to the EU's comprehensive framework, sector-specific guidance from agencies like the FTC, EEOC, CFPB, and FDA creates obligations that overlap with EU AI Act requirements in areas like employment, consumer protection, and healthcare. A strong AI governance foundation addresses the common core of all these requirements.
SEC and Financial Reporting
For publicly traded mid-market companies or those preparing for capital events, AI-related risks and regulatory compliance are increasingly material disclosure topics. Your AI governance program directly feeds into risk reporting and audit readiness.
Building Your EU AI Act Compliance Strategy
For a mid-market company without a dedicated AI compliance team, here is a practical roadmap. This is not legal advice — consult qualified legal counsel for jurisdiction-specific guidance. This is a practitioner's framework for getting organized.
Step 1: Inventory Your AI Systems
You cannot comply with regulations you do not understand, and you cannot assess risk on AI systems you do not know about. Start with a comprehensive inventory of every AI system your company develops, deploys, procures, or integrates. Include third-party AI embedded in SaaS tools, vendor platforms, and internal automation. The shadow AI problem is real — most organizations significantly undercount their AI footprint.
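A workable inventory can start as one structured record per system. The schema below is our suggested starting point, not a regulatory requirement:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI system inventory. A suggested starting point,
    not a mandated schema."""
    name: str
    owner: str                    # accountable business owner
    vendor: str                   # "" for internally built systems
    purpose: str                  # what the system decides or produces
    role: str                     # "provider" or "deployer"
    embedded_in: str              # host product, SaaS tool, or workflow
    data_sources: list[str] = field(default_factory=list)
    eu_touchpoints: list[str] = field(default_factory=list)
```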
Step 2: Assess EU Touchpoints
For each AI system in your inventory, determine whether its outputs affect EU residents, whether it processes data from EU sources, and whether it is available to EU customers. This determines whether the EU AI Act applies to that specific system.
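This assessment reduces to a few yes/no questions per system. A screening sketch that mirrors the scope triggers described earlier (a heuristic, not legal advice):

```python
def in_eu_scope(outputs_affect_eu_residents: bool,
                processes_eu_data: bool,
                available_to_eu_customers: bool) -> bool:
    """Screening heuristic mirroring the scope triggers above; any one
    'yes' puts the system in scope. Not legal advice."""
    return (outputs_affect_eu_residents
            or processes_eu_data
            or available_to_eu_customers)
```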
Step 3: Classify Risk Tiers
Map each in-scope AI system to the Act's risk classification tiers. Focus your compliance investment on high-risk systems first. Our AI governance checklist provides a structured framework for working through this classification exercise.
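Classification can begin as a simple lookup from use-case category to tier, refined with legal review. The mapping below is deliberately simplified and uses a handful of the categories discussed above; edge cases belong with counsel:

```python
# Simplified mapping from use-case category to EU AI Act tier. Real
# classification requires legal review against Article 5 and Annex III.
RISK_TIERS = {
    "social_scoring": "prohibited",
    "workplace_emotion_recognition": "prohibited",
    "recruitment_screening": "high",
    "credit_scoring": "high",
    "critical_infrastructure_monitoring": "high",
    "customer_chatbot": "limited",
    "ai_generated_content": "limited",
    "spam_filter": "minimal",
    "inventory_forecasting": "minimal",
}

def classify(use_case: str) -> str:
    # unknown systems surface as "unclassified" so they get reviewed
    return RISK_TIERS.get(use_case, "unclassified")
```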
Step 4: Gap Assessment
Compare your current controls, documentation, and governance practices against the Act's specific requirements for each risk tier. Identify gaps in risk management, data governance, technical documentation, transparency, human oversight, and cybersecurity.
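At its core, a gap assessment is a set difference: required controls minus implemented controls. A sketch using shorthand names for the obligation areas above:

```python
REQUIRED_HIGH_RISK_CONTROLS = {
    "risk_management_system", "data_governance", "technical_documentation",
    "automatic_logging", "transparency_to_deployers", "human_oversight",
    "accuracy_and_robustness", "cybersecurity",
}

def control_gaps(implemented: set[str]) -> set[str]:
    """Controls still missing for a high-risk system. Names are
    shorthand for the obligation areas described above."""
    return REQUIRED_HIGH_RISK_CONTROLS - implemented

# Example: logging and security exist, the rest do not.
print(sorted(control_gaps({"automatic_logging", "cybersecurity"})))
```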
Step 5: Remediation Roadmap
Prioritize gaps based on the compliance timeline and the severity of potential penalties. Build a phased remediation plan that aligns with your budget and operational capacity. For companies without a full-time CISO, a virtual CISO can lead this effort, providing the strategic oversight needed without the cost of a permanent executive hire.
Step 6: Implement Ongoing Governance
EU AI Act compliance is not a one-time project. The Act requires continuous risk management, regular monitoring, and updated documentation throughout the AI system lifecycle. Establish governance processes, assign responsibilities, and implement tooling — like Radius360 — to maintain visibility into your AI systems, track compliance status, and manage regulatory requirements as they evolve.
What This Means for Your Cybersecurity Program
The EU AI Act's cybersecurity requirements for high-risk AI systems are not separate from your existing security program. They are an extension of it. The Act requires resilience against adversarial attacks, data integrity protections, access controls, and incident response capabilities — all things your cybersecurity program should already address.
The opportunity for security leaders is to position AI governance as a natural evolution of your cybersecurity program rather than a separate compliance silo. The same risk management discipline, control frameworks, and audit practices that drive your security program can be adapted to meet AI regulatory requirements.
If you are unsure where your organization stands, a free cybersecurity assessment can help identify gaps in your current security posture that would also affect EU AI Act readiness.
Does the EU AI Act apply to small and mid-market US companies?
Yes. The EU AI Act applies based on where AI system outputs are used, not on company size. If your company sells products or services to EU customers, processes data from EU residents using AI, or deploys AI systems whose decisions affect people in the EU, the Act applies to you regardless of your revenue or employee count. The Act does include proportionate penalty caps for SMEs, but the compliance obligations themselves are the same. Mid-market companies with EU-facing operations should assess their exposure now rather than assuming the Act only targets large technology companies.
What are the biggest penalties under the EU AI Act?
The maximum penalty for deploying a prohibited AI system is 35 million euros or 7% of total worldwide annual turnover, whichever is higher. For violations related to high-risk AI system requirements, fines can reach 15 million euros or 3% of global turnover. Providing incorrect, incomplete, or misleading information to regulatory authorities carries fines of up to 7.5 million euros or 1% of global turnover. Beyond financial penalties, non-compliance can result in AI systems being pulled from the EU market, creating significant business disruption and reputational damage.
How does the EU AI Act differ from US AI regulations?
The EU AI Act is a comprehensive, horizontal regulation that applies across all sectors, while US AI regulation remains a patchwork of federal guidance, executive orders, and state-level legislation. The EU Act mandates a risk classification system with specific obligations for each tier, requires conformity assessments for high-risk AI, and creates centralized enforcement through national competent authorities and the EU AI Office. In contrast, the US approach relies on sector-specific agency guidance (FTC for consumer protection, EEOC for employment, etc.) and voluntary frameworks like the NIST AI RMF. Companies operating in both markets need a compliance strategy that addresses both approaches.
When do US companies need to be compliant with the EU AI Act?
The compliance timeline depends on what type of AI systems you operate. Prohibited AI practices were banned as of February 2, 2025. GPAI model obligations took effect August 2, 2025. Most high-risk AI system obligations, including those for standalone Annex III systems such as HR and credit scoring AI, take effect August 2, 2026, with high-risk AI embedded in products covered by existing EU product safety legislation following on August 2, 2027. However, building the necessary governance infrastructure, documentation, and controls typically takes 12 to 18 months. Companies that have not started should begin their compliance program immediately to avoid a rushed and costly implementation.
Can existing cybersecurity frameworks help with EU AI Act compliance?
Absolutely. Existing cybersecurity frameworks like NIST CSF, ISO 27001, and SOC 2 provide a strong foundation for meeting the EU AI Act's cybersecurity and risk management requirements. The Act requires high-risk AI systems to be resilient against adversarial attacks, maintain data integrity, implement access controls, and support incident response — all controls that mature cybersecurity programs already address. The NIST AI RMF specifically maps well to the Act's risk management requirements. Companies with established cybersecurity programs should extend their existing controls to cover AI-specific risks rather than building a separate compliance program from scratch. This integrated approach is more efficient and more sustainable over time.