EU AI Act for SaaS founders — what 2026 looks like
The EU AI Act is the most comprehensive AI regulation in any major market. It's law as of August 2024 with obligations phasing in through 2027 — and the penalty regime is more severe than GDPR's. Most founders building AI-touched SaaS apps will land in the "limited-risk" or "transparency" tier, which means a manageable disclosure-and-labeling regime. A minority will hit high-risk obligations that materially change what they can ship. Here's the operative framework.
Timeline and current status
The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024, with phased implementation:
- February 2, 2025 — Prohibited AI practices (Art. 5) become enforceable.
- August 2, 2025 — Obligations for general-purpose AI (GPAI) providers + governance framework + penalty regime activate.
- August 2, 2026 — High-risk AI system obligations + transparency obligations (Art. 50) for deployers and providers become enforceable.
- August 2, 2027 — Obligations for AI systems used as safety components of regulated products (e.g., medical devices, vehicles).
We're now past the prohibited-practices and GPAI deadlines. The single biggest milestone for typical SaaS founders is August 2026 — that's when Article 50 transparency requirements take effect for any AI-touched app sold to EU users.
The four risk tiers
The AI Act uses a risk-based framework with four tiers, each with different obligations:
1. Prohibited (Article 5)
AI uses banned outright in the EU. Most SaaS founders won't touch these, but knowing the list matters because the penalties are the highest (up to €35M or 7% of global turnover):
- Subliminal manipulation, exploitation of vulnerabilities of specific groups
- Social scoring (the final text bans this by private actors as well as public authorities)
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions)
- Emotion recognition in the workplace and educational settings (except for medical or safety reasons)
- Untargeted scraping of facial images from the internet or CCTV to build facial-recognition databases
- Predictive policing based solely on profiling
- Biometric categorization to infer race, political opinions, religion, sexual orientation
2. High-risk (Article 6 + Annex III)
AI systems used in specific contexts that create significant risk to health, safety, or fundamental rights. The list includes:
- Employment, worker management, access to self-employment (CV screening, performance evaluation)
- Education and vocational training (admission scoring, exam evaluation)
- Essential private and public services (credit scoring, life/health insurance risk pricing, triage of emergency calls)
- Law enforcement (excluding the Article 5 prohibitions)
- Migration, asylum, border control management
- Administration of justice and democratic processes
- Critical infrastructure (digital infrastructure, road traffic, water, gas, electricity)
- Safety components of products already covered by EU harmonization legislation (medical devices, vehicles, machinery)
High-risk systems trigger heavy obligations: risk management system, data governance, technical documentation, record-keeping, human oversight, accuracy/robustness/cybersecurity standards, conformity assessment, and registration in the EU AI database. If you're building in any of these verticals — even a small employment screening tool, an automated tutoring app, or a credit-decision feature — you need to evaluate Annex III carefully. A self-screening sketch follows.
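If you want a structured way to run that evaluation internally, something like the following sketch works as a first pass. The category list paraphrases Annex III, and the helper is hypothetical scaffolding for your own review notes, not a legal test; treat a "yes" as a trigger to involve counsel, not as a classification.

```typescript
// Illustrative Annex III self-screening checklist. Category names
// paraphrase Annex III; the helper is hypothetical scaffolding for
// internal review notes, not a legal test.

const ANNEX_III_AREAS = [
  "biometrics",
  "critical infrastructure",
  "education and vocational training",
  "employment and worker management",
  "essential private and public services (incl. credit, insurance)",
  "law enforcement",
  "migration, asylum and border control",
  "administration of justice and democratic processes",
] as const;

type AnnexIIIArea = (typeof ANNEX_III_AREAS)[number];

interface FeatureAssessment {
  feature: string;                  // e.g. "CV ranking in recruiter dashboard"
  touchesArea: AnnexIIIArea | null; // null = no Annex III context identified
  rationale: string;                // why you believe it's in or out of scope
}

// Any touched area should trigger legal review, not self-certification.
function needsLegalReview(assessments: FeatureAssessment[]): boolean {
  return assessments.some((a) => a.touchesArea !== null);
}
```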
3. Limited-risk / Transparency (Article 50)
This is where most SaaS apps land. Four specific transparency obligations apply, each tied to a particular AI capability (they apply in addition to high-risk obligations where a system triggers both):
- Art. 50(1) — AI systems interacting with natural persons must disclose that the person is dealing with an AI (unless it's obvious from context). Chatbots, AI agents, virtual assistants — all need a clear disclosure on first interaction.
- Art. 50(2) — GPAI / generative AI providers must mark outputs as artificially generated or manipulated in a machine-readable format. Watermarking and content-credentials standards are emerging here (C2PA is the most cited).
- Art. 50(3) — Emotion recognition or biometric categorization deployers must inform exposed individuals.
- Art. 50(4) — Deepfakes and other AI-generated/manipulated content must be labeled as such, with narrow exceptions for clearly artistic, satirical, or creative works.
4. Minimal risk (everything else)
AI systems that don't fall into the categories above (spam filters, recommendation engines that don't profile individuals harmfully, most internal tooling). No specific AI Act obligations — though other regimes (GDPR for personal data, EAA for accessibility, etc.) still apply.
What this means for typical AI-touched SaaS apps
Most apps built with Lovable / Cursor / Bolt / Replit will land in the limited-risk tier. The concrete obligations:
- If your app has any AI chatbot or AI agent that users interact with — even a help widget — you need a disclosure ("You're chatting with an AI") on the first interaction or via a clear UI affordance. A minimal sketch follows this list.
- If your app generates images, videos, audio, or text via AI and presents them to users, you need machine-readable provenance markers on those outputs. C2PA Content Credentials is the most widely adopted standard.
- If your app does AI-based emotion analysis or biometric categorization (for example, sentiment analysis on user-submitted video), you need disclosure to the affected individuals.
- If your app handles AI-generated content created elsewhere (uploads, embeds), you have downstream labeling obligations under Art. 50(4).
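Here's a minimal sketch of the first obligation, assuming a React frontend. The component name, copy, and dismissal behavior are illustrative; the Act asks for a clear disclosure on first interaction, not this exact pattern.

```tsx
// Minimal Art. 50(1) disclosure for a chat widget (React). Component
// name, copy, and dismissal behavior are illustrative; the Act asks
// for a clear disclosure on first interaction, not this exact pattern.
import { useState } from "react";

export function AiDisclosureBanner() {
  const [dismissed, setDismissed] = useState(false);
  if (dismissed) return null;
  return (
    <div role="status" aria-live="polite">
      <p>You're chatting with an AI assistant, not a human.</p>
      <button onClick={() => setDismissed(true)}>Got it</button>
    </div>
  );
}
```

Plain text rendered before or alongside the first AI message satisfies the spirit of the rule; the point is that a reasonable user can't miss it.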
GPAI provider obligations (if you're building your own model)
If you're a SaaS founder, you're almost certainly a deployer of GPAI, not a provider. The provider obligations (technical documentation, training-data summaries, copyright-respect policies) apply to OpenAI, Anthropic, Google, Meta, Mistral, etc. — not to you when you call their APIs.
However: if you fine-tune a model, distribute weights, or build a meaningfully-modified version of a GPAI, you may become a provider yourself under the Act's definitions. The threshold isn't fully settled in regulator guidance, but conservative practice for any operator who's training or fine-tuning is to maintain documentation as if you're a provider.
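A lightweight way to hold that line, if you are fine-tuning, is to keep one record per tuned model. The field names below are our own convention, not a schema from the Act or the AI Office:

```typescript
// Hypothetical internal record for a fine-tuned model. Field names are
// our own convention, not a schema from the Act or the AI Office.
interface FineTuneRecord {
  baseModel: string;             // e.g. an open-weights model you tuned
  baseModelProvider: string;     // upstream GPAI provider
  trainingDataSummary: string;   // sources, licensing, personal-data status
  intendedPurpose: string;
  modificationsFromBase: string; // what the fine-tune changes
  evaluations: string[];         // accuracy / safety checks you ran
  releasedOn: string;            // ISO date the tuned model went live
}
```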
Penalty regime
The AI Act's penalty tiers are stricter than GDPR's (each cap is the fixed amount or the percentage of global annual turnover, whichever is higher):
- Up to €35M or 7%: prohibited practices (Art. 5)
- Up to €15M or 3%: most other violations, including high-risk and transparency obligations
- Up to €7.5M or 1%: supplying incorrect or misleading information to authorities
SMEs and startups get a proportional reduction (the lower of the two amounts applies). The AI Office (newly established 2024) coordinates enforcement across member-state authorities; the first significant cases are expected mid-to-late 2026.
Related: GDPR vs CCPA — when each applies to your app →
Practical compliance for the limited-risk tier
If your app falls in the limited-risk tier (most do), the work is modest:
- Add an AI disclosure to every AI-interactive surface. Plain text ("This is an AI assistant") is fine; over-engineering doesn't help.
- If you serve AI-generated images/audio/video, attach C2PA Content Credentials or equivalent machine-readable provenance.
- Document internally which AI systems you deploy, what they're used for, what data they process, and what the user-facing disclosures look like. The Act doesn't require you to publish this, but EU regulators may request it. A sketch of a register entry follows this list.
- Keep records of your assessment that you're in the limited-risk tier (not high-risk). If your use case is in or adjacent to Annex III, document why your specific implementation falls outside.
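Here's what such a register entry might look like. The shape is our own convention; the Act cares about the substance (system, purpose, data, disclosures), not this exact structure.

```typescript
// Sketch of a deployer-side AI-systems register entry. The shape is our
// own convention; the Act cares about the substance, not this structure.
interface DeployedAiSystem {
  name: string;                    // e.g. "support chat assistant"
  upstreamProvider: string;        // e.g. "OpenAI (API)"
  purpose: string;
  personalDataProcessed: string[];
  riskTier: "minimal" | "limited" | "high";
  tierRationale: string;           // why you believe the classification holds
  userFacingDisclosures: string[]; // where Art. 50 notices appear in the UI
  lastReviewed: string;            // ISO date
}

const register: DeployedAiSystem[] = [
  {
    name: "support chat assistant",
    upstreamProvider: "OpenAI (API)",
    purpose: "answer product questions in the help widget",
    personalDataProcessed: ["email", "chat transcripts"],
    riskTier: "limited",
    tierRationale: "conversational assistant; no Annex III context",
    userFacingDisclosures: ["banner shown on first chat interaction"],
    lastReviewed: "2026-01-15",
  },
];
```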
Total effort for a typical AI-touched SaaS: 1–3 days of work, mostly UX/copy and documentation. Don't confuse this with high-risk obligations — those are an order of magnitude more work and require formal registration.
Bottom line
Most SaaS founders will land in the limited-risk tier of the AI Act, which means manageable transparency obligations — AI disclosures, content provenance markers, deepfake labeling. The August 2026 deadline gives you time to ship these before they're enforceable. The bigger risk is misclassification: founders building employment, education, or credit-scoring AI tools sometimes assume they're in the limited-risk tier when they're actually in high-risk territory. If your use case is anywhere near Annex III, get a definitive analysis from a privacy/regulatory attorney early; high-risk classification has consequences that meaningfully shape what you can ship in the EU. None of this is legal advice. For a jurisdictional analysis specific to your product, talk to an EU regulatory attorney.
Common questions
Does the AI Act apply if my company is based outside the EU?
Yes, on the same extraterritorial model as GDPR. The AI Act applies to providers and deployers whose AI systems are placed on the EU market or whose output is used in the EU. A US-based SaaS with EU users is in scope.
Do I have to register my AI system in the EU AI database?
Only high-risk systems require registration. Limited-risk systems do not — you just need the transparency disclosures. If you're not sure which tier your system falls in, that's the question to answer first.
What's C2PA Content Credentials and is it mandatory?
C2PA (Coalition for Content Provenance and Authenticity) is an open standard for machine-readable content provenance — tags embedded in image/video/audio metadata that identify the creating tool, edits, and verification chain. The AI Act doesn't mandate C2PA specifically, but it's the most widely adopted standard for the "machine-readable" provenance requirement, and adopting it now reduces compliance risk.
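For orientation, here's roughly what the relevant piece of a C2PA manifest looks like: an actions assertion whose digitalSourceType flags AI-generated media. This is a simplified sketch of the structure; in practice you'd sign and embed manifests with a maintained C2PA SDK rather than hand-building JSON.

```typescript
// Simplified sketch of the relevant piece of a C2PA manifest: an actions
// assertion whose digitalSourceType flags AI-generated media. In practice,
// sign and embed manifests with a maintained C2PA SDK, not hand-built JSON.
const aiGeneratedAssertion = {
  label: "c2pa.actions",
  data: {
    actions: [
      {
        action: "c2pa.created",
        // IPTC digital source type for media produced by a trained model:
        digitalSourceType:
          "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
        softwareAgent: "your-app/1.0", // illustrative tool identifier
      },
    ],
  },
};
```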
If I use OpenAI's API, am I a provider or deployer?
Deployer. OpenAI is the provider of the underlying GPAI model. Your obligations as a deployer are far lighter than provider obligations — primarily the Article 50 transparency requirements relevant to your application.