EU AI Act Compliance for SaaS Companies: What You Need to Do Now

A practical guide to the EU AI Act's risk categories, compliance obligations for SaaS companies, transparency requirements, and timeline for enforcement starting in 2025.

November 3, 2025 · 9 min read · ShipSafer Team

The EU AI Act entered into force on August 1, 2024, with a phased implementation schedule running through 2027. For SaaS companies building AI-powered products and serving EU customers, this regulation imposes concrete technical and organizational obligations that go significantly beyond existing GDPR requirements.

This guide covers the risk classification framework, obligations by tier, what "high-risk" means in practice for SaaS use cases, transparency requirements, and a practical compliance roadmap.

The Risk Classification Framework

The EU AI Act categorizes AI systems into four risk levels, each with different compliance obligations.

Unacceptable Risk (Prohibited Systems)

Certain AI applications are prohibited entirely, effective February 2025:

  • Social scoring systems: AI that assigns trustworthiness scores to individuals based on social behavior and uses them for differential treatment in unrelated contexts
  • Real-time biometric surveillance: Remote real-time biometric identification systems in public spaces for law enforcement (with narrow exceptions)
  • Subliminal manipulation: AI that exploits psychological vulnerabilities to influence behavior in ways users cannot recognize or resist
  • Emotion recognition in workplaces and education: Inferring employees' or students' emotional states for decision-making purposes
  • Predictive policing: AI that predicts individuals' likelihood of committing crimes based on profiling

Most SaaS companies will not build systems in this category. If any of your features come close to these descriptions, consult legal counsel immediately.

High-Risk AI Systems

This is where most compliance complexity lies for SaaS companies. High-risk systems fall into two groups: AI used as a safety component of products regulated under Annex I, and the standalone use cases listed in Annex III of the Act:

Annex III categories relevant to SaaS:

  • Employment and worker management: AI used to recruit, select, promote, evaluate performance, manage employment contracts, or monitor workers
  • Credit and financial services: AI for creditworthiness assessment, credit scoring
  • Education: AI for student assessment, admissions decisions, monitoring during tests
  • Access to essential services: AI for benefits determination, prioritization of emergency services
  • Critical infrastructure: AI used in digital infrastructure (water, energy, transport, cybersecurity)
  • Law enforcement: individual risk assessments and evidence-evaluation tools (primarily government use)

Practical SaaS examples that may qualify as high-risk:

  • Resume screening or candidate ranking tools → Employment HR management (Annex III.4)
  • Automated credit risk scoring APIs → Financial services (Annex III.5b)
  • Student performance analysis for grading → Education (Annex III.3)
  • Fraud detection systems that automatically deny service → Access to services (Annex III.5)

If you are unsure whether your system qualifies as high-risk, Article 6 sets out the classification rules, including a documented self-assessment route under Article 6(3) for Annex III systems that do not pose a significant risk, and the European Commission's AI Office publishes additional classification guidance.
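For an internal first-pass audit, the Annex III mapping above can be encoded as a simple lookup that flags features for legal review. The category keys and descriptions below are our own shorthand for the summaries above, not official classification criteria:

```python
# Illustrative sketch only: actual classification requires legal review.
ANNEX_III_CATEGORIES = {
    "employment": "Annex III.4 - recruitment, promotion, evaluation, worker monitoring",
    "credit": "Annex III.5(b) - creditworthiness assessment and credit scoring",
    "education": "Annex III.3 - student assessment, admissions, test monitoring",
    "essential_services": "Annex III.5 - benefits determination, emergency prioritization",
}

def classify_feature(feature_name: str, use_cases: set[str]) -> dict:
    """Flag a product feature for legal review if it touches any Annex III area."""
    matches = sorted(use_cases & ANNEX_III_CATEGORIES.keys())
    return {
        "feature": feature_name,
        "potentially_high_risk": bool(matches),
        "annex_iii_matches": [ANNEX_III_CATEGORIES[m] for m in matches],
        "action": "escalate to legal review" if matches
                  else "document as limited/minimal risk",
    }
```

A resume-ranking feature tagged with `"employment"` would be flagged for escalation, while a spam filter would fall through to the limited/minimal-risk path.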

Limited Risk (Transparency Obligations)

Most general-purpose AI features fall here. Limited-risk systems require specific transparency measures but no conformity assessment:

  • Chatbots: Users must be informed they are interacting with an AI system
  • Deepfakes: AI-generated content representing real people must be disclosed as synthetic
  • Emotion recognition: Must inform users when their emotions are being inferred
  • AI-generated content: Systems generating synthetic images, audio, or video must disclose this

Minimal Risk

Standard ML-based features like spam filters, recommendation engines, and AI-assisted search without significant decision-making impact fall here. No specific obligations beyond general AI Act principles.

Obligations for High-Risk AI Systems

If your product is classified as high-risk, you face extensive obligations. These apply whether you are the AI system provider (you built it) or the deployer (you use someone else's high-risk AI in your product).

1. Risk Management System

You must establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle:

## Required Risk Management Documentation

### System Description
- Purpose and intended use
- Categories of individuals affected
- Geographic scope of deployment

### Risk Assessment
- Reasonably foreseeable misuse scenarios
- Risks to fundamental rights
- Risks from use by vulnerable groups (children, elderly, people with disabilities)
- Systemic risks at scale

### Risk Mitigation Measures
- Technical controls implemented
- Human oversight mechanisms
- Testing procedures

### Residual Risks
- Accepted residual risks after mitigation
- Justification for acceptance
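One way to keep this documentation reviewable is to capture it in code, so each risk entry lives in version control next to the system it covers. The dataclass shape below is one possible layout, not a form mandated by the Act:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class RiskEntry:
    """One identified risk plus its mitigation, mirroring the template above."""
    description: str             # e.g., a reasonably foreseeable misuse scenario
    affected_groups: list[str]   # e.g., ["job applicants"]
    mitigations: list[str]       # technical controls, oversight mechanisms
    residual_risk: str           # what remains after mitigation
    acceptance_rationale: str    # justification for accepting the residual risk

@dataclass
class RiskManagementRecord:
    system_purpose: str
    intended_use: str
    geographic_scope: list[str]
    risks: list[RiskEntry] = field(default_factory=list)

    def to_document(self) -> dict:
        """Serialize for inclusion in the Article 11 technical documentation."""
        return asdict(self)
```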

2. Training Data Governance

High-risk systems require documented data governance practices:

  • Data sources and collection methodology
  • Data preparation operations (cleaning, augmentation, annotation)
  • Examination for biases and how they were addressed
  • Data relevance, representativeness, and completeness assessment

from datetime import datetime

class TrainingDataGovernance:
    """Track and document training data provenance for EU AI Act compliance."""

    def __init__(self, documentation_store) -> None:
        # Any append-only record store works; a database table is typical.
        self.documentation_store = documentation_store

    def record_dataset(
        self,
        dataset_id: str,
        source: str,
        collection_date: str,
        data_categories: list[str],  # e.g., ["employment history", "performance reviews"]
        geographic_origin: list[str],
        preprocessing_steps: list[str],
        bias_assessment: dict,
        consent_basis: str,
    ) -> None:
        self.documentation_store.record({
            "dataset_id": dataset_id,
            "source": source,
            "collection_date": collection_date,
            "data_categories": data_categories,
            "geographic_origin": geographic_origin,
            "preprocessing_steps": preprocessing_steps,
            "bias_assessment": bias_assessment,
            "consent_basis": consent_basis,
            "documented_at": datetime.utcnow().isoformat(),
        })

3. Technical Documentation (Article 11)

You must maintain technical documentation that allows authorities to assess compliance. This must include:

  • General description: intended purpose, developer identity, version
  • Detailed description of elements and development process
  • Monitoring, functioning, and control information
  • Validation and testing procedures and results
  • Risk management documentation
  • Changes made during the system's lifecycle
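A lightweight way to keep Article 11 documentation audit-ready is a manifest that tracks which required sections exist. The section keys mirror the list above; the manifest format itself is our own convention, not one prescribed by the Act:

```python
# Section keys are shorthand for the Article 11 items listed above.
REQUIRED_SECTIONS = [
    "general_description",
    "development_process",
    "monitoring_and_control",
    "validation_and_testing",
    "risk_management",
    "lifecycle_changes",
]

def documentation_gaps(manifest: dict[str, str]) -> list[str]:
    """Return Article 11 sections missing from the documentation manifest.

    `manifest` maps a section key to the path of the document covering it.
    """
    return [s for s in REQUIRED_SECTIONS if not manifest.get(s)]
```

Running this in CI keeps documentation drift visible alongside code changes.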

4. Logging and Record-Keeping (Article 12)

High-risk AI systems must automatically log:

from datetime import datetime

class HighRiskAIAuditLogger:
    """Article 12-compliant audit logging for EU AI Act high-risk systems."""

    def log_decision(
        self,
        decision_id: str,
        system_id: str,
        input_data_hash: str,   # Hash, not raw data (privacy)
        output: dict,
        confidence_score: float,
        human_reviewed: bool,
        reviewer_id: str | None,
        final_decision: str,
        timestamp: datetime,
    ) -> None:
        """
        Article 12 requires logging to enable post-hoc investigation.
        Retain for at least 6 months after each decision.
        """
        record = {
            "decision_id": decision_id,
            "system_id": system_id,
            "input_data_hash": input_data_hash,
            "ai_output": output,
            "ai_confidence": confidence_score,
            "human_reviewed": human_reviewed,
            "reviewer_id": reviewer_id,
            "final_decision": final_decision,
            "timestamp": timestamp.isoformat(),
        }
        self.append_immutable(record)

    def append_immutable(self, record: dict) -> None:
        """Log records must not be modifiable after the fact."""
        # Use WORM storage (S3 Object Lock, Azure Immutable Storage)
        # or append-only logging to tamper-evident store
        pass
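Where WORM object storage is not available, one tamper-evident alternative is a hash chain: each record commits to the hash of the previous one, so any retroactive edit breaks verification. A minimal sketch (production use would also need durable storage and external anchoring of the head hash):

```python
import hashlib
import json

class HashChainedLog:
    """Append-only log where each entry commits to the previous entry's hash."""

    GENESIS = "0" * 64

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._head = self.GENESIS

    def append(self, record: dict) -> str:
        # Hash the record together with the previous head to form the chain
        payload = json.dumps({"prev": self._head, "record": record}, sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"record": record, "hash": entry_hash})
        self._head = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps({"prev": prev, "record": entry["record"]},
                                 sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```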

5. Human Oversight (Article 14)

High-risk systems must be designed to allow humans to override, stop, or be alerted to issues:

  • Clear indication when the AI's confidence is below a threshold
  • Ability for human reviewers to disregard AI recommendations
  • Documented escalation procedures when AI recommendations are flagged
  • Human review required for consequential decisions

class HumanOversightGate:
    # Example thresholds: the Act does not prescribe numeric confidence values.
    HUMAN_REVIEW_THRESHOLDS = {
        "employment_decision": 0.95,  # Only auto-approve if >95% confidence
        "credit_assessment": 0.90,
        "student_grade": 0.85,
    }

    def evaluate(
        self,
        decision_type: str,
        ai_decision: dict,
        confidence: float,
    ) -> dict:
        threshold = self.HUMAN_REVIEW_THRESHOLDS.get(decision_type, 0.90)

        if confidence >= threshold and ai_decision.get("risk_flags", []) == []:
            return {
                "require_human_review": False,
                "auto_approved": True,
                "confidence": confidence,
            }
        else:
            return {
                "require_human_review": True,
                "auto_approved": False,
                "reason": (
                    f"Confidence {confidence:.1%} below threshold {threshold:.1%}"
                    if confidence < threshold
                    else f"Risk flags present: {ai_decision.get('risk_flags')}"
                ),
            }

6. Accuracy, Robustness, and Cybersecurity (Article 15)

High-risk systems must achieve appropriate levels of accuracy and must be resilient to adversarial inputs, data poisoning, and model-level attacks. This requires:

  • Regular accuracy benchmarking against diverse test sets
  • Adversarial robustness testing
  • Documented performance across different demographic groups (to detect disparate impact)
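A simple starting point for the demographic-performance point above is computing accuracy per group and flagging large gaps. The 0.8 ratio below echoes the "four-fifths" heuristic from US employment practice; it is an illustrative assumption, not a threshold set by the AI Act:

```python
def group_accuracy_report(
    predictions: list[int],
    labels: list[int],
    groups: list[str],
    min_ratio: float = 0.8,  # assumed fairness heuristic, not an AI Act threshold
) -> dict:
    """Accuracy per demographic group, plus flags for groups that lag badly."""
    per_group: dict[str, list[int]] = {}
    for pred, label, group in zip(predictions, labels, groups):
        per_group.setdefault(group, []).append(int(pred == label))
    accuracy = {g: sum(hits) / len(hits) for g, hits in per_group.items()}
    best = max(accuracy.values())
    flagged = [g for g, acc in accuracy.items()
               if best > 0 and acc / best < min_ratio]
    return {"accuracy_by_group": accuracy, "flagged_groups": flagged}
```

Recording this report for each release gives you the documented demographic-performance evidence the Article 15 assessment asks for.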

Transparency Obligations for Limited-Risk Systems

For chatbots and AI-generated content (which covers most SaaS AI features), the requirements are more straightforward but binding.

Chatbot Disclosure (Article 50)

Any system that directly interacts with users through natural language must disclose that it is an AI:

// Required disclosure in your UI
const ChatInterface = () => {
  return (
    <div>
      <div className="ai-disclosure-banner">
        <p>
          This conversation is powered by an AI assistant.
          You are not speaking with a human.
          <a href="/ai-information">Learn more about how we use AI.</a>
        </p>
      </div>
      <ChatWindow />
    </div>
  );
};

The disclosure must be:

  • At the beginning of the interaction, not buried in settings
  • Clear and unambiguous
  • Not undermined by design choices that suggest human interaction

Synthetic Content Labeling (Article 50.4)

AI-generated images, audio, and video must be labeled as AI-generated in a machine-readable format. One lightweight approach is embedding disclosure metadata directly in the file:

from datetime import datetime
import json
import os

from PIL import Image
import piexif

def add_ai_generation_metadata(image_path: str, generation_params: dict) -> str:
    """Embed AI-generation disclosure metadata in the image's EXIF data.

    Note: EXIF tagging is a lightweight approach; full content provenance
    requires the C2PA / Content Credentials toolchain.
    """
    img = Image.open(image_path)

    # Serialize the disclosure as JSON in the EXIF UserComment field
    exif_dict = {"0th": {}, "Exif": {}}
    user_comment = json.dumps({
        "ai_generated": True,
        "generator": "your-product-name",
        "generation_params": generation_params,
        "generation_timestamp": datetime.utcnow().isoformat(),
        "eu_ai_act_disclosure": True,
    })
    exif_dict["Exif"][piexif.ExifIFD.UserComment] = user_comment.encode()

    # Build the output path without mangling other dots in the path
    root, ext = os.path.splitext(image_path)
    output_path = f"{root}_labeled{ext}"
    img.save(output_path, exif=piexif.dump(exif_dict))
    return output_path

For visible AI-generated content in your product UI, add clear visual labeling: "AI-generated" watermarks, disclosure labels, or metadata badges.

GPAI Model Obligations (for AI Model Providers)

If you train and release general-purpose AI models (not just integrate existing ones), Articles 51-55 apply:

  • Maintain and publish technical documentation
  • Provide information to downstream deployers
  • Comply with EU copyright law regarding training data
  • For high-capability GPAI models: adversarial testing, incident reporting to the EU AI Office, cybersecurity measures

This primarily affects companies like OpenAI, Anthropic, Google, and Mistral. SaaS companies using their APIs have downstream deployer obligations, not GPAI provider obligations.

Implementation Timeline

  • February 2, 2025: Ban on prohibited AI systems effective; AI literacy obligations (Article 4) apply
  • August 2, 2025: GPAI model obligations effective
  • August 2, 2026: High-risk system obligations (Annex III) and transparency obligations effective
  • August 2, 2027: Obligations for high-risk AI embedded in regulated products (Annex I) effective

For most SaaS companies, the critical near-term deadline is August 2, 2026 for high-risk systems and transparency obligations.

Practical Compliance Roadmap

Now (2025)

  1. Inventory your AI systems: Catalog every AI feature in your product
  2. Risk classification: Determine whether any systems are high-risk per Annex III
  3. Staff AI literacy: The Act requires ensuring staff have sufficient AI knowledge to operate AI systems (Article 4)
  4. Review chatbot UX: Ensure AI disclosure is clear and prominent
  5. Establish governance: Assign AI compliance ownership
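The inventory in step 1 can start as a simple machine-readable record, one entry per AI feature. The fields below are our suggestion of what the classification exercise needs; none are mandated in this exact form:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (illustrative field set)."""
    name: str
    vendor_or_internal: str           # e.g., "OpenAI API" or "in-house"
    purpose: str
    affected_individuals: list[str]   # e.g., ["job applicants"]
    risk_tier: str                    # "prohibited" | "high" | "limited" | "minimal" | "unclassified"
    makes_consequential_decisions: bool
    owner: str                        # accountable team or person

def needs_urgent_review(record: AISystemRecord) -> bool:
    """Unclassified systems that affect real decisions go to the top of the queue."""
    return record.risk_tier == "unclassified" and record.makes_consequential_decisions
```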

2025-2026

  1. Technical documentation: Draft and maintain documentation for all AI systems
  2. Data governance: Implement training data provenance tracking
  3. Logging infrastructure: Build Article 12-compliant audit logging for high-risk systems
  4. Human oversight: Design and implement review workflows
  5. Conformity assessment: For high-risk systems, complete the conformity assessment process

Penalties

Non-compliance penalties are substantial:

  • Prohibited practices violations: up to €35 million or 7% of global annual turnover, whichever is higher
  • Most other obligations: up to €15 million or 3% of global annual turnover, whichever is higher
  • Supplying incorrect information to authorities: up to €7.5 million or 1% of global annual turnover, whichever is higher

The EU AI Act is not optional for companies with EU users, regardless of where the company is headquartered. The extraterritorial scope mirrors GDPR: if your AI system is used by people in the EU, the Act applies.

Start your compliance assessment now, not when enforcement begins. The technical documentation, data governance, and logging requirements involve significant infrastructure work that cannot be completed at the last minute.

EU AI Act
AI compliance
GDPR
AI regulation
risk classification
AI governance
