
Cloud Audit Logging: CloudTrail, GCP Audit Logs, and Azure Activity Logs

A complete guide to cloud audit logging — what to log, which events to alert on, how to centralize logs, protect log integrity, and set appropriate retention policies across AWS, GCP, and Azure.

December 15, 2025 · 8 min read · ShipSafer Team

Cloud audit logs are your primary source of truth for answering "what happened?" during a security incident. They record who made API calls, from where, at what time, and with what result. Without them, incident investigation becomes guesswork. With them, you can reconstruct an entire attack chain from initial access through data exfiltration.

This guide covers the three major cloud audit logging systems in depth, with particular attention to configuration mistakes that leave blind spots attackers can exploit.

AWS CloudTrail

What CloudTrail Records

CloudTrail captures three categories of events:

Management Events (control plane)

  • Creating, modifying, or deleting IAM resources
  • Launching or terminating EC2 instances
  • Modifying security groups, VPCs, or route tables
  • CloudTrail configuration changes themselves

Data Events (data plane)

  • S3 object-level API operations (GetObject, PutObject, DeleteObject)
  • DynamoDB item-level operations
  • Lambda function invocations
  • Secrets Manager secret value reads

CloudTrail Insights Events

  • Anomalous API call rates (e.g., 10x normal RunInstances calls)
  • Unusual error rates suggesting brute force or scanning
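Newer CloudTrail record versions carry an eventCategory field ("Management", "Data", or "Insight"), which makes it straightforward to bucket raw records by category. A minimal Python sketch; the sample records are illustrative, not real captures:

```python
from collections import Counter

def categorize(records):
    """Count CloudTrail records per event category.

    Falls back to the managementEvent boolean for older record
    versions that predate the eventCategory field.
    """
    counts = Counter()
    for rec in records:
        category = rec.get("eventCategory")
        if category is None:
            category = "Management" if rec.get("managementEvent") else "Data"
        counts[category] += 1
    return dict(counts)

# Illustrative records, not real captures.
sample = [
    {"eventName": "RunInstances", "eventCategory": "Management"},
    {"eventName": "GetObject", "eventCategory": "Data"},
    {"eventName": "GetObject", "eventCategory": "Data"},
]
print(categorize(sample))  # {'Management': 1, 'Data': 2}
```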

Common CloudTrail Configuration Mistakes

Mistake 1: Single-region trail

A single-region trail misses activity in all other regions. Attackers frequently pivot to regions with low monitoring to avoid detection:

# Wrong: single-region trail
aws cloudtrail create-trail \
  --name my-trail \
  --s3-bucket-name cloudtrail-logs

# Right: multi-region trail
aws cloudtrail create-trail \
  --name my-trail \
  --s3-bucket-name cloudtrail-logs \
  --is-multi-region-trail \
  --include-global-service-events \
  --enable-log-file-validation

Mistake 2: Not enabling data events

Management events alone miss S3 object reads and Lambda invocations — two critical data exfiltration vectors:

aws cloudtrail put-event-selectors \
  --trail-name my-trail \
  --event-selectors '[
    {
      "ReadWriteType": "All",
      "IncludeManagementEvents": true,
      "DataResources": [
        {
          "Type": "AWS::S3::Object",
          "Values": ["arn:aws:s3:::"]
        },
        {
          "Type": "AWS::Lambda::Function",
          "Values": ["arn:aws:lambda"]
        },
        {
          "Type": "AWS::DynamoDB::Table",
          "Values": ["arn:aws:dynamodb"]
        }
      ]
    }
  ]'

Mistake 3: No log file validation

Without log file validation, tampering with CloudTrail logs is undetectable. Enable it:

aws cloudtrail update-trail \
  --name my-trail \
  --enable-log-file-validation

To validate logs after the fact:

aws cloudtrail validate-logs \
  --trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/my-trail \
  --start-time 2025-01-01T00:00:00Z \
  --end-time 2025-01-31T23:59:59Z
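Under the hood, validation compares each delivered log file's SHA-256 hash against the entry recorded in the hourly digest file (digest files are themselves chained and RSA-signed, which this sketch skips). A minimal integrity check in Python, with an illustrative digest entry:

```python
import hashlib

def verify_log_hash(log_bytes, digest_entry):
    """Compare a log file's SHA-256 hash to the hash recorded
    in the CloudTrail digest entry for that log file."""
    actual = hashlib.sha256(log_bytes).hexdigest()
    return actual == digest_entry["hashValue"]

# Illustrative digest entry; real digest files also carry S3 paths,
# a signature, and a pointer to the previous digest in the chain.
log_bytes = b'{"Records": []}'
entry = {"hashValue": hashlib.sha256(log_bytes).hexdigest()}
print(verify_log_hash(log_bytes, entry))  # True
```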

Critical CloudTrail Events to Alert On

Not every CloudTrail event warrants an alert. Focus on high-signal events that indicate either misconfiguration or active attack:

Identity and Access

  • ConsoleLogin (with mfaUsed=No or from unusual IP)
  • CreateUser
  • CreateAccessKey
  • AttachUserPolicy / AttachRolePolicy (wildcards in policy)
  • UpdateAssumeRolePolicy
  • PutUserPolicy / PutRolePolicy
  • CreatePolicyVersion (SetAsDefault=true)
  • DeleteTrail / StopLogging / UpdateTrail

Network and Compute

  • AuthorizeSecurityGroupIngress (port 22/3389 from 0.0.0.0/0)
  • CreateVpcPeeringConnection (cross-account)
  • ModifyInstanceAttribute (userData changes to running instances)
  • RunInstances (large/GPU instances, unusual regions)

Data Access

  • GetSecretValue (from unusual principals or IPs)
  • DeleteBucket / DeleteObject (bulk operations)
  • PutBucketPolicy (granting public access)
  • GetObject on sensitive buckets from unusual principals

CloudWatch Logs Insights query for high-risk events:

fields @timestamp, userIdentity.arn, eventName, awsRegion, sourceIPAddress, errorCode
| filter eventName in [
    "DeleteTrail", "StopLogging", "PutBucketPublicAccessBlock",
    "CreateUser", "AttachUserPolicy", "CreateAccessKey",
    "ConsoleLogin", "GetSecretValue"
  ]
| filter errorCode != "AccessDenied"
| sort @timestamp desc
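For trails that aren't wired into CloudWatch Logs, the same triage can run offline against downloaded log files. A hedged Python sketch, assuming the records have already been parsed out of the gzipped JSON files CloudTrail writes to S3 (the sample records are illustrative):

```python
HIGH_RISK = {
    "DeleteTrail", "StopLogging", "PutBucketPublicAccessBlock",
    "CreateUser", "AttachUserPolicy", "CreateAccessKey",
    "ConsoleLogin", "GetSecretValue",
}

def triage(records):
    """Return high-risk events that were not denied, newest first."""
    hits = [
        r for r in records
        if r.get("eventName") in HIGH_RISK
        and r.get("errorCode") != "AccessDenied"
    ]
    return sorted(hits, key=lambda r: r.get("eventTime", ""), reverse=True)

# Illustrative records, not real captures.
records = [
    {"eventName": "StopLogging", "eventTime": "2025-01-02T00:00:00Z"},
    {"eventName": "DescribeInstances", "eventTime": "2025-01-02T01:00:00Z"},
    {"eventName": "CreateUser", "eventTime": "2025-01-01T00:00:00Z",
     "errorCode": "AccessDenied"},
]
for hit in triage(records):
    print(hit["eventName"])  # StopLogging
```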

Protecting CloudTrail Logs from Tampering

If an attacker can delete your CloudTrail logs, they can erase evidence of their activity. Protect logs with:

1. S3 Object Lock (WORM storage). GOVERNANCE mode, shown here, can still be bypassed by principals holding s3:BypassGovernanceRetention; switch to COMPLIANCE mode when logs must be immutable for the entire retention period:

aws s3api put-object-lock-configuration \
  --bucket cloudtrail-logs-secure \
  --object-lock-configuration '{
    "ObjectLockEnabled": "Enabled",
    "Rule": {
      "DefaultRetention": {
        "Mode": "GOVERNANCE",
        "Days": 365
      }
    }
  }'

2. Separate log archive account: Store CloudTrail logs in a dedicated security account. Workload account operators cannot delete logs they cannot access.

3. SCP to prevent disabling CloudTrail:

{
  "Effect": "Deny",
  "Action": [
    "cloudtrail:DeleteTrail",
    "cloudtrail:StopLogging",
    "cloudtrail:UpdateTrail",
    "cloudtrail:PutEventSelectors"
  ],
  "Resource": "*",
  "Condition": {
    "ArnNotLike": {
      "aws:PrincipalARN": "arn:aws:iam::*:role/SecurityAdminRole"
    }
  }
}
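The ArnNotLike exception relies on IAM's ARN wildcard matching, in which a * matches only within a colon-delimited segment. A rough Python approximation of that matching logic (a sketch, not AWS's actual evaluator):

```python
from fnmatch import fnmatchcase

def arn_like(arn, pattern):
    """Approximate IAM ArnLike matching: each of the six
    colon-delimited ARN components is matched separately, so a *
    wildcard cannot span a ':' boundary."""
    arn_parts = arn.split(":", 5)
    pat_parts = pattern.split(":", 5)
    if len(arn_parts) != len(pat_parts):
        return False
    return all(fnmatchcase(a, p) for a, p in zip(arn_parts, pat_parts))

role = "arn:aws:iam::123456789012:role/SecurityAdminRole"
pattern = "arn:aws:iam::*:role/SecurityAdminRole"
print(arn_like(role, pattern))  # True  -> exempt from the Deny
print(arn_like("arn:aws:iam::123456789012:role/Dev", pattern))  # False -> denied
```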

GCP Cloud Audit Logs

Three Types of Audit Logs

GCP's audit logging is split into three types with different characteristics:

Admin Activity Audit Logs

  • Always enabled, cannot be disabled
  • Records resource configuration changes (creating VMs, modifying IAM, etc.)
  • No charge
  • Retained 400 days

Data Access Audit Logs

  • Must be explicitly enabled
  • Records reads and writes to user data (reading objects from Cloud Storage, querying BigQuery)
  • Charges apply (approximately $0.50/GiB beyond free tier)
  • Retained 30 days by default

System Event Audit Logs

  • GCP-initiated changes (live migration, auto-scaling)
  • Always enabled, no charge
  • Retained 400 days

Enabling Data Access Audit Logs

The most important and most commonly missed configuration:

# Enable data access logs for all services.
# Caution: set-iam-policy replaces the entire policy, so fetch the
# current policy with get-iam-policy first and preserve its bindings.
gcloud projects set-iam-policy my-project - <<EOF
{
  "auditConfigs": [
    {
      "service": "allServices",
      "auditLogConfigs": [
        {"logType": "ADMIN_READ"},
        {"logType": "DATA_READ"},
        {"logType": "DATA_WRITE"}
      ]
    }
  ],
  "bindings": [...existing bindings...]
}
EOF
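Because set-iam-policy replaces the whole policy document, the safe pattern is read-modify-write: fetch the current policy, merge in the auditConfigs, and write it back. A Python sketch of the merge step alone (pure dict manipulation; the gcloud round-trip is omitted, and all names are illustrative):

```python
def merge_audit_configs(policy, audit_configs):
    """Return a copy of an IAM policy dict with auditConfigs replaced,
    leaving bindings and etag untouched so the write-back does not
    clobber existing access grants."""
    merged = dict(policy)
    merged["auditConfigs"] = audit_configs
    return merged

# Illustrative current policy, as returned by get-iam-policy.
current = {
    "bindings": [{"role": "roles/viewer", "members": ["user:a@example.com"]}],
    "etag": "example-etag",
}
audit = [{
    "service": "allServices",
    "auditLogConfigs": [
        {"logType": "ADMIN_READ"},
        {"logType": "DATA_READ"},
        {"logType": "DATA_WRITE"},
    ],
}]
updated = merge_audit_configs(current, audit)
print(sorted(updated))  # ['auditConfigs', 'bindings', 'etag']
```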

For cost management, enable data access logs selectively for high-sensitivity services:

{
  "auditConfigs": [
    {
      "service": "storage.googleapis.com",
      "auditLogConfigs": [{"logType": "DATA_READ"}, {"logType": "DATA_WRITE"}]
    },
    {
      "service": "secretmanager.googleapis.com",
      "auditLogConfigs": [{"logType": "DATA_READ"}, {"logType": "DATA_WRITE"}]
    },
    {
      "service": "bigquery.googleapis.com",
      "auditLogConfigs": [{"logType": "DATA_READ"}, {"logType": "DATA_WRITE"}]
    }
  ]
}

Log Sinks for Centralization and Retention

Route logs to BigQuery for long-term retention and analysis, or to Pub/Sub for real-time SIEM integration:

# Create BigQuery dataset for audit logs
bq mk --dataset --location=US audit_logs

# Create log sink to BigQuery
gcloud logging sinks create audit-to-bigquery \
  bigquery.googleapis.com/projects/my-project/datasets/audit_logs \
  --log-filter='logName=~"cloudaudit.googleapis.com"' \
  --use-partitioned-tables

# Create organization-level sink to aggregate all project logs
gcloud logging sinks create org-audit-aggregator \
  bigquery.googleapis.com/projects/security-project/datasets/org_audit_logs \
  --organization=123456789 \
  --include-children \
  --log-filter='logName=~"cloudaudit.googleapis.com"'

Critical GCP Log Events

# IAM changes
protoPayload.methodName: "SetIamPolicy"
protoPayload.methodName: "google.iam.admin.v1.CreateServiceAccountKey"

# Org policy changes
protoPayload.methodName: "SetOrgPolicy"
protoPayload.methodName: "DeleteOrgPolicy"

# Compute modifications
protoPayload.methodName: "v1.compute.instances.insert"
protoPayload.methodName: "beta.compute.firewalls.insert"

# Sensitive data access
protoPayload.methodName: "google.cloud.secretmanager.v1.SecretManagerService.AccessSecretVersion"
protoPayload.methodName: "storage.objects.get"
protoPayload.resourceName: "projects/*/buckets/sensitive-bucket/*"

Azure Activity Logs and Diagnostic Settings

Azure Activity Log vs. Resource Logs

Activity Log (control plane)

  • Subscription-level operations: creating resources, modifying IAM assignments, policy changes
  • Enabled by default, retained 90 days
  • Available in Azure Monitor without configuration

Resource Diagnostic Logs (data plane and detailed control plane)

  • Per-resource logs: Key Vault access, NSG flow logs, Application Gateway WAF events
  • Must be explicitly enabled via Diagnostic Settings
  • Cost varies by log volume

Microsoft Entra ID Logs

  • Sign-in events, audit events (user creation, role assignment, MFA changes)
  • Requires Entra ID P1/P2 for full retention and Log Analytics integration

Configuring Diagnostic Settings at Scale

Use Azure Policy to enforce Diagnostic Settings across all resources:

# Deploy Diagnostic Settings policy initiative
az policy assignment create \
  --name "enforce-diagnostic-settings" \
  --display-name "Enforce Diagnostic Settings - All Resources" \
  --policy-set-definition "/providers/Microsoft.Authorization/policySetDefinitions/0884adba-2312-4468-abeb-5422caed1038" \
  --scope "/subscriptions/my-subscription-id" \
  --params '{"logAnalyticsWorkspace": {"value": "/subscriptions/.../workspaces/security-workspace"}}'

Configure Key Vault diagnostic settings individually; these are the highest-priority resource logs because they record every secret, key, and certificate operation:

az monitor diagnostic-settings create \
  --name keyvault-full-logs \
  --resource /subscriptions/.../vaults/production-keyvault \
  --workspace /subscriptions/.../workspaces/security-workspace \
  --logs '[
    {"category": "AuditEvent", "enabled": true, "retentionPolicy": {"enabled": true, "days": 365}},
    {"category": "AzurePolicyEvaluationDetails", "enabled": true}
  ]' \
  --metrics '[
    {"category": "AllMetrics", "enabled": true}
  ]'

Critical Azure Log Queries (KQL)

Detect privilege escalation:

AuditLogs
| where OperationName contains "Add member to role"
| where TargetResources[0].modifiedProperties
    has "Global Administrator" or
    TargetResources[0].modifiedProperties has "Privileged Role Administrator"
| project TimeGenerated, InitiatedBy, TargetResources, Result
| order by TimeGenerated desc

Detect Key Vault access from unusual locations:

AzureDiagnostics
| where ResourceType == "VAULTS"
| where OperationName == "SecretGet"
| where ResultType == "Success"
| summarize count() by CallerIPAddress, identity_claim_oid_g
| where count_ > 10

Detect bulk resource deletion:

AzureActivity
| where OperationNameValue endswith "delete"
| where ActivityStatusValue == "Succeeded"
| summarize count() by Caller, bin(TimeGenerated, 1h)
| where count_ > 20

Log Centralization Architecture

For multi-cloud environments, centralizing logs in a dedicated SIEM provides a unified view:

AWS CloudTrail ──────────────────────────┐
GCP Audit Logs (Pub/Sub export) ─────────┤──→ SIEM / Log Platform
Azure Activity Logs (Event Hub export) ──┤    (Splunk, Elastic,
Application Logs (fluentd/fluent-bit) ───┘     Datadog, Chronicle)

Key requirements for the log pipeline:

  • Immutability: Logs should be forwarded to the SIEM but also retained in immutable storage (S3 Object Lock, GCS with retention policies, Azure Immutable Blob Storage)
  • Integrity: CloudTrail log file validation, GCP log entry hash chains, Azure Log Analytics tamper protection
  • Latency: Critical security events should arrive in the SIEM within 5 minutes
  • Completeness: Monitor for gaps in log delivery using log source health metrics
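The completeness requirement reduces to a heartbeat check: track the newest event timestamp per log source and flag any source that has been silent longer than a threshold. A minimal Python sketch with illustrative source names:

```python
from datetime import datetime, timedelta, timezone

def silent_sources(last_seen, now, max_gap=timedelta(minutes=15)):
    """Return log sources whose newest event is older than max_gap,
    i.e. likely delivery gaps rather than genuinely quiet sources."""
    return sorted(src for src, ts in last_seen.items() if now - ts > max_gap)

# Illustrative per-source high-water marks.
now = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
last_seen = {
    "aws-cloudtrail": now - timedelta(minutes=3),
    "gcp-audit": now - timedelta(hours=2),      # stalled export
    "azure-activity": now - timedelta(minutes=9),
}
print(silent_sources(last_seen, now))  # ['gcp-audit']
```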

Retention Policies

Retention requirements vary by compliance framework:

| Framework | Minimum Retention | Recommended |
|---|---|---|
| SOC 2 | No specific requirement | 1 year |
| PCI DSS | 12 months (3 months immediately available) | 13 months |
| HIPAA | 6 years | 7 years |
| GDPR | Defined by purpose | As short as possible |
| FedRAMP | 3 years | 3+ years |
| SEC Rule 17a-4 | 6 years | 7 years |
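When several frameworks apply at once, the effective floor is the strictest of their minimums. A small Python sketch using the table's figures (months approximated as 31 days to stay conservative; GDPR is purpose-defined and omitted):

```python
# Minimum retention in days, per the table above.
MIN_RETENTION_DAYS = {
    "SOC 2": 0,
    "PCI DSS": 12 * 31,
    "HIPAA": 6 * 365,
    "FedRAMP": 3 * 365,
    "SEC Rule 17a-4": 6 * 365,
}

def required_retention(frameworks):
    """Effective retention floor in days: the strictest minimum
    among all frameworks the organization is subject to."""
    return max(MIN_RETENTION_DAYS[f] for f in frameworks)

print(required_retention(["SOC 2", "PCI DSS"]))  # 372
print(required_retention(["PCI DSS", "HIPAA"]))  # 2190
```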

Configure tiered retention: recent logs (0-90 days) in hot storage for fast querying, older logs in cold/archive storage:

# AWS: Move CloudTrail logs to Glacier after 90 days
aws s3api put-bucket-lifecycle-configuration \
  --bucket cloudtrail-logs \
  --lifecycle-configuration '{
    "Rules": [{
      "Status": "Enabled",
      "Transitions": [
        {"Days": 90, "StorageClass": "STANDARD_IA"},
        {"Days": 365, "StorageClass": "GLACIER"}
      ],
      "Expiration": {"Days": 2557}
    }]
  }'

Audit logging gaps are one of the most commonly cited issues in security assessments and compliance audits. The configuration is straightforward, but it requires intentional setup — logs don't flow automatically. Treat logging configuration as infrastructure code, version control it, and validate completeness quarterly.

audit logging
CloudTrail
GCP audit logs
Azure Activity Logs
SIEM
compliance
incident response
