S3 Bucket Security: The Complete Guide to Preventing Data Exposure
Everything you need to know about securing S3 buckets — Block Public Access, bucket policies vs ACLs, pre-signed URLs, versioning, Object Lock, access logging, and lessons from real breaches.
S3 is involved in more cloud data breaches than any other service. Not because it's fundamentally insecure — AWS provides excellent security controls — but because the complexity of bucket policies, ACLs, and public access settings, combined with years of permissive historical defaults around ACLs, creates ample opportunity for misconfiguration. This guide covers every layer of S3 security from the account level down to individual objects.
Understanding the S3 Permission Model
S3 has a layered permission model that confuses even experienced AWS engineers. The effective permission for any request is the combination of:
- Account-level Block Public Access settings (can override everything else)
- Bucket policy (resource-based policy attached to the bucket)
- ACL (legacy access control list on the bucket or object)
- IAM policies (identity-based policies on the requesting principal)
- VPC endpoint policies (if traffic routes through a VPC endpoint)
- Object ownership settings (who controls object ACLs)
The key principle: for requests from the same account that owns the bucket, S3 grants access if either the identity-based (IAM) policy or the resource-based (bucket) policy allows it, as long as no explicit deny applies; cross-account requests must be allowed on both sides. This is why a bucket policy or ACL that grants access to * can make objects public even when no IAM identity has explicit permission.
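The same-account evaluation rule can be sketched as a toy model — this is an illustrative simplification, not AWS's actual policy engine, and the statement shapes are deliberately minimal:

```python
# Toy model of S3's same-account access evaluation (illustrative only):
# an explicit Deny anywhere wins; otherwise access is granted if EITHER
# the identity-based (IAM) policy OR the bucket policy allows the action.

def evaluate_same_account(iam_statements, bucket_statements, action):
    all_statements = iam_statements + bucket_statements
    # Step 1: an explicit deny always wins
    if any(s["Effect"] == "Deny" and action in s["Action"] for s in all_statements):
        return "DENY"
    # Step 2: an allow from either side is sufficient (same-account only)
    if any(s["Effect"] == "Allow" and action in s["Action"] for s in all_statements):
        return "ALLOW"
    # Step 3: implicit deny by default
    return "DENY"

iam = [{"Effect": "Allow", "Action": ["s3:GetObject"]}]
deny = [{"Effect": "Deny", "Action": ["s3:GetObject", "s3:PutObject"]}]

print(evaluate_same_account(iam, [], "s3:GetObject"))    # ALLOW via IAM alone
print(evaluate_same_account([], iam, "s3:GetObject"))    # ALLOW via bucket policy alone
print(evaluate_same_account(iam, deny, "s3:GetObject"))  # DENY: explicit deny wins
```

Note how a bucket-policy allow grants access even when the IAM side says nothing — which is exactly how a public bucket policy exposes objects nobody was explicitly granted.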
Block Public Access: Your First Line of Defense
AWS introduced Block Public Access (BPA) in 2018 after years of high-profile S3 breaches. It operates at two levels — account and bucket — and four settings:
| Setting | Effect |
|---|---|
| BlockPublicAcls | Rejects PUT requests that include a public ACL |
| IgnorePublicAcls | Ignores existing public ACLs during access evaluation |
| BlockPublicPolicy | Rejects bucket policies that would grant public access |
| RestrictPublicBuckets | Restricts access to buckets with public policies to AWS service principals and authorized users within the bucket owner's account |
Enable all four at the account level:
aws s3control put-public-access-block \
--account-id $(aws sts get-caller-identity --query Account --output text) \
--public-access-block-configuration \
BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
Verify each bucket's individual settings are not overriding the account setting:
aws s3api list-buckets --query 'Buckets[].Name' --output text | \
tr '\t' '\n' | \
while read bucket; do
echo -n "$bucket: "
aws s3api get-public-access-block --bucket "$bucket" 2>/dev/null || echo "No bucket-level BPA (inherits account)"
done
Bucket Policies vs. ACLs
Bucket Policies (Recommended)
Bucket policies are JSON IAM-style policies attached to the bucket. They offer fine-grained control and are human-readable:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DenyNonSecureTransport",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
},
{
"Sid": "AllowAppRole",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:role/my-app-role"
},
"Action": ["s3:GetObject", "s3:PutObject"],
"Resource": "arn:aws:s3:::my-bucket/*"
}
]
}
The DenyNonSecureTransport statement above is critical — it ensures all S3 traffic uses HTTPS, preventing man-in-the-middle attacks on S3 API calls.
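As a quick sanity check, a policy document can be scanned for such a statement. A minimal sketch — the helper name and the exact matching logic are illustrative, and a production checker would also handle NotAction, NotPrincipal, and resource scoping:

```python
import json

def enforces_tls(policy: dict) -> bool:
    """Return True if the policy contains a Deny statement for all principals
    keyed on aws:SecureTransport == "false", i.e. it rejects plain-HTTP requests."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Deny":
            continue
        cond = stmt.get("Condition", {}).get("Bool", {})
        if stmt.get("Principal") == "*" and cond.get("aws:SecureTransport") == "false":
            return True
    return False

policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyNonSecureTransport",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
    "Condition": {"Bool": {"aws:SecureTransport": "false"}}
  }]
}""")
print(enforces_tls(policy))                 # True
print(enforces_tls({"Statement": []}))      # False
```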
ACLs (Legacy — Disable Them)
ACLs are a legacy access control mechanism predating IAM. They're harder to audit, limited in expressiveness, and frequently misconfigured. AWS recommends disabling ACLs by setting Object Ownership to "Bucket owner enforced":
aws s3api put-bucket-ownership-controls \
--bucket my-bucket \
--ownership-controls '{"Rules":[{"ObjectOwnership":"BucketOwnerEnforced"}]}'
With BucketOwnerEnforced, all ACLs are disabled and ignored. The bucket owner automatically owns all objects. Use bucket policies for access control instead.
Real Breach Case Studies
The Capital One Breach (2019)
A misconfigured Web Application Firewall allowed an attacker to perform Server-Side Request Forgery (SSRF) against the EC2 Instance Metadata Service. The EC2 instance had an overly permissive IAM role with broad S3 read permissions. The attacker used the instance's credentials to read over 100 million customer records from S3. Key lesson: IMDSv2 (which requires a session token obtained via a PUT request) would have blocked the SSRF exploitation, and IAM least privilege would have contained the blast radius.
The GoDaddy/SolarWinds Pattern
Multiple breaches have involved S3 buckets used as staging areas for build artifacts, logs, or backups that were accidentally made public. These "support" buckets often receive less security scrutiny than production data buckets but contain equally sensitive information (configs, credentials, employee data).
Audit all buckets, not just the ones you know are important:
# Find buckets with public access configurations
aws s3api list-buckets --query 'Buckets[].Name' --output text | \
tr '\t' '\n' | \
while read bucket; do
result=$(aws s3api get-bucket-policy-status --bucket "$bucket" 2>/dev/null)
if echo "$result" | grep -q '"IsPublic": true'; then
echo "PUBLIC: $bucket"
fi
done
Pre-Signed URLs: Secure Temporary Access
Pre-signed URLs provide time-limited access to specific objects without requiring the requester to have AWS credentials. This is the correct pattern for user-facing file downloads:
import boto3
from botocore.config import Config
s3_client = boto3.client(
's3',
config=Config(signature_version='s3v4')
)
# Generate a pre-signed URL valid for 15 minutes
url = s3_client.generate_presigned_url(
'get_object',
Params={
'Bucket': 'my-private-bucket',
'Key': 'user-uploads/document.pdf',
'ResponseContentDisposition': 'attachment; filename="document.pdf"'
},
ExpiresIn=900 # 15 minutes
)
Important considerations for pre-signed URLs:
- Use short expiration times (e.g., 15 minutes for downloads, 5 minutes for uploads)
- The URL is signed with the credentials of the IAM entity that generated it. If those credentials are revoked, the URL stops working. If they are temporary credentials (e.g., from an assumed role), the URL expires when the session expires, even if ExpiresIn is longer.
- Pre-signed POST policies (for uploads) let you enforce conditions such as maximum file size and content type
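On the client side, you can tell when a SigV4 pre-signed URL will expire by reading its X-Amz-Date and X-Amz-Expires query parameters — these names are standard SigV4 query-string auth; the URL below is a hypothetical example with a truncated signature:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

def presigned_url_expiry(url: str) -> datetime:
    """Compute when a SigV4 pre-signed URL expires from its query string."""
    qs = parse_qs(urlparse(url).query)
    signed_at = datetime.strptime(
        qs["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
    ).replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(qs["X-Amz-Expires"][0]))

# Hypothetical pre-signed URL for illustration (signature truncated)
url = ("https://my-private-bucket.s3.amazonaws.com/user-uploads/document.pdf"
       "?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20240101T120000Z"
       "&X-Amz-Expires=900&X-Amz-Signature=abc123")
expiry = presigned_url_expiry(url)
print(expiry.isoformat())  # 2024-01-01T12:15:00+00:00
```

Remember this is a lower bound: if the URL was signed with temporary credentials, it can stop working before this timestamp.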
Versioning and MFA Delete
Versioning protects against accidental or malicious deletion and overwrites. Enable it on all buckets containing critical data:
aws s3api put-bucket-versioning \
--bucket my-important-bucket \
--versioning-configuration Status=Enabled
With versioning enabled, "deletes" create a delete marker rather than actually removing the object. Previous versions remain accessible and can be restored.
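The delete-marker behavior can be modeled with a few lines of code — a toy in-memory model for intuition, not the S3 API:

```python
# Toy model of a versioned bucket: a simple DELETE adds a marker instead
# of removing data, and removing the newest marker "restores" the object.

class VersionedBucket:
    def __init__(self):
        self.versions = {}  # key -> list of versions, newest last

    def put(self, key, body):
        self.versions.setdefault(key, []).append(
            {"body": body, "delete_marker": False})

    def delete(self, key):
        # Without a version ID, DELETE only adds a marker; data is preserved
        self.versions.setdefault(key, []).append(
            {"body": None, "delete_marker": True})

    def get(self, key):
        history = self.versions.get(key, [])
        if not history or history[-1]["delete_marker"]:
            return None  # latest version is a delete marker -> 404
        return history[-1]["body"]

    def restore(self, key):
        # Deleting the newest delete marker brings the object back
        if self.versions.get(key) and self.versions[key][-1]["delete_marker"]:
            self.versions[key].pop()

b = VersionedBucket()
b.put("report.csv", "v1 data")
b.delete("report.csv")
print(b.get("report.csv"))  # None: hidden behind a delete marker
b.restore("report.csv")
print(b.get("report.csv"))  # v1 data
```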
MFA Delete adds a second factor requirement for permanently deleting versions or changing the versioning state:
aws s3api put-bucket-versioning \
--bucket my-important-bucket \
--versioning-configuration 'MFADelete=Enabled,Status=Enabled' \
--mfa "arn:aws:iam::123456789012:mfa/admin-mfa-device 123456"
Note: MFA Delete can only be enabled or disabled by the AWS account root user, only via the CLI or API (not the console), and requires passing the MFA device serial number and current TOTP code together.
S3 Object Lock for WORM Storage
Object Lock enforces WORM (Write Once Read Many) storage — objects cannot be deleted or overwritten for a specified duration. Essential for compliance (FINRA, CFTC, SEC 17a-4) and ransomware protection:
# Enable Object Lock at bucket creation (cannot be added later)
aws s3api create-bucket \
--bucket compliance-records \
--region us-east-1 \
--object-lock-enabled-for-bucket
# Set default retention policy
aws s3api put-object-lock-configuration \
--bucket compliance-records \
--object-lock-configuration '{
"ObjectLockEnabled": "Enabled",
"Rule": {
"DefaultRetention": {
"Mode": "COMPLIANCE",
"Years": 7
}
}
}'
Two retention modes exist:
- GOVERNANCE mode: Can be bypassed by users with the s3:BypassGovernanceRetention permission. Use for development and testing.
- COMPLIANCE mode: Cannot be bypassed by any user, including the root user, until the retention period expires. Use for regulatory compliance.
Legal holds (s3:PutObjectLegalHold) are separate from retention periods and can be applied/removed by users with appropriate permissions to preserve objects during litigation.
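The difference between the two modes can be expressed as a small decision function — a sketch for intuition; real enforcement happens server-side in S3, and governance bypass additionally requires the x-amz-bypass-governance-retention request header, modeled here as a flag:

```python
from datetime import datetime, timezone

def can_delete_version(mode, retain_until, now, caller_permissions,
                       bypass_requested=False):
    """Toy model: may a locked object version be deleted early?

    mode: "GOVERNANCE" or "COMPLIANCE"
    retain_until: retention expiry as a timezone-aware datetime
    """
    if now >= retain_until:
        return True  # retention has expired in either mode
    if mode == "GOVERNANCE":
        # Governance can be bypassed, but only with the right permission
        # AND an explicit bypass request
        return (bypass_requested
                and "s3:BypassGovernanceRetention" in caller_permissions)
    # COMPLIANCE: nobody can delete early, not even root
    return False

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
until = datetime(2031, 6, 1, tzinfo=timezone.utc)
print(can_delete_version("COMPLIANCE", until, now,
                         {"s3:BypassGovernanceRetention"}, True))  # False
print(can_delete_version("GOVERNANCE", until, now,
                         {"s3:BypassGovernanceRetention"}, True))  # True
```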
Server Access Logging
Access logs record every request made to a bucket — requester, source IP, operation, response code, and bytes transferred. Enable them for all buckets, sending logs to a dedicated logging bucket. Note that the target bucket must be in the same region and owned by the same AWS account as the source bucket, and must grant write access to the logging.s3.amazonaws.com service principal:
# Create logging bucket with no public access
aws s3api create-bucket --bucket my-access-logs --region us-east-1
# Enable logging on source bucket
aws s3api put-bucket-logging \
--bucket my-important-bucket \
--bucket-logging-status '{
"LoggingEnabled": {
"TargetBucket": "my-access-logs",
"TargetPrefix": "my-important-bucket/"
}
}'
For higher-fidelity, lower-latency logging of object-level API activity (with full API call details and reliable delivery), also enable CloudTrail data events for S3:
aws cloudtrail put-event-selectors \
--trail-name my-trail \
--event-selectors '[{
"ReadWriteType": "All",
"DataResources": [{
"Type": "AWS::S3::Object",
"Values": ["arn:aws:s3:::my-important-bucket/"]
}]
}]'
Encryption
Server-Side Encryption
Enforce SSE-KMS with customer-managed keys on all sensitive buckets:
aws s3api put-bucket-encryption \
--bucket my-bucket \
--server-side-encryption-configuration '{
"Rules": [{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "aws:kms",
"KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/my-key-id"
},
"BucketKeyEnabled": true
}]
}'
BucketKeyEnabled: true reduces KMS API calls by up to 99% by letting S3 use a short-lived bucket-level key to derive per-object data keys, significantly reducing KMS request costs for high-throughput buckets.
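A rough cost sketch shows why this matters. The $0.03 per 10,000 KMS requests price and the 99% reduction factor are assumptions for illustration only — check current KMS pricing and measure your own bucket's call rate:

```python
def kms_request_cost(requests: int, price_per_10k: float = 0.03) -> float:
    """Approximate KMS API cost for a given number of requests.

    price_per_10k is an assumed illustrative price, not a quoted AWS rate.
    """
    return requests / 10_000 * price_per_10k

monthly_objects = 100_000_000  # 100M PUTs/month to an SSE-KMS bucket

# Without Bucket Keys: roughly one KMS call per object operation
without_bucket_key = kms_request_cost(monthly_objects)
# With Bucket Keys: assume ~99% fewer KMS calls
with_bucket_key = kms_request_cost(int(monthly_objects * 0.01))

print(f"without: ${without_bucket_key:,.2f}  with: ${with_bucket_key:,.2f}")
```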
Deny Uploads Without Encryption
Block PUT requests that don't specify SSE-KMS — because StringNotEquals also matches when the condition key is absent, uploads with no encryption header are denied as well:
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "DenyUnencryptedUploads",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::my-bucket/*",
"Condition": {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "aws:kms"
}
}
}]
}
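A toy evaluation of this condition against incoming PUT headers makes the missing-header case concrete (illustrative only; real evaluation happens in S3):

```python
def put_denied(headers: dict) -> bool:
    """Toy check mirroring the DenyUnencryptedUploads statement:
    deny unless x-amz-server-side-encryption is exactly "aws:kms".
    Per IAM semantics, StringNotEquals matches when the key is missing,
    so a PUT with no encryption header is denied too."""
    return headers.get("x-amz-server-side-encryption") != "aws:kms"

print(put_denied({"x-amz-server-side-encryption": "aws:kms"}))  # False: allowed
print(put_denied({"x-amz-server-side-encryption": "AES256"}))   # True: denied
print(put_denied({}))                                           # True: denied (no header)
```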
S3 Security at Scale
For large organizations with hundreds of buckets, manual auditing doesn't scale. Use AWS Config rules to continuously evaluate bucket configurations:
# Check for bucket public access
aws configservice put-config-rule --config-rule '{
"ConfigRuleName": "s3-bucket-public-access-prohibited",
"Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"}
}'
# Check for server-side encryption
aws configservice put-config-rule --config-rule '{
"ConfigRuleName": "s3-bucket-server-side-encryption-enabled",
"Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"}
}'
Amazon Macie provides automated sensitive data discovery across all buckets, using ML to identify PII, financial data, and credentials. Run Macie as a continuous background job rather than one-time scans.
The layered approach to S3 security — account-level Block Public Access, bucket policies, encryption, versioning, logging, and continuous monitoring — means that a misconfiguration at one layer is caught by another. No single setting or tool is sufficient; security comes from the combination.