Terraform Cloud Security: Remote State, Sentinel Policies, and Audit Logging
A comprehensive guide to securing Terraform Cloud and Terraform Enterprise — encrypted remote state, Sentinel policy-as-code enforcement, variable sets for secrets management, run environment isolation, and audit logging.
Terraform defines your cloud infrastructure, which means a security vulnerability in Terraform — compromised state files, weak secrets management, or absent policy guardrails — translates directly to compromised cloud infrastructure. Securing Terraform is as important as securing the cloud environments it manages.
Remote State Security
Why Local State Is Dangerous
Terraform state files contain sensitive information in plaintext:
// Example: What state files contain
{
"resources": [
{
"type": "aws_db_instance",
"instances": [{
"attributes": {
"username": "admin",
"password": "MyS3cr3tPassword!", // Plaintext!
"endpoint": "prod-db.cluster-xxxx.us-east-1.rds.amazonaws.com"
}
}]
},
{
"type": "aws_secretsmanager_secret_version",
"instances": [{
"attributes": {
"secret_string": "{\"api_key\":\"sk_live_xxxxxxxxxxxx\"}" // Plaintext!
}
}]
}
]
}
State files committed to version control have been a major source of credential leaks. Never use local state for any environment with real credentials.
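Even with a remote backend configured, state and plan files can land on disk transiently (for example during a backend migration), so exclude them from version control as a backstop. A minimal ignore file:

```gitignore
# Local state and backups — contain plaintext secrets
*.tfstate
*.tfstate.*
# Saved plan files can also embed sensitive values
*.tfplan
# Local provider/module cache
.terraform/
```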
Terraform Cloud Remote State
Terraform Cloud encrypts state at rest (AES-256) and in transit, with access controls per workspace. Configure remote state in your Terraform configuration:
# main.tf
terraform {
required_version = ">= 1.6.0"
backend "remote" {
organization = "my-organization"
workspaces {
name = "production-infrastructure"
}
}
}
Or use the native cloud block, preferred on Terraform 1.1 and later:
terraform {
cloud {
organization = "my-organization"
workspaces {
tags = ["production", "aws"]
}
}
}
S3 Backend with Encryption (Self-Hosted Alternative)
If not using Terraform Cloud, use an S3 backend with all security controls enabled:
terraform {
backend "s3" {
bucket = "terraform-state-secure-bucket"
key = "production/infrastructure.tfstate"
region = "us-east-1"
encrypt = true
kms_key_id = "arn:aws:kms:us-east-1:123456789012:key/terraform-state-key"
dynamodb_table = "terraform-state-lock"
# Enforce secure transport
# Configure bucket policy separately to require HTTPS
}
}
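The "require HTTPS" note above can be enforced with a bucket policy that denies any non-TLS request. A sketch, assuming the state bucket resource is named aws_s3_bucket.terraform_state as in the configuration below:

```hcl
resource "aws_s3_bucket_policy" "state_tls_only" {
  bucket = aws_s3_bucket.terraform_state.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyInsecureTransport"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        aws_s3_bucket.terraform_state.arn,
        "${aws_s3_bucket.terraform_state.arn}/*",
      ]
      # Deny any request not made over TLS
      Condition = {
        Bool = { "aws:SecureTransport" = "false" }
      }
    }]
  })
}
```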
The state bucket itself needs versioning, server-side encryption, public access blocking, and (optionally) Object Lock:
resource "aws_s3_bucket" "terraform_state" {
  bucket = "terraform-state-secure-bucket"

  # Must be set at bucket creation for the Object Lock configuration below
  object_lock_enabled = true
}
resource "aws_s3_bucket_versioning" "state" {
bucket = aws_s3_bucket.terraform_state.id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
bucket = aws_s3_bucket.terraform_state.id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "aws:kms"
kms_master_key_id = aws_kms_key.terraform_state.arn
}
bucket_key_enabled = true
}
}
resource "aws_s3_bucket_public_access_block" "state" {
bucket = aws_s3_bucket.terraform_state.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
# Object Lock for state protection
resource "aws_s3_bucket_object_lock_configuration" "state" {
bucket = aws_s3_bucket.terraform_state.id
rule {
default_retention {
mode = "GOVERNANCE"
days = 30
}
}
}
State Access Controls
Terraform Cloud workspaces have granular team permissions:
| Permission Level | What It Allows |
|---|---|
| Read state versions | View state, nothing else |
| Read variables | View variables (sensitive variables masked) |
| Queue plans | Trigger plans (no applies) |
| Apply runs | Trigger and approve applies |
| Lock and unlock workspace | Override stuck locks |
| Admin | All permissions + workspace settings |
Assign permissions based on the principle of least privilege:
- Developers: "Queue plans" — can propose changes but not apply them
- Senior engineers: "Apply runs" for staging, "Queue plans" for production
- Platform team: "Admin" for all workspaces
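These assignments can be codified with the tfe provider instead of clicked through the UI. A sketch; the team and workspace resource names are hypothetical:

```hcl
resource "tfe_team_access" "developers_production" {
  team_id      = tfe_team.developers.id
  workspace_id = tfe_workspace.production.id
  access       = "plan" # queue plans only, no applies
}

resource "tfe_team_access" "senior_engineers_staging" {
  team_id      = tfe_team.senior_engineers.id
  workspace_id = tfe_workspace.staging.id
  access       = "write" # can apply runs
}
```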
Sentinel: Policy as Code
Sentinel is HashiCorp's policy-as-code framework, built into Terraform Cloud and Terraform Enterprise. Sentinel policies evaluate Terraform plans before apply, blocking dangerous infrastructure changes.
Policy Sets Structure
sentinel-policies/
├── sentinel.hcl # Policy set configuration
├── common/
│ ├── tfplan-functions.sentinel
│ └── aws-functions.sentinel
├── security/
│ ├── no-public-s3.sentinel
│ ├── require-encryption.sentinel
│ ├── restrict-instance-types.sentinel
│ └── no-admin-iam-policies.sentinel
└── compliance/
├── require-tags.sentinel
└── restrict-regions.sentinel
Sentinel Policy Examples
Prevent public S3 buckets:
# security/no-public-s3.sentinel
import "tfplan/v2" as tfplan
# Get all S3 bucket public access block resources
s3_public_access_blocks = filter tfplan.resource_changes as _, rc {
rc.type is "aws_s3_bucket_public_access_block" and
(rc.change.actions contains "create" or rc.change.actions contains "update")
}
# Get all S3 buckets
s3_buckets = filter tfplan.resource_changes as _, rc {
rc.type is "aws_s3_bucket" and
(rc.change.actions contains "create" or rc.change.actions contains "update")
}
# Rule: Every S3 bucket must have a matching public access block.
# Note: planned values can be unknown when the bucket name is computed;
# this comparison assumes literal bucket names.
bucket_has_access_block = rule {
  all s3_buckets as _, bucket {
    any s3_public_access_blocks as _, block {
      block.change.after.bucket is bucket.change.after.bucket
    }
  }
}
# Rule: All public access blocks must have all four settings enabled
access_blocks_all_enabled = rule {
all s3_public_access_blocks as _, block {
block.change.after.block_public_acls is true and
block.change.after.block_public_policy is true and
block.change.after.ignore_public_acls is true and
block.change.after.restrict_public_buckets is true
}
}
main = rule {
bucket_has_access_block and access_blocks_all_enabled
}
Require encryption on all storage resources:
# security/require-encryption.sentinel
import "tfplan/v2" as tfplan
# This policy covers EBS volumes and RDS instances; extend the filters
# below for other storage types (RDS clusters, ElastiCache, etc.)
ebs_volumes = filter tfplan.resource_changes as _, rc {
rc.type is "aws_ebs_volume" and
rc.change.actions contains "create"
}
rds_instances = filter tfplan.resource_changes as _, rc {
rc.type is "aws_db_instance" and
rc.change.actions contains "create"
}
# EBS: all new volumes must be encrypted
ebs_encrypted = rule {
all ebs_volumes as _, vol {
vol.change.after.encrypted is true
}
}
# RDS: all new instances must have storage encrypted
rds_encrypted = rule {
all rds_instances as _, db {
db.change.after.storage_encrypted is true
}
}
main = rule {
ebs_encrypted and rds_encrypted
}
Restrict expensive instance types:
# security/restrict-instance-types.sentinel
import "tfplan/v2" as tfplan
import "strings"
# Blocked expensive instance type prefixes (GPU, extreme memory-optimized).
# An allow-list of approved families is the stricter alternative.
blocked_prefixes = ["p3.", "p4.", "p5.", "g4.", "g5.", "x1.", "x2.", "u-"]
ec2_instances = filter tfplan.resource_changes as _, rc {
rc.type is "aws_instance" and
rc.change.actions contains "create"
}
no_expensive_instances = rule {
all ec2_instances as _, instance {
not any blocked_prefixes as prefix {
strings.has_prefix(instance.change.after.instance_type, prefix)
}
}
}
main = rule {
no_expensive_instances
}
Require mandatory tags:
# compliance/require-tags.sentinel
import "tfplan/v2" as tfplan
# Tags required on all taggable resources
required_tags = ["Environment", "Owner", "CostCenter", "DataClassification"]
# Resources that support tags
taggable_resources = filter tfplan.resource_changes as _, rc {
rc.change.actions contains "create" and
rc.type in [
  "aws_instance", "aws_s3_bucket", "aws_db_instance",
  "aws_elasticache_cluster", "aws_lambda_function"
]
}
all_resources_tagged = rule {
all taggable_resources as _, resource {
all required_tags as tag {
resource.change.after.tags[tag] is not null and
resource.change.after.tags[tag] is not ""
}
}
}
main = rule {
all_resources_tagged
}
Policy Set Configuration
# sentinel.hcl
policy "no-public-s3" {
source = "./security/no-public-s3.sentinel"
enforcement_level = "hard-mandatory" # Cannot be overridden
}
policy "require-encryption" {
source = "./security/require-encryption.sentinel"
enforcement_level = "hard-mandatory"
}
policy "restrict-instance-types" {
source = "./security/restrict-instance-types.sentinel"
enforcement_level = "soft-mandatory" # Can be overridden with reason
}
policy "require-tags" {
source = "./compliance/require-tags.sentinel"
enforcement_level = "advisory" # Warning only
}
Three enforcement levels:
- hard-mandatory: Run cannot proceed, no override possible
- soft-mandatory: Run blocked, but workspace admins can override with justification
- advisory: Warning in UI but doesn't block
Attach policy sets to workspaces via the Terraform Cloud UI or API.
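Attachment can itself be managed as code with the tfe provider. A sketch; the VCS repo identifier and OAuth token variable are hypothetical:

```hcl
resource "tfe_policy_set" "security" {
  name          = "security-policies"
  organization  = "my-organization"
  policies_path = "security/"
  workspace_ids = [tfe_workspace.production.id]

  # Sync policies from the sentinel-policies repository
  vcs_repo {
    identifier     = "my-org/sentinel-policies"
    branch         = "main"
    oauth_token_id = var.vcs_oauth_token_id
  }
}
```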
Variable Sets for Secrets Management
The Problem with Workspace Variables
Storing cloud credentials directly in Terraform Cloud workspace variables creates several problems:
- Credentials must be manually rotated across dozens of workspaces
- Different workspaces may have different (potentially stale) credentials
- Auditing credential usage is difficult
Variable Sets: Centralized Credential Management
Variable sets allow sharing variables (including sensitive ones) across multiple workspaces:
# Create a variable set via API
curl --header "Authorization: Bearer $TFC_TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
https://app.terraform.io/api/v2/organizations/my-org/varsets \
--data '{
"data": {
"type": "varsets",
"attributes": {
"name": "AWS Production Credentials",
"description": "AWS credentials for production workspaces",
"global": false
}
}
}'
# Add sensitive variable to the set
curl --header "Authorization: Bearer $TFC_TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
https://app.terraform.io/api/v2/varsets/$VARSET_ID/relationships/vars \
--data '{
"data": {
"type": "vars",
"attributes": {
"key": "AWS_ACCESS_KEY_ID",
"value": "AKIAIOSFODNN7EXAMPLE",
"category": "env",
"sensitive": true
}
}
}'
Sensitive variables are marked with "sensitive": true — they're never exposed in logs, UI, or API responses after creation.
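The same variable set can be managed declaratively with the tfe provider rather than raw API calls. A sketch, assuming the credential arrives via a Terraform variable:

```hcl
resource "tfe_variable_set" "aws_prod_creds" {
  name         = "AWS Production Credentials"
  description  = "AWS credentials for production workspaces"
  organization = "my-organization"
}

resource "tfe_variable" "aws_access_key" {
  key             = "AWS_ACCESS_KEY_ID"
  value           = var.aws_access_key_id
  category        = "env"
  sensitive       = true
  variable_set_id = tfe_variable_set.aws_prod_creds.id
}

# Attach the set to a workspace
resource "tfe_workspace_variable_set" "prod" {
  variable_set_id = tfe_variable_set.aws_prod_creds.id
  workspace_id    = tfe_workspace.production.id
}
```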
Dynamic Credentials (Preferred Over Static Keys)
Terraform Cloud supports dynamic credentials using OIDC federation — no long-lived credentials stored anywhere:
# In Terraform Cloud workspace settings, configure dynamic provider credentials
# Then in your Terraform config:
provider "aws" {
region = "us-east-1"
# No credentials needed — TFC exchanges OIDC token for temporary credentials
}
Configure the trust relationship in AWS:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::123456789012:oidc-provider/app.terraform.io"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"app.terraform.io:aud": "aws.workload.identity"
},
"StringLike": {
"app.terraform.io:sub": "organization:my-org:project:*:workspace:production-*:run_phase:*"
}
}
}]
}
The sub condition scopes trust to specific organizations, projects, and workspace name patterns — preventing other Terraform Cloud organizations from assuming your roles.
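On the AWS side, that trust policy attaches to an IAM role behind an OIDC provider for app.terraform.io. A sketch; the thumbprint variable is a placeholder for the current certificate chain thumbprint:

```hcl
resource "aws_iam_openid_connect_provider" "tfc" {
  url             = "https://app.terraform.io"
  client_id_list  = ["aws.workload.identity"]
  thumbprint_list = [var.tfc_ca_thumbprint] # current TLS cert thumbprint
}

resource "aws_iam_role" "tfc_production" {
  name               = "tfc-production"
  assume_role_policy = file("${path.module}/tfc-trust-policy.json")
}
```

The workspace then needs the environment variables TFC_AWS_PROVIDER_AUTH=true and TFC_AWS_RUN_ROLE_ARN (set to the role's ARN) to activate the credential exchange.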
Audit Logging
Terraform Cloud Audit Logs
Terraform Cloud's Business plan and Terraform Enterprise include audit logging. Logs capture:
- Workspace runs (plan, apply, destroy)
- Variable changes (not values of sensitive variables)
- Team membership changes
- Policy set associations
- API token creation and revocation
Access audit logs via API:
# Requires an organization API token
curl --header "Authorization: Bearer $TFC_ORG_TOKEN" \
  "https://app.terraform.io/api/v2/organization/audit-trail" \
  | jq '.data[] | {timestamp: .timestamp, type: .type, actor: .auth.description}'
Stream audit logs to your SIEM by polling the API on a schedule or using webhook notifications.
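A polling forwarder can be sketched in a few lines of Python. Assumptions: the since query parameter and top-level timestamp field match the audit-trail API's documented event shape, and forward_to_siem is a placeholder for your SIEM client:

```python
import json
import urllib.request

API = "https://app.terraform.io/api/v2/organization/audit-trail"

def newer_events(events, last_seen):
    """Return events strictly newer than last_seen (ISO-8601), oldest first."""
    # ISO-8601 UTC timestamps in a uniform format sort correctly as strings
    return sorted(
        (e for e in events if e.get("timestamp", "") > last_seen),
        key=lambda e: e["timestamp"],
    )

def poll_once(org_token, last_seen):
    """Fetch one page of audit events newer than last_seen."""
    req = urllib.request.Request(
        API + "?since=" + last_seen,
        headers={"Authorization": "Bearer " + org_token},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return newer_events(payload.get("data", []), last_seen)

def forward_to_siem(event):
    # Placeholder: replace with your SIEM client's ingest call
    print(json.dumps(event))
```

Run poll_once on a schedule, persist the newest timestamp between runs, and pass each returned event to the forwarder.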
Monitoring Terraform Runs
For self-hosted Terraform Enterprise, configure a SIEM integration to alert on:
- Runs that apply changes without a plan review
- Applies to production workspaces by non-privileged users
- Workspace variable changes (especially credentials)
- Policy overrides (soft-mandatory policy bypassed)
- Team membership changes to production workspace teams
Checkov: Pre-Commit IaC Scanning
Before code even reaches Terraform Cloud, scan IaC with Checkov:
pip install checkov
# Scan Terraform directory (quote the ID pattern so the shell doesn't expand it)
checkov -d ./infrastructure --framework terraform \
  --check "CKV_AWS_*" \
  --output sarif \
  --output-file checkov-results.sarif
# Fail on high severity (severity-based gating requires an API key
# so Checkov can resolve severity metadata)
checkov -d ./infrastructure \
  --compact \
  --soft-fail-on MEDIUM \
  --hard-fail-on HIGH,CRITICAL
Integrate into a pre-commit hook:
# .pre-commit-config.yaml
repos:
- repo: https://github.com/bridgecrewio/checkov
rev: '2.5.0'
hooks:
- id: checkov
args: ['-d', '.', '--framework', 'terraform', '--hard-fail-on', 'HIGH,CRITICAL']
And in GitHub Actions:
- name: Checkov scan
uses: bridgecrewio/checkov-action@master
with:
directory: infrastructure/
framework: terraform
check: CKV_AWS_2,CKV_AWS_18,CKV_AWS_21 # Specific checks
soft_fail: false
output_format: sarif
output_file_path: checkov.sarif
- name: Upload SARIF results
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: checkov.sarif
Workspace Isolation
Each environment (production, staging, dev) should be a separate Terraform Cloud workspace with separate:
- State files (blast radius containment)
- Credentials (production credentials never in dev workspaces)
- Team permissions (stricter for production)
- Sentinel policy sets (stricter for production)
For large organizations, use Terraform Cloud's Projects feature to organize workspaces and apply team permissions at the project level.
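Projects and their workspaces can themselves be managed with the tfe provider. A sketch with hypothetical names:

```hcl
resource "tfe_project" "platform_production" {
  organization = "my-organization"
  name         = "platform-production"
}

resource "tfe_workspace" "production" {
  name         = "production-infrastructure"
  organization = "my-organization"
  project_id   = tfe_project.platform_production.id
  tag_names    = ["production", "aws"]
}
```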
The security chain is: Checkov (pre-commit) → Sentinel (pre-apply) → Encrypted remote state → Dynamic credentials → Audit logging. Each layer catches a different category of security issue, and together they make Terraform a security asset rather than a liability.