Database Access Control: Roles, Least Privilege, and Secrets Management

How to design database access control using service accounts with minimal permissions, DB proxies like RDS Proxy, HashiCorp Vault's database secrets engine, and safe connection string handling.

August 15, 2025 · 8 min read · ShipSafer Team

The most common path to a serious data breach is not a zero-day exploit. It is an over-privileged service account whose credentials were leaked in a Git commit, a misconfigured environment variable, or an unencrypted backup. Proper database access control limits the blast radius: even if credentials are stolen, an attacker with read-only access to the sessions table cannot exfiltrate your entire user database.

This guide covers the full access control stack: role design, service account provisioning, connection string injection, RDS Proxy for connection pooling and IAM authentication, and HashiCorp Vault's database secrets engine for short-lived dynamic credentials.

The Principle of Least Privilege for Databases

Every application component that connects to a database should have exactly the permissions it needs — nothing more. In practice, this means:

  • API services: SELECT, INSERT, UPDATE, DELETE on specific tables
  • Background job workers: SELECT, INSERT, UPDATE on their own tables; no DROP, no TRUNCATE
  • Migration runners: Full DDL (CREATE, ALTER, DROP) on the target database, but only during deploy windows
  • Read replicas / analytics: SELECT only, on a read replica, not the primary
  • Backup agents: CONNECT + SELECT on all tables (dump), no write access
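One way to keep a permission matrix like this reviewable is to express it as data and generate the GRANT statements from it. The sketch below is illustrative only (the `RoleSpec` shape and `renderGrants` helper are not a real tool), but it shows how grants can live in version control and code review rather than in ad-hoc psql sessions:

```typescript
// Illustrative: render least-privilege GRANT statements from a declarative spec.
interface RoleSpec {
  role: string;
  privileges: string[]; // e.g. ['SELECT', 'INSERT']
  tables: string[];
}

function renderGrants(specs: RoleSpec[]): string[] {
  return specs.map(
    (s) =>
      `GRANT ${s.privileges.join(', ')} ON TABLE ${s.tables.join(', ')} TO ${s.role};`,
  );
}

const grants = renderGrants([
  {
    role: 'api_service',
    privileges: ['SELECT', 'INSERT', 'UPDATE', 'DELETE'],
    tables: ['users', 'sessions'],
  },
  { role: 'analytics_reader', privileges: ['SELECT'], tables: ['orders'] },
]);
// grants[1] === 'GRANT SELECT ON TABLE orders TO analytics_reader;'
```

A spec like this can be diffed in pull requests and replayed idempotently during deploys, which makes privilege drift visible.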

This is not theoretical. In PostgreSQL:

-- Create dedicated role for the API service
CREATE ROLE api_service LOGIN PASSWORD 'rotate_me_via_vault';

-- Grant connect access to the database
GRANT CONNECT ON DATABASE myapp TO api_service;

-- Grant usage on the schema
GRANT USAGE ON SCHEMA public TO api_service;

-- Grant only the operations this service needs
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE users, sessions, orders TO api_service;
GRANT SELECT ON TABLE products, categories TO api_service;

-- Grant sequence usage (needed for auto-increment inserts)
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO api_service;

-- Remove any granted or default privileges on sensitive objects
-- (note: REVOKE strips existing grants; PostgreSQL has no explicit deny rule)
REVOKE ALL ON TABLE payments FROM api_service;
REVOKE CREATE ON SCHEMA public FROM api_service;

For MySQL/MariaDB:

-- Create per-service account with minimal grants
CREATE USER 'api_service'@'10.0.%' IDENTIFIED BY 'rotate_me_via_vault';

GRANT SELECT, INSERT, UPDATE, DELETE
  ON myapp.users TO 'api_service'@'10.0.%';

GRANT SELECT, INSERT, UPDATE, DELETE
  ON myapp.sessions TO 'api_service'@'10.0.%';

GRANT SELECT
  ON myapp.products TO 'api_service'@'10.0.%';

-- Deny all access to sensitive tables
-- (by not granting, nothing is accessible)

FLUSH PRIVILEGES;

The 'api_service'@'10.0.%' syntax restricts the account to connections from the 10.0.x.x subnet. A credential leaked from the server cannot be used from an attacker's external machine.

Row-Level Security for Multi-Tenant Databases

If multiple tenants share a single database schema, row-level security (RLS) prevents one tenant's service account from reading another tenant's data even if the SQL query doesn't include a tenant filter.

-- Enable RLS on the users table
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
ALTER TABLE users FORCE ROW LEVEL SECURITY;

-- Policy: users can only see rows where tenant_id matches their session variable
CREATE POLICY tenant_isolation ON users
  USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- Application sets the tenant context at connection time
SET LOCAL app.tenant_id = '550e8400-e29b-41d4-a716-446655440000';
SELECT * FROM users;  -- Only returns rows for that tenant

In application code (Node.js with pg):

import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function getUsersForTenant(tenantId: string) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    // Parameterized set_config avoids SQL injection via tenantId;
    // the third argument (true) scopes the setting to this transaction,
    // equivalent to SET LOCAL.
    await client.query(
      "SELECT set_config('app.tenant_id', $1, true)",
      [tenantId]
    );
    const result = await client.query('SELECT * FROM users');
    await client.query('COMMIT');
    return result.rows;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}

RDS Proxy: Connection Pooling and IAM Authentication

AWS RDS Proxy sits between your application and RDS/Aurora. It provides two critical security benefits beyond connection pooling: IAM-based authentication (no static database passwords in environment variables) and automatic credential rotation without connection disruption.

Setting Up IAM Authentication

Enable IAM authentication on the RDS instance:

aws rds modify-db-instance \
  --db-instance-identifier myapp-prod \
  --enable-iam-database-authentication \
  --apply-immediately

Create a Secrets Manager secret for the database user:

aws secretsmanager create-secret \
  --name rds/myapp/api_service \
  --secret-string '{"username":"api_service","password":"initial_password"}'

Create the RDS Proxy:

aws rds create-db-proxy \
  --db-proxy-name myapp-proxy \
  --engine-family POSTGRESQL \
  --auth '[{
    "AuthScheme": "SECRETS",
    "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789:secret:rds/myapp/api_service",
    "IAMAuth": "REQUIRED"
  }]' \
  --role-arn arn:aws:iam::123456789:role/rds-proxy-role \
  --vpc-subnet-ids subnet-abc123 subnet-def456 \
  --vpc-security-group-ids sg-abc123

IAM policy for the application:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789:dbuser:prx-abc123/api_service"
    }
  ]
}

Application connection using IAM token (no static password):

import { Signer } from '@aws-sdk/rds-signer';
import { Pool } from 'pg';

const signer = new Signer({
  region: 'us-east-1',
  hostname: 'myapp-proxy.proxy-abc123.us-east-1.rds.amazonaws.com',
  port: 5432,
  username: 'api_service',
});

async function createPool(): Promise<Pool> {
  const token = await signer.getAuthToken();
  return new Pool({
    host: 'myapp-proxy.proxy-abc123.us-east-1.rds.amazonaws.com',
    port: 5432,
    database: 'myapp',
    user: 'api_service',
    password: token,
    ssl: { rejectUnauthorized: true },
  });
}

IAM auth tokens are valid for 15 minutes and are checked only when a new connection is established; existing connections are unaffected. Refresh the token before opening new connections:

class ManagedPool {
  private pool: Pool | null = null;
  private tokenExpiry: Date = new Date(0);

  async getPool(): Promise<Pool> {
    if (!this.pool || new Date() >= this.tokenExpiry) {
      if (this.pool) await this.pool.end();
      this.pool = await createPool();
      this.tokenExpiry = new Date(Date.now() + 14 * 60 * 1000); // 14 min
    }
    return this.pool;
  }
}
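The expiry-tracking pattern in `ManagedPool` generalizes to any short-lived secret. A small generic helper (an illustrative sketch, not a library; the `ExpiringValue` name is made up here) makes the pattern reusable and easy to unit-test by injecting the clock:

```typescript
// Caches an async-produced value and rebuilds it after ttlMs.
class ExpiringValue<T> {
  private value: T | null = null;
  private expiresAt = 0;

  constructor(
    private readonly factory: () => Promise<T>,
    private readonly ttlMs: number,
  ) {}

  // `now` is injectable so tests can control time deterministically.
  async get(now: number = Date.now()): Promise<T> {
    if (this.value === null || now >= this.expiresAt) {
      // Note: a real pool wrapper should also drain the old pool here.
      this.value = await this.factory();
      this.expiresAt = now + this.ttlMs;
    }
    return this.value;
  }
}

// Usage with the IAM-token pool (sketch):
// const pools = new ExpiringValue(createPool, 14 * 60 * 1000);
// const pool = await pools.get();
```

Injecting the clock keeps the refresh logic testable without waiting 14 real minutes for an expiry.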

HashiCorp Vault Database Secrets Engine

Vault's database secrets engine generates short-lived, per-request database credentials. Instead of a static api_service password that lives forever, each application instance gets a unique credential that expires in minutes or hours.

Configuring Vault for PostgreSQL

# Enable the database secrets engine
vault secrets enable database

# Configure the PostgreSQL connection
vault write database/config/myapp-postgres \
  plugin_name=postgresql-database-plugin \
  allowed_roles="api_service,read_only" \
  connection_url="postgresql://{{username}}:{{password}}@postgres.internal:5432/myapp?sslmode=require" \
  username="vault_admin" \
  password="vault_admin_password"

# Define the api_service role
vault write database/roles/api_service \
  db_name=myapp-postgres \
  creation_statements="
    CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
    GRANT CONNECT ON DATABASE myapp TO \"{{name}}\";
    GRANT USAGE ON SCHEMA public TO \"{{name}}\";
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE users, sessions, orders TO \"{{name}}\";
  " \
  revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
  default_ttl="1h" \
  max_ttl="24h"

Requesting Credentials at Runtime

# Application requests credentials at startup
vault read database/creds/api_service
# Returns:
# lease_id: database/creds/api_service/abc123
# lease_duration: 1h
# username: v-api-service-abc123
# password: A1B2C3D4...

In Node.js using the Vault SDK:

import vault from 'node-vault';

const client = vault({
  endpoint: 'https://vault.internal:8200',
  token: process.env.VAULT_TOKEN,
});

async function getDatabaseCredentials() {
  const result = await client.read('database/creds/api_service');
  return {
    username: result.data.username as string,
    password: result.data.password as string,
    leaseId: result.lease_id as string,
    leaseDurationSeconds: result.lease_duration as number,
  };
}

async function createDatabasePool(): Promise<Pool> {
  const creds = await getDatabaseCredentials();

  // Schedule renewal shortly before expiry. A production client would
  // reschedule after each successful renewal, up to the role's max_ttl.
  const renewalMs = (creds.leaseDurationSeconds - 60) * 1000;
  setTimeout(() => renewLease(creds.leaseId), renewalMs);

  return new Pool({
    host: 'postgres.internal',
    database: 'myapp',
    user: creds.username,
    password: creds.password,
    ssl: { rejectUnauthorized: true },
  });
}

async function renewLease(leaseId: string): Promise<void> {
  await client.write('sys/leases/renew', {
    lease_id: leaseId,
    increment: 3600,
  });
}
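Renewal only extends a lease up to the role's max_ttl (24h in the config above); after that, renewal fails and the client must request brand-new credentials and rebuild its pool. The decision logic can be isolated into a pure function and tested on its own (a sketch; the function name, parameters, and safety window are assumptions, not a Vault API):

```typescript
type LeaseAction = 'none' | 'renew' | 'refetch';

// Decide what to do with a Vault database lease.
// secondsSinceIssue: age of the credentials.
// leaseTtlRemaining: seconds left on the current lease.
// maxTtlSeconds: the role's max_ttl (86400 for "24h").
function leaseAction(
  secondsSinceIssue: number,
  leaseTtlRemaining: number,
  maxTtlSeconds: number,
  safetyWindowSeconds = 60,
): LeaseAction {
  // Approaching max_ttl: renewal will be capped, so fetch fresh credentials.
  if (secondsSinceIssue >= maxTtlSeconds - safetyWindowSeconds) return 'refetch';
  // Lease about to expire but still renewable: renew it.
  if (leaseTtlRemaining <= safetyWindowSeconds) return 'renew';
  return 'none';
}
```

Keeping this logic pure means the renew-versus-refetch behavior can be verified without a running Vault server.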

Connection String Security

Connection strings often end up in Git repositories, Docker images, CI logs, and crash dumps. Treat them as secrets.

What Not to Do

# Hardcoded in source — leaks on every git clone
DATABASE_URL=postgresql://api_service:MyPassword123@prod-db.internal/myapp

# Printed in logs — ends up in your log aggregator
console.log(`Connecting to: ${process.env.DATABASE_URL}`);

# In docker-compose committed to the repo
environment:
  DATABASE_URL: postgresql://api_service:MyPassword123@db:5432/myapp

Safe Connection String Patterns

Inject credentials at runtime from a secrets manager, never at build time:

import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager';

async function getDatabaseUrl(): Promise<string> {
  const client = new SecretsManagerClient({ region: 'us-east-1' });
  const command = new GetSecretValueCommand({
    SecretId: 'prod/myapp/database',
  });
  const response = await client.send(command);
  const secret = JSON.parse(response.SecretString ?? '{}') as {
    host: string;
    port: number;
    username: string;
    password: string;
    dbname: string;
  };
  return `postgresql://${secret.username}:${encodeURIComponent(secret.password)}@${secret.host}:${secret.port}/${secret.dbname}?sslmode=require`;
}
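If a connection URL must appear in logs for debugging, redact the password first so the log aggregator never sees it. A minimal sketch (regex-based; production code might prefer parsing with the URL class):

```typescript
// Redact the password portion of a "scheme://user:password@host/db" URL.
function redactDatabaseUrl(url: string): string {
  // Matches "//user:password@", keeps the user, masks the password.
  return url.replace(/\/\/([^:/@]+):([^@]+)@/, '//$1:***@');
}

// Safe to log:
// console.log(`Connecting to: ${redactDatabaseUrl(process.env.DATABASE_URL ?? '')}`);
// e.g. "Connecting to: postgresql://api_service:***@prod-db.internal/myapp"
```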

In Kubernetes, mount secrets as environment variables from a Kubernetes Secret (or better, from External Secrets Operator pulling from Vault/AWS Secrets Manager):

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: api
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
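With the password injected as its own variable like this, the application can assemble its pool configuration from discrete parts, so a complete credential-bearing URL never exists as a single loggable string. An illustrative helper (the env var names `DB_HOST`, `DB_PORT`, etc. are assumptions):

```typescript
interface DbConfig {
  host: string;
  port: number;
  database: string;
  user: string;
  password: string;
  ssl: { rejectUnauthorized: boolean };
}

// Build a pg Pool config from individual environment variables,
// failing fast when a required one is missing.
function dbConfigFromEnv(env: Record<string, string | undefined>): DbConfig {
  const required = (name: string): string => {
    const v = env[name];
    if (!v) throw new Error(`Missing required env var: ${name}`);
    return v;
  };
  return {
    host: required('DB_HOST'),
    port: Number(env.DB_PORT ?? '5432'),
    database: required('DB_NAME'),
    user: required('DB_USER'),
    password: required('DB_PASSWORD'),
    ssl: { rejectUnauthorized: true },
  };
}

// const pool = new Pool(dbConfigFromEnv(process.env));
```

Failing fast on a missing variable surfaces misconfiguration at startup instead of as a confusing authentication error at the first query.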

Audit Logging

Enable audit logging at the database level to track which service account executed which queries:

-- PostgreSQL: enable pgaudit
-- (requires the pgaudit extension loaded via shared_preload_libraries)
ALTER SYSTEM SET pgaudit.log = 'write, ddl';
ALTER SYSTEM SET pgaudit.log_relation = 'on';
SELECT pg_reload_conf();

Review access quarterly:

-- PostgreSQL: list all database users and their privileges
SELECT
  r.rolname AS username,
  r.rolsuper,
  r.rolinherit,
  r.rolcreaterole,
  r.rolcreatedb,
  r.rolcanlogin,
  ARRAY(
    SELECT b.rolname FROM pg_auth_members m
    JOIN pg_roles b ON m.roleid = b.oid
    WHERE m.member = r.oid
  ) AS member_of
FROM pg_roles r
WHERE r.rolcanlogin = true
ORDER BY r.rolname;

Flag and remove accounts that have not been used in 90 days. Rotate all credentials at least annually or immediately on any suspected breach.

