Infrastructure as Code Security: Best Practices for Terraform, CloudFormation, and More


Whitespots Team · iac · terraform · cloudformation · devops

Introduction

Infrastructure as Code (IaC) brings automation and consistency to infrastructure management, but it also introduces security risks. Hardcoded secrets, overly permissive policies, and misconfigurations can be replicated across environments instantly. This guide covers security best practices for IaC tools like Terraform, CloudFormation, and Ansible.

Common IaC Security Issues

  1. Hardcoded secrets and credentials
  2. Overly permissive security rules
  3. Unencrypted sensitive data
  4. Missing state file encryption
  5. No code review for infrastructure changes
  6. Lack of security scanning in CI/CD
  7. Publicly accessible resources
  8. Missing backup and versioning
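
The first issue on this list, hardcoded secrets, is often the easiest to catch early. The dedicated scanners covered later in this guide use large signature databases, but the core idea can be shown in a minimal sketch (the two regex patterns and the `.tf` file filter are illustrative assumptions, not a complete rule set):

```python
import re
from pathlib import Path

# Illustrative patterns only -- real scanners (tfsec, Checkov, gitleaks) ship
# far more signatures. "AKIA" is the standard AWS access key ID prefix.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded_password": re.compile(r'password\s*=\s*"[^"]+"'),
}

def scan_file(text: str) -> list[str]:
    """Return the names of secret patterns found in a config file's text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

def scan_tree(root: str) -> dict[str, list[str]]:
    """Scan all .tf files under root; map file path -> findings."""
    findings = {}
    for path in Path(root).rglob("*.tf"):
        hits = scan_file(path.read_text(encoding="utf-8", errors="ignore"))
        if hits:
            findings[str(path)] = hits
    return findings
```

For example, `scan_file('access_key = "AKIAIOSFODNN7EXAMPLE"')` flags `aws_access_key`. A check like this is a pre-commit stopgap at best; it complements, rather than replaces, the dedicated tooling below.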

Terraform Security Best Practices

Vulnerable Terraform Code

hcl
# VULNERABLE Terraform configuration

# Hardcoded credentials
provider "aws" {
  access_key = "AKIAIOSFODNN7EXAMPLE"
  secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
  region     = "us-east-1"
}

# Public S3 bucket
resource "aws_s3_bucket" "data" {
  bucket = "my-public-bucket"
  acl    = "public-read" # Public access!
}

# Overly permissive security group
resource "aws_security_group" "web" {
  ingress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Allow all!
  }
}

# No encryption
resource "aws_db_instance" "main" {
  engine         = "postgres"
  instance_class = "db.t3.micro"
  username       = "admin"
  password       = "password123" # Hardcoded password
  # storage_encrypted = false (default)
}

Secure Terraform Code

hcl
# SECURE Terraform configuration

# Use environment variables or IAM roles
provider "aws" {
  region = var.aws_region
  # Credentials from environment or IAM role
  # Never hardcode credentials
}

# Variables with validation
variable "aws_region" {
  type        = string
  description = "AWS region"
  default     = "us-east-1"

  validation {
    condition     = can(regex("^[a-z]{2}-[a-z]+-\\d{1}$", var.aws_region))
    error_message = "Invalid AWS region format."
  }
}

variable "environment" {
  type        = string
  description = "Environment name"

  validation {
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "Environment must be dev, staging, or production."
  }
}

# Secure S3 bucket
resource "aws_s3_bucket" "data" {
  bucket = "my-secure-bucket-${var.environment}"

  tags = {
    Environment = var.environment
    Managed_By  = "Terraform"
  }
}

resource "aws_s3_bucket_public_access_block" "data" {
  bucket = aws_s3_bucket.data.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "data" {
  bucket = aws_s3_bucket.data.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.s3.arn
    }
    bucket_key_enabled = true
  }
}

# Restrictive security group
resource "aws_security_group" "web" {
  name_prefix = "web-sg-"
  description = "Security group for web servers"
  vpc_id      = aws_vpc.main.id

  # Only allow HTTPS
  ingress {
    description     = "HTTPS from ALB"
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  # Minimal egress
  egress {
    description = "HTTPS to internet"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "web-security-group"
  }

  lifecycle {
    create_before_destroy = true
  }
}

# Encrypted database
resource "random_password" "db_password" {
  length  = 32
  special = true
}

resource "aws_secretsmanager_secret" "db_password" {
  name                    = "${var.environment}/database/master-password"
  recovery_window_in_days = 30
}

resource "aws_secretsmanager_secret_version" "db_password" {
  secret_id     = aws_secretsmanager_secret.db_password.id
  secret_string = random_password.db_password.result
}

resource "aws_db_instance" "main" {
  identifier            = "${var.environment}-postgres"
  engine                = "postgres"
  engine_version        = "15.4"
  instance_class        = "db.t3.micro"
  allocated_storage     = 20
  max_allocated_storage = 100

  storage_encrypted = true
  kms_key_id        = aws_kms_key.rds.arn

  db_name  = "appdb"
  username = "dbadmin"
  password = random_password.db_password.result

  # Security
  publicly_accessible    = false
  vpc_security_group_ids = [aws_security_group.database.id]
  db_subnet_group_name   = aws_db_subnet_group.main.name

  # Backups
  backup_retention_period = 30
  backup_window           = "03:00-04:00"
  maintenance_window      = "mon:04:00-mon:05:00"

  # Monitoring
  enabled_cloudwatch_logs_exports = ["postgresql", "upgrade"]
  monitoring_interval             = 60
  monitoring_role_arn             = aws_iam_role.rds_monitoring.arn

  deletion_protection       = true
  skip_final_snapshot       = false
  final_snapshot_identifier = "${var.environment}-postgres-final-snapshot"

  tags = {
    Environment = var.environment
    Managed_By  = "Terraform"
  }
}

Terraform Remote State Security

hcl
# backend.tf - Secure state configuration
terraform {
  backend "s3" {
    bucket         = "terraform-state-bucket"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    kms_key_id     = "arn:aws:kms:us-east-1:ACCOUNT:key/KEY-ID"
    dynamodb_table = "terraform-state-lock"

    # Versioning is not a backend option; enable it on the bucket itself (below)

    # Optional: Use SSO or role assumption
    # role_arn = "arn:aws:iam::ACCOUNT:role/TerraformRole"
  }

  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# State bucket creation (separate terraform config)
resource "aws_s3_bucket" "terraform_state" {
  bucket = "terraform-state-bucket"

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.terraform_state.arn
    }
  }
}

resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# DynamoDB for state locking
resource "aws_dynamodb_table" "terraform_state_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  server_side_encryption {
    enabled     = true
    kms_key_arn = aws_kms_key.terraform_state.arn
  }

  point_in_time_recovery {
    enabled = true
  }

  tags = {
    Name = "terraform-state-lock"
  }
}

Using Terraform Workspaces Securely

bash
# Use workspaces for environment separation
terraform workspace new production
terraform workspace new staging
terraform workspace new development

# Switch workspace
terraform workspace select production

hcl
# Use workspace name in configs
locals {
  environment = terraform.workspace
}

resource "aws_instance" "app" {
  # Different instance types per environment
  instance_type = local.environment == "production" ? "t3.large" : "t3.micro"

  tags = {
    Environment = local.environment
    Workspace   = terraform.workspace
  }
}

Terraform Security Scanning

Pre-commit Hooks

yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.83.5
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
      - id: terraform_docs
      - id: terraform_tflint
      - id: terraform_tfsec
        args:
          - --args=--minimum-severity=HIGH
      - id: terraform_checkov
        args:
          - --args=--skip-check CKV_AWS_1

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-merge-conflict
      - id: detect-private-key
      - id: trailing-whitespace
      - id: end-of-file-fixer

tfsec Configuration

bash
# Install tfsec
brew install tfsec

# Scan current directory
tfsec .

# Scan with specific severity
tfsec --minimum-severity HIGH .

# Generate report
tfsec --format json --out tfsec-report.json .

# Scan specific directory
tfsec ./terraform/production

# Exclude specific checks
tfsec --exclude aws-s3-enable-bucket-logging .
yaml
# .tfsec.yml (note: tfsec uses its own rule IDs, not Checkov's CKV_* IDs)
severity_overrides:
  aws-s3-enable-versioning: WARNING
exclude:
  - aws-s3-enable-bucket-logging # Handled separately
minimum_severity: MEDIUM

Checkov for IaC Scanning

bash
# Install checkov
pip install checkov

# Scan Terraform
checkov -d ./terraform

# Scan specific file
checkov -f main.tf

# Skip specific checks
checkov -d . --skip-check CKV_AWS_1,CKV_AWS_2

# Output formats
checkov -d . --output json
checkov -d . --output junitxml > results.xml

# Scan CloudFormation
checkov -f template.yaml

# Scan Kubernetes
checkov -f deployment.yaml

CI/CD Integration

yaml
# GitHub Actions - Terraform Security Scanning
name: Terraform Security

on:
  pull_request:
    paths:
      - 'terraform/**'
  push:
    branches:
      - main

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.0

      - name: Terraform Format Check
        run: terraform fmt -check -recursive
        working-directory: ./terraform

      - name: Terraform Init
        run: terraform init -backend=false
        working-directory: ./terraform

      - name: Terraform Validate
        run: terraform validate
        working-directory: ./terraform

      - name: Run tfsec
        uses: aquasecurity/tfsec-action@v1.0.0
        with:
          working_directory: ./terraform
          soft_fail: false

      - name: Run Checkov
        uses: bridgecrewio/checkov-action@master
        with:
          directory: ./terraform
          framework: terraform
          soft_fail: false
          output_format: sarif
          output_file_path: checkov-results.sarif

      - name: Upload Checkov results
        uses: github/codeql-action/upload-sarif@v2
        if: always()
        with:
          sarif_file: checkov-results.sarif

      - name: Terraform Plan
        run: terraform plan -out=tfplan
        working-directory: ./terraform
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Scan Terraform Plan
        run: |
          terraform show -json tfplan > tfplan.json
          checkov -f tfplan.json
        working-directory: ./terraform

CloudFormation Security

Secure CloudFormation Template

yaml
# SECURE CloudFormation template
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Secure web application infrastructure'

Parameters:
  Environment:
    Type: String
    AllowedValues:
      - dev
      - staging
      - production
    Default: dev
    Description: Environment name

  DBPassword:
    Type: String
    NoEcho: true
    Description: Database password (use Secrets Manager in production)
    MinLength: 16

Conditions:
  IsProduction: !Equals [!Ref Environment, production]

Resources:
  # KMS Key
  KMSKey:
    Type: AWS::KMS::Key
    Properties:
      Description: Encryption key for resources
      EnableKeyRotation: true
      KeyPolicy:
        Version: '2012-10-17'
        Statement:
          - Sid: Enable IAM User Permissions
            Effect: Allow
            Principal:
              AWS: !Sub 'arn:aws:iam::${AWS::AccountId}:root'
            Action: 'kms:*'
            Resource: '*'

  # S3 Bucket with security
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub '${Environment}-data-bucket'
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
              KMSMasterKeyID: !GetAtt KMSKey.Arn
      VersioningConfiguration:
        Status: Enabled
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      LoggingConfiguration:
        DestinationBucketName: !Ref LogBucket
        LogFilePrefix: s3-access-logs/
      LifecycleConfiguration:
        Rules:
          - Id: DeleteOldVersions
            Status: Enabled
            NoncurrentVersionExpirationInDays: 90

  # Security Group
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for web servers
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - Description: HTTPS from ALB
          IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          SourceSecurityGroupId: !Ref ALBSecurityGroup
      SecurityGroupEgress:
        - Description: HTTPS to internet
          IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
      Tags:
        - Key: Name
          Value: !Sub '${Environment}-web-sg'

  # RDS with encryption
  DBInstance:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot
    Properties:
      DBInstanceIdentifier: !Sub '${Environment}-postgres'
      Engine: postgres
      EngineVersion: '15.4'
      DBInstanceClass: !If [IsProduction, db.t3.small, db.t3.micro]
      AllocatedStorage: 20
      StorageEncrypted: true
      KmsKeyId: !GetAtt KMSKey.Arn
      MasterUsername: dbadmin
      MasterUserPassword: !Ref DBPassword
      VPCSecurityGroups:
        - !Ref DBSecurityGroup
      DBSubnetGroupName: !Ref DBSubnetGroup
      PubliclyAccessible: false
      BackupRetentionPeriod: !If [IsProduction, 30, 7]
      EnableCloudwatchLogsExports:
        - postgresql
        - upgrade
      DeletionProtection: !If [IsProduction, true, false]

Outputs:
  BucketName:
    Description: S3 bucket name
    Value: !Ref DataBucket
    Export:
      Name: !Sub '${Environment}-DataBucket'

  DBEndpoint:
    Description: Database endpoint
    Value: !GetAtt DBInstance.Endpoint.Address

CloudFormation Security Scanning

bash
# Install cfn-lint
pip install cfn-lint

# Scan template
cfn-lint template.yaml

# Ignore warning-level rules
cfn-lint -i W -t template.yaml

# Checkov for CloudFormation
checkov -f template.yaml --framework cloudformation
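
Organization-specific rules that cfn-lint and Checkov do not ship can be written as small custom checks over the parsed template. A minimal sketch (it assumes the template has already been parsed into a dict; CloudFormation's `!Ref`/`!GetAtt` short tags require a tag-aware loader such as cfn-flip before plain YAML parsing, and the required property list here is an illustrative policy choice):

```python
def audit_s3_buckets(template: dict) -> dict[str, list[str]]:
    """Map S3 bucket logical IDs to the security properties they are missing."""
    # Illustrative policy: every bucket must declare these three properties.
    required = [
        "BucketEncryption",
        "PublicAccessBlockConfiguration",
        "VersioningConfiguration",
    ]
    findings = {}
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue  # only audit S3 buckets
        props = resource.get("Properties", {})
        missing = [key for key in required if key not in props]
        if missing:
            findings[logical_id] = missing
    return findings
```

Running this against the secure template above would return an empty dict for `DataBucket`; a bucket declared with no `Properties` would be reported with all three keys missing.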

Infrastructure Security Checklist

  • ✅ Never commit secrets to version control
  • ✅ Use encrypted remote state with locking
  • ✅ Enable encryption for all data at rest
  • ✅ Use least privilege IAM policies
  • ✅ Implement security scanning in CI/CD
  • ✅ Use specific versions for providers/modules
  • ✅ Enable logging and monitoring
  • ✅ Use security groups with minimal access
  • ✅ Validate inputs and outputs
  • ✅ Use code review for infrastructure changes
  • ✅ Tag all resources for tracking
  • ✅ Implement backups and disaster recovery
  • ✅ Use separate state per environment
  • ✅ Enable deletion protection for critical resources
  • ✅ Regular security audits of IaC
  • ✅ Use modules for reusable secure patterns
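
Several checklist items, such as mandatory tagging, can be enforced as custom policy checks against `terraform show -json` output in CI. A minimal sketch (it reads the documented `planned_values.root_module.resources` layout of plan JSON; the required tag set is an illustrative assumption, and child modules are ignored for brevity):

```python
REQUIRED_TAGS = {"Environment", "Managed_By"}  # illustrative policy

def missing_tags(plan: dict) -> dict[str, set[str]]:
    """Map resource addresses in a Terraform plan JSON to required tags they lack."""
    findings = {}
    resources = (
        plan.get("planned_values", {}).get("root_module", {}).get("resources", [])
    )
    for resource in resources:
        # "tags" may be absent or null for untagged / untaggable resources
        tags = resource.get("values", {}).get("tags") or {}
        absent = REQUIRED_TAGS - set(tags)
        if absent:
            findings[resource["address"]] = absent
    return findings
```

Wired into the pipeline after `terraform show -json tfplan > tfplan.json`, a non-empty result can fail the build before untagged resources ever reach an environment.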

Conclusion

Infrastructure as Code security requires treating infrastructure code with the same rigor as application code. By avoiding hardcoded secrets, implementing security scanning, using encryption, and applying least privilege principles, you build secure and maintainable infrastructure.

IaC security is an ongoing process requiring regular scanning, code reviews, and updates. For comprehensive IaC security assessments and infrastructure reviews, contact the Whitespots team for expert consultation.
