Insecure Access Control

Why is this important?

Access Control is one of the most fundamental security requirements. Any problem with managing access control can allow attackers to bypass business logic and access data from other users. In the context of Terraform, this can usually be remediated by making changes to the configuration representing the desired state of your infrastructure.


BigQuery Dataset Is Public

BigQuery dataset is anonymously or publicly accessible.

Option A: Replace Access Special Group AllAuthenticatedUsers with a more restrictive option

access.special_group should not have "allAuthenticatedUsers" assigned.

Detailed Instructions

  1. Locate the following vulnerable pattern:

    resource "google_bigquery_dataset" "vulnerable" {
      dataset_id                  = "example_dataset"
      friendly_name               = "test"
      description                 = "This is a test description"
      location                    = "EU"
      default_table_expiration_ms = 3600000

      labels = {
        env = "default"
      }

      access {
        role          = "OWNER"
        special_group = "allAuthenticatedUsers"
      }
    }
  2. Modify the config to something like the following:

    resource "google_bigquery_dataset" "not_vulnerable" {
      dataset_id                  = "example_dataset"
      friendly_name               = "test"
      description                 = "This is a test description"
      location                    = "EU"
      default_table_expiration_ms = 3600000

      labels = {
        env = "default"
      }

      access {
        role          = "OWNER"
        user_by_email = google_service_account.bqowner.email
      }
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴
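The fixed configuration references google_service_account.bqowner, which must exist elsewhere in your configuration; a minimal sketch of that service account (the account_id value is illustrative):

    resource "google_service_account" "bqowner" {
      account_id = "bqowner"
    }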


Capabilities Not Dropped for Containers

Checks that Kubernetes containers drop Linux capabilities in their security context.

Option A: Security Context Capabilities Drop must be set

security_context.capabilities.drop must have a value set, overriding the default of not dropping any capabilities.

Detailed Instructions

  1. Locate one of the following vulnerable patterns:

    Adding but not dropping capabilities:

    resource "kubernetes_pod" "test1" {
      metadata {
        name = "terraform-example"
      }

      spec {
        container = [
          {
            image = "nginx:1.7.9"
            name  = "example"

            security_context = {
              capabilities = {
                add = ["NET_BIND_SERVICE"]
              }
            }

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          },
          {
            image = "nginx:1.7.9"
            name  = "example2"

            security_context = {
              capabilities = {
                drop = ["ALL"]
              }
            }

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          }
        ]

        dns_config {
          nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
          searches    = ["example.com"]

          option {
            name  = "ndots"
            value = 1
          }

          option {
            name = "use-vc"
          }
        }

        dns_policy = "None"
      }
    }

    No security_context.capabilities.drop:

    resource "kubernetes_pod" "test2" {
      metadata {
        name = "terraform-example"
      }

      spec {
        container = [
          {
            image = "nginx:1.7.9"
            name  = "example"

            security_context = {
              allow_privilege_escalation = false
            }

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          },
          {
            image = "nginx:1.7.9"
            name  = "example2"

            security_context = {
              capabilities = {
                drop = ["ALL"]
              }
            }

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          }
        ]

        dns_config {
          nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
          searches    = ["example.com"]

          option {
            name  = "ndots"
            value = 1
          }

          option {
            name = "use-vc"
          }
        }

        dns_policy = "None"
      }
    }

    No security_context at all:

    resource "kubernetes_pod" "test3" {
      metadata {
        name = "terraform-example"
      }

      spec {
        container = [
          {
            image = "nginx:1.7.9"
            name  = "example"

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          },
          {
            image = "nginx:1.7.9"
            name  = "example2"

            security_context = {
              capabilities = {
                drop = ["ALL"]
              }
            }

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          }
        ]

        dns_config {
          nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
          searches    = ["example.com"]

          option {
            name  = "ndots"
            value = 1
          }

          option {
            name = "use-vc"
          }
        }

        dns_policy = "None"
      }
    }

  2. Modify the config to something like the following, thus dropping capabilities:

    resource "kubernetes_pod" "negative4" {
      metadata {
        name = "terraform-example"
      }

      spec {
        container = [
          {
            image = "nginx:1.7.9"
            name  = "example"

            security_context = {
              capabilities = {
                drop = ["ALL"]
              }
            }

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          },
          {
            image = "nginx:1.7.9"
            name  = "example2"

            security_context = {
              capabilities = {
                drop = ["ALL"]
              }
            }

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          }
        ]

        dns_config {
          nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
          searches    = ["example.com"]

          option {
            name  = "ndots"
            value = 1
          }

          option {
            name = "use-vc"
          }
        }

        dns_policy = "None"
      }
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴
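If a container genuinely needs a specific capability, the usual pattern is to drop everything and add back only what is required; a minimal sketch of such a security context (NET_BIND_SERVICE is an illustrative choice):

    security_context = {
      capabilities = {
        drop = ["ALL"]
        add  = ["NET_BIND_SERVICE"]
      }
    }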


Cloud Storage Bucket Is Publicly Accessible

Cloud Storage Bucket is anonymously or publicly accessible.

Option A: Remove All members

None of the member/members should have a value containing "allUsers" or "allAuthenticatedUsers".

  1. Locate the following vulnerable pattern:

    resource "google_storage_bucket_iam_member" "positive1" {
      bucket = google_storage_bucket.default.name
      role   = "roles/storage.admin"
      member = "allUsers"
    }

    resource "google_storage_bucket_iam_member" "positive2" {
      bucket  = google_storage_bucket.default.name
      role    = "roles/storage.admin"
      members = ["user:[email protected]", "allAuthenticatedUsers"]
    }
  2. Modify the config to something like the following:

    resource "google_storage_bucket_iam_member" "negative1" {
      bucket = google_storage_bucket.default.name
      role   = "roles/storage.admin"
      member = "user:[email protected]"
    }

    resource "google_storage_bucket_iam_member" "negative2" {
      bucket  = google_storage_bucket.default.name
      role    = "roles/storage.admin"
      members = ["user:[email protected]", "user:[email protected]"]
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴
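As an additional safeguard, the bucket itself can be configured to reject public IAM grants via public_access_prevention; a minimal sketch (the name and location values are illustrative):

    resource "google_storage_bucket" "default" {
      name                     = "example-bucket"
      location                 = "EU"
      public_access_prevention = "enforced"
    }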


Containers With Sys Admin Capabilities

Containers should not have the CAP_SYS_ADMIN Linux capability.

Option A: Security Context Capabilities should not add Sys Admin

kubernetes_pod spec.containers[n].security_context.capabilities.add should not include "SYS_ADMIN".

  1. Locate the following vulnerable pattern:

    resource "kubernetes_pod" "positive1" {
      metadata {
        name = "terraform-example"
      }

      spec {
        container = [
          {
            image = "nginx:1.7.9"
            name  = "example"

            security_context = {
              capabilities = {
                add = ["SYS_ADMIN"]
              }
            }

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          },
          {
            image = "nginx:1.7.9"
            name  = "example22222"

            security_context = {
              capabilities = {
                add = ["SYS_ADMIN"]
              }
            }

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          }
        ]
        // ...
      }
    }

    resource "kubernetes_pod" "positive2" {
      metadata {
        name = "terraform-example"
      }

      spec {
        container {
          image = "nginx:1.7.9"
          name  = "example"

          security_context {
            capabilities {
              add = ["SYS_ADMIN"]
            }
          }

          env {
            name  = "environment"
            value = "test"
          }

          port {
            container_port = 8080
          }

          liveness_probe {
            http_get {
              path = "/nginx_status"
              port = 80

              http_header {
                name  = "X-Custom-Header"
                value = "Awesome"
              }
            }

            initial_delay_seconds = 3
            period_seconds        = 3
          }
        }
        // ...
      }
    }
  2. Modify the config to something like the following:

    resource "kubernetes_pod" "negative3" {
      metadata {
        name = "terraform-example"
      }

      spec {
        container = [
          {
            image = "nginx:1.7.9"
            name  = "example"

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          },
          {
            image = "nginx:1.7.9"
            name  = "example2"

            env = {
              name  = "environment"
              value = "test"
            }

            port = {
              container_port = 8080
            }

            liveness_probe = {
              http_get = {
                path = "/nginx_status"
                port = 80

                http_header = {
                  name  = "X-Custom-Header"
                  value = "Awesome"
                }
              }

              initial_delay_seconds = 3
              period_seconds        = 3
            }
          }
        ]
        // ...
      }
    }

    resource "kubernetes_pod" "negative4" {
      metadata {
        name = "terraform-example"
      }

      spec {
        container {
          image = "nginx:1.7.9"
          name  = "example"

          env {
            name  = "environment"
            value = "test"
          }

          port {
            container_port = 8080
          }

          liveness_probe {
            http_get {
              path = "/nginx_status"
              port = 80

              http_header {
                name  = "X-Custom-Header"
                value = "Awesome"
              }
            }

            initial_delay_seconds = 3
            period_seconds        = 3
          }
        }
        // ...
      }
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴


EC2 Instance Using API Keys

EC2 instances should use roles to be granted access to other AWS services.

Option A: Remove API keys from EC2 instances

  • Do not include API keys in user_data, whether encoded or not
  • Do not include API keys in an EC2 instance by using a "remote-exec" provisioner
  • Do not include API keys in an EC2 instance by using a "file" provisioner
  1. Locate any of the following vulnerable patterns:

    Do not insert API keys into user_data as environment variables:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "positive1" {
      ami           = "ami-005e54dee72cc1d00" # us-west-2
      instance_type = "t2.micro"

      tags = {
        Name = "test"
      }

      user_data = <<EOF
    #!/bin/bash
    apt-get install -y awscli
    export AWS_ACCESS_KEY_ID=your_access_key_id_here
    export AWS_SECRET_ACCESS_KEY=your_secret_access_key_here
    EOF

      credit_specification {
        cpu_credits = "unlimited"
      }
    }

    Do not insert API keys in user_data so that they are inserted into configuration files:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "positive2" {
      ami           = "ami-005e54dee72cc1d00" # us-west-2
      instance_type = "t2.micro"

      tags = {
        Name = "test"
      }

      user_data = <<EOT
    #!/bin/bash
    apt-get install -y awscli
    cat << EOF > ~/.aws/config
    [default]
    aws_access_key_id = somekey
    aws_secret_access_key = somesecret
    EOF
    EOT

      credit_specification {
        cpu_credits = "unlimited"
      }
    }

    Do not insert API keys into user_data so that they are inserted into credentials files:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "positive3" {
      ami           = "ami-005e54dee72cc1d00" # us-west-2
      instance_type = "t2.micro"

      tags = {
        Name = "test"
      }

      user_data = <<EOT
    #!/bin/bash
    apt-get install -y awscli
    cat << EOF > ~/.aws/credentials
    [default]
    aws_access_key_id = somekey
    aws_secret_access_key = somesecret
    EOF
    EOT

      credit_specification {
        cpu_credits = "unlimited"
      }
    }

    Do not insert API keys into user_data_base64. Encoding provides no security:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "positive4" {
      ami           = "ami-005e54dee72cc1d00" # us-west-2
      instance_type = "t2.micro"

      tags = {
        Name = "test"
      }

      user_data_base64 = var.init_aws_cli

      credit_specification {
        cpu_credits = "unlimited"
      }
    }

    Or a similar example using user_data_base64:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "positive5" {
      ami           = "ami-005e54dee72cc1d00" # us-west-2
      instance_type = "t2.micro"

      tags = {
        Name = "test"
      }

      user_data_base64 = base64encode("apt-get install -y awscli; export AWS_ACCESS_KEY_ID=your_access_key_id_here; export AWS_SECRET_ACCESS_KEY=your_secret_access_key_here")

      credit_specification {
        cpu_credits = "unlimited"
      }
    }

    Do not insert API keys into user_data by inserting them as environment variables into a shell configuration file:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "positive6" {
      ami           = "ami-005e54dee72cc1d00" # us-west-2
      instance_type = "t2.micro"

      tags = {
        Name = "test"
      }

      user_data = <<EOT
    #cloud-config
    repo_update: true
    repo_upgrade: all

    packages:
      - awscli

    runcmd:
      - [ sh, -c, "echo export AWS_ACCESS_KEY_ID=my-key-id >> ~/.bashrc" ]
      - [ sh, -c, "echo export AWS_SECRET_ACCESS_KEY=my-secret >> ~/.bashrc" ]
    EOT

      credit_specification {
        cpu_credits = "unlimited"
      }
    }

    Do not insert API keys into EC2 instance credentials file using "remote-exec" provisioner:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "positive7" {
      ami           = "ami-005e54dee72cc1d00" # us-west-2
      instance_type = "t2.micro"

      tags = {
        Name = "test"
      }

      provisioner "remote-exec" {
        inline = [
          "wget -O - http://config.remote.server.com/aws-credentials > ~/.aws/credentials;"
        ]
      }

      credit_specification {
        cpu_credits = "unlimited"
      }
    }

    Do not insert API keys into EC2 instance credentials file using "file" provisioner:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "positive8" {
      ami           = "ami-005e54dee72cc1d00" # us-west-2
      instance_type = "t2.micro"

      tags = {
        Name = "test"
      }

      provisioner "file" {
        source      = "conf/aws-credentials"
        destination = "~/.aws/credentials"
      }
    }

    Do not insert API keys into EC2 instance shell configuration file using "remote-exec" provisioner:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "positive9" {
      ami           = "ami-005e54dee72cc1d00" # us-west-2
      instance_type = "t2.micro"

      tags = {
        Name = "test"
      }

      provisioner "remote-exec" {
        inline = [
          "echo export AWS_ACCESS_KEY_ID=my-key-id >> ~/.bashrc",
          "echo export AWS_SECRET_ACCESS_KEY=my-secret >> ~/.bashrc"
        ]
      }

      credit_specification {
        cpu_credits = "unlimited"
      }
    }
  2. Modify the config to something like the following:

    Using a role via iam_instance_profile:

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_iam_role_policy_attachment" "test_attach" {
      role       = aws_iam_role.test_role.name
      policy_arn = aws_iam_policy.test_policy.arn
    }

    resource "aws_iam_policy" "test_policy" {
      name        = "test_policy"
      description = "test policy"
      path        = "/"

      policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "s3:Get*",
            "s3:List*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        }
      ]
    }
    EOF
    }

    resource "aws_iam_role" "test_role" {
      name = "test_role"
      path = "/"

      assume_role_policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Principal": {
            "Service": "ec2.amazonaws.com"
          },
          "Effect": "Allow",
          "Sid": ""
        }
      ]
    }
    EOF
    }

    resource "aws_iam_instance_profile" "test_profile" {
      name = "test_profile"
      role = aws_iam_role.test_role.name
    }

    resource "aws_instance" "negative1" {
      ami           = "ami-005e54dee72cc1d00" # us-west-2
      instance_type = "t2.micro"

      tags = {
        Name = "test"
      }

      iam_instance_profile = aws_iam_instance_profile.test_profile.name

      credit_specification {
        cpu_credits = "unlimited"
      }
    }

    Or just remove them altogether:

    module "ec2_instance" {
      source  = "terraform-aws-modules/ec2-instance/aws"
      version = "~> 3.0"

      name = "single-instance"

      ami                    = "ami-ebd02392"
      instance_type          = "t2.micro"
      key_name               = "user1"
      monitoring             = true
      vpc_security_group_ids = ["sg-12345678"]
      subnet_id              = "subnet-eddcdzz4"

      tags = {
        Terraform   = "true"
        Environment = "dev"
      }
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴


EC2 Instance Using Default Security Group

EC2 instances should not use default security groups (security groups whose name is "default").

Option A: Remove any Default Security Groups

Remove any default security groups from the configuration. Add non-default security groups if needed.

  1. Locate the following vulnerable pattern:

    resource "aws_instance" "positive1" {
      ami           = "ami-003634241a8fcdec0"
      instance_type = "t3.micro"
      // ...
      security_groups = [aws_security_group.default.id]
    }

    Or:

    resource "aws_instance" "positive2" {
      ami           = "ami-003634241a8fcdec0"
      instance_type = "t2.micro"
      // ...
      vpc_security_group_ids = [aws_security_group.default.id]
    }
  2. Modify the config to something like the following:

    resource "aws_instance" "negative1" {
      ami           = "ami-003634241a8fcdec0"
      instance_type = "t3.micro"
      // ...
      security_groups = [aws_security_group.sg.id]
    }

    Or:

    resource "aws_instance" "negative2" {
      ami           = "ami-003634241a8fcdec0"
      instance_type = "t2.micro"
      // ...
      vpc_security_group_ids = [aws_security_group.sg.id]
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴
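The fixed examples reference a non-default security group named sg; a minimal sketch of such a group (the name and rule values are illustrative):

    resource "aws_security_group" "sg" {
      name   = "instance-sg"
      vpc_id = aws_vpc.main.id

      ingress {
        description = "HTTPS from within the VPC"
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = [aws_vpc.main.cidr_block]
      }
    }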


Google Project IAM Member Service Account Has Admin Role

Verifies that a Google Project IAM member does not grant an admin role to a service account.

Option A: Remove Service Account Admin

If the value of google_project_iam_member.member (or any entry in the google_project_iam_member.members list) starts with "serviceAccount:", the value of google_project_iam_member.role must not contain "roles/iam.serviceAccountAdmin".

  1. Locate the following vulnerable pattern:

    resource "google_project_iam_member" "positive1" {
      project = "your-project-id"
      role    = "roles/iam.serviceAccountAdmin"
      member  = "serviceAccount:[email protected]"
    }

    resource "google_project_iam_member" "positive2" {
      project = "your-project-id"
      role    = "roles/iam.serviceAccountAdmin"
      members = ["user:[email protected]", "serviceAccount:[email protected]"]
    }
  2. Modify the config to something like the following:

    resource "google_project_iam_member" "negative1" {
      project = "your-project-id"
      role    = "roles/editor"
      member  = "user:[email protected]"
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴


IAM Policies With Full Privileges

IAM policies should not allow full administrative privileges (for all resources).

Option A: Remove All actions from Policy Statements with Resource All

If policy.Statement.Resource has a value of "*" (All), policy.Statement.Action must not have a value of "*" (All).

Detailed Instructions

  1. Locate the following vulnerable pattern:

    resource "aws_iam_role_policy" "positive1" {
      name = "apigateway-cloudwatch-logging"
      role = aws_iam_role.apigateway_cloudwatch_logging.id

      policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["*"],
          "Resource": "*"
        }
      ]
    }
    EOF
    }
  2. Modify the config to something like the following where a single specific action is specified:

    resource "aws_iam_role_policy" "negative1" {
      name = "apigateway-cloudwatch-logging"
      role = aws_iam_role.apigateway_cloudwatch_logging.id

      policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["some:action"],
          "Resource": "*"
        }
      ]
    }
    EOF
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴


IAM Policy Grants AssumeRole Permission Across All Services

IAM role should not allow All ("*") services or Principals to assume it.

Option A: Remove the All Specifier from the Principal

assume_role_policy.Statement.Principal.AWS should not have a value of "*" (All).

Detailed Instructions

  1. Locate the following vulnerable pattern:

    // Create a role which OpenShift instances will assume.
    // This role has a policy saying it can be assumed by ec2
    // instances.
    resource "aws_iam_role" "positive1" {
      name = "${var.name_tag_prefix}-openshift-instance-role"

      assume_role_policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Principal": {
            "Service": "ec2.amazonaws.com",
            "AWS": "*"
          },
          "Effect": "Allow",
          "Sid": ""
        }
      ]
    }
    EOF
    }

    // This policy allows an instance to forward logs to CloudWatch, and
    // create the Log Stream or Log Group if it doesn't exist.
    resource "aws_iam_policy" "positive3" {
      name        = "${var.name_tag_prefix}-openshift-instance-forward-logs"
      path        = "/"
      description = "Allows an instance to forward logs to CloudWatch"

      policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
            "logs:DescribeLogStreams"
          ],
          "Resource": [
            "arn:aws:logs:*:*:*"
          ]
        }
      ]
    }
    EOF
    }

    // Attach the policies to the role.
    resource "aws_iam_policy_attachment" "positive4" {
      name       = "${var.name_tag_prefix}-openshift-attachment-forward-logs"
      roles      = ["${aws_iam_role.openshift-instance-role.name}"]
      policy_arn = "${aws_iam_policy.openshift-policy-forward-logs.arn}"
    }

    // Create an instance profile for the role.
    resource "aws_iam_instance_profile" "positive5" {
      name = "${var.name_tag_prefix}-openshift-instance-profile"
      role = "${aws_iam_role.openshift-instance-role.name}"
    }
  2. Modify the config to something like the following:

    // Create a role which OpenShift instances will assume.
    // This role has a policy saying it can be assumed by ec2
    // instances.
    resource "aws_iam_role" "negative1" {
      name = "${var.name_tag_prefix}-openshift-instance-role"

      assume_role_policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Principal": {
            "Service": "ec2.amazonaws.com"
          },
          "Effect": "Allow",
          "Sid": ""
        }
      ]
    }
    EOF
    }

    // This policy allows an instance to forward logs to CloudWatch, and
    // create the Log Stream or Log Group if it doesn't exist.
    resource "aws_iam_policy" "negative2" {
      name        = "${var.name_tag_prefix}-openshift-instance-forward-logs"
      path        = "/"
      description = "Allows an instance to forward logs to CloudWatch"

      policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
            "logs:DescribeLogStreams"
          ],
          "Resource": [
            "arn:aws:logs:*:*:*"
          ]
        }
      ]
    }
    EOF
    }

    // Attach the policies to the role.
    resource "aws_iam_policy_attachment" "negative3" {
      name       = "${var.name_tag_prefix}-openshift-attachment-forward-logs"
      roles      = ["${aws_iam_role.openshift-instance-role.name}"]
      policy_arn = "${aws_iam_policy.openshift-policy-forward-logs.arn}"
    }

    // Create an instance profile for the role.
    resource "aws_iam_instance_profile" "negative4" {
      name = "${var.name_tag_prefix}-openshift-instance-profile"
      role = "${aws_iam_role.openshift-instance-role.name}"
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴


IAM Policy Grants Full Permissions

IAM policies should not apply to all ('*') resources in a statement.

Option A: Remove the All Specifier from the Resource Statement of the Policy

aws_iam_role_policy.policy.Statement.Resource should not have a value of "*" (All).

Detailed Instructions

  1. Locate the following vulnerable pattern:

    resource "aws_iam_user" "positive1" {
      name          = "${local.resource_prefix.value}-user"
      force_destroy = true

      tags = {
        Name        = "${local.resource_prefix.value}-user"
        Environment = local.resource_prefix.value
      }
    }

    resource "aws_iam_access_key" "positive2" {
      user = aws_iam_user.user.name
    }

    resource "aws_iam_user_policy" "positive3" {
      name = "excess_policy"
      user = aws_iam_user.user.name

      policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "ec2:*",
            "s3:*",
            "lambda:*",
            "cloudwatch:*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        }
      ]
    }
    EOF
    }

    output "username" {
      value = aws_iam_user.user.name
    }

    output "secret" {
      value = aws_iam_access_key.user.encrypted_secret
    }

  2. Modify the config to something like the following:

    resource "aws_iam_user" "negative1" {
      name          = "${local.resource_prefix.value}-user"
      force_destroy = true

      tags = {
        Name        = "${local.resource_prefix.value}-user"
        Environment = local.resource_prefix.value
      }
    }

    resource "aws_iam_access_key" "negative2" {
      user = aws_iam_user.user.name
    }

    resource "aws_iam_user_policy" "negative3" {
      name = "excess_policy"
      user = aws_iam_user.user.name

      policy = <<EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "ec2:*",
            "s3:*",
            "lambda:*",
            "cloudwatch:*"
          ],
          "Effect": "Allow",
          "Resource": "SomeResource"
        }
      ]
    }
    EOF
    }

    output "username" {
      value = aws_iam_user.user.name
    }

    output "secret" {
      value = aws_iam_access_key.user.encrypted_secret
    }

  3. Test it

  4. Ship it 🚢 and relax 🌴


Limit Access to AWS Resources

Option A: Ensure sensitive resources are not public

In the context of Terraform, when a specific resource is marked as publicly accessible, it means that attackers may be able to interact with it. Resources that are identified include the following types:

  • aws_db_instance
  • aws_dms_replication_instance
  • aws_rds_cluster_instance
  • aws_redshift_cluster

Follow the steps below:

  1. Go through the issues that GuardRails identified in the PR

  2. Review the affected resources to determine whether they can be public

    resource "aws_db_instance" "insecure" {
      # ... other configuration ...
      publicly_accessible = true
    }
  3. If not, then either remove the publicly_accessible argument or change it to publicly_accessible = false

  4. Test the changes and ensure that everything is working as expected

  5. Ship it 🚢 and relax 🌴
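The remediated variant of the snippet above looks like this (the resource name secure is illustrative):

    resource "aws_db_instance" "secure" {
      # ... other configuration ...
      publicly_accessible = false
    }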

Option B: Ensure inbound traffic on AWS is restricted

AWS Security Groups can be configured to allow all incoming traffic, which violates security best practices.

  1. Go through the issues that GuardRails identified in the PR

  2. Review the aws_security_group or aws_security_group_rule resources where cidr_blocks contain /0

    resource "aws_security_group" "allow_tls" {
      name        = "allow_tls"
      description = "Allow TLS inbound traffic"
      vpc_id      = aws_vpc.main.id

      ingress {
        description = "TLS from VPC"
        from_port   = 443
        to_port     = 443
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }
  3. Ensure that the cidr_blocks are limited to the required IP address ranges, and that ports are restricted to those needed
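For example, the ingress block above could be limited to the VPC's own address range (a sketch; the right range depends on your network):

    ingress {
      description = "TLS from VPC"
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      cidr_blocks = [aws_vpc.main.cidr_block]
    }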

Option C: Ensure inbound traffic on Azure is restricted

Azure Network Security Groups can be configured to allow all incoming traffic, which violates security best practices.

  1. Go through the issues that GuardRails identified in the PR

  2. Review the azurerm_network_security_rule resources where source_address_prefix contains /0 or *

    resource "azurerm_network_security_rule" "example" {
      name                        = "test123"
      priority                    = 100
      direction                   = "Inbound"
      access                      = "Allow"
      protocol                    = "Tcp"
      source_port_range           = "*"
      destination_port_range      = "*"
      source_address_prefix       = "*"
      destination_address_prefix  = "*"
      resource_group_name         = azurerm_resource_group.example.name
      network_security_group_name = azurerm_network_security_group.example.name
    }
  3. Ensure that source_address_prefix is limited to the required IP address ranges, and that ports are restricted to those needed
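For example, the rule above could be narrowed to a specific port and source range (a sketch; 10.0.0.0/16 and port 443 are illustrative values):

    resource "azurerm_network_security_rule" "example" {
      name                        = "test123"
      priority                    = 100
      direction                   = "Inbound"
      access                      = "Allow"
      protocol                    = "Tcp"
      source_port_range           = "*"
      destination_port_range      = "443"
      source_address_prefix       = "10.0.0.0/16"
      destination_address_prefix  = "*"
      resource_group_name         = azurerm_resource_group.example.name
      network_security_group_name = azurerm_network_security_group.example.name
    }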

Option D: Ensure inbound traffic on GCP is restricted

GCP firewalls can be configured to allow all incoming traffic, which violates security best practices.

  1. Go through the issues that GuardRails identified in the PR

  2. Review the google_compute_firewall resources where source_ranges contains /0

    resource "google_compute_firewall" "project-firewall-allow-ssh" {
      name    = "${var.vpc_name}-allow-something"
      network = "${google_compute_network.project-network.self_link}"
      // ...
      source_ranges = ["0.0.0.0/0"]
    }
  3. Ensure that source_ranges is limited to the required IP address ranges

Network ACL With Unrestricted Access To RDP

RDP (TCP:3389) should not be public in an AWS Network ACL.

Option A: Make sure that RDP port 3389 is not accessible to the world via Network ACL

RDP port (3389) ingress should not be accessible to the world (0.0.0.0/0).

Detailed Instructions

  1. Locate one of the following vulnerable patterns or a pattern where the RDP port ingress is open to the world:

    Vulnerable Pattern 1:

    provider "aws" {
    region = "us-east-1"
    }

    terraform {
    required_providers {
    aws = {
    source = "hashicorp/aws"
    version = "~> 3.0"
    }
    }
    }

    resource "aws_network_acl" "positive1" {
    vpc_id = aws_vpc.main.id

    egress = [
    {
    protocol = "tcp"
    rule_no = 200
    action = "allow"
    cidr_block = "10.3.0.0/18"
    from_port = 443
    to_port = 443
    }
    ]

    ingress = [
    {
    protocol = "tcp"
    rule_no = 100
    action = "allow"
    cidr_block = "0.0.0.0/0"
    from_port = 3389
    to_port = 3389
    }
    ]

    tags = {
    Name = "main"
    }
    }

    Vulnerable Pattern 2:

    provider "aws" {
    region = "us-east-1"
    }

    terraform {
    required_providers {
    aws = {
    source = "hashicorp/aws"
    version = "~> 3.0"
    }
    }
    }

    resource "aws_network_acl" "positive2" {
    vpc_id = aws_vpc.main.id

    tags = {
    Name = "main"
    }
    }

    resource "aws_network_acl_rule" "positive2" {
    network_acl_id = aws_network_acl.positive2.id
    rule_number = 100
    egress = false
    protocol = "tcp"
    rule_action = "allow"
    from_port = 3389
    to_port = 3389
    cidr_block = "0.0.0.0/0"
    }
  2. Modify the config to something like the following:

    Replacement Pattern 1:

    provider "aws" {
    region = "us-east-1"
    }

    terraform {
    required_providers {
    aws = {
    source = "hashicorp/aws"
    version = "~> 3.0"
    }
    }
    }

    resource "aws_network_acl" "negative1" {
    vpc_id = aws_vpc.main.id

    egress = [
    {
    protocol = "tcp"
    rule_no = 200
    action = "allow"
    cidr_block = "10.3.0.0/18"
    from_port = 443
    to_port = 443
    }
    ]

    ingress = [
    {
    protocol = "tcp"
    rule_no = 100
    action = "allow"
    cidr_block = "10.3.0.0/18"
    from_port = 3389
    to_port = 3389
    }
    ]

    tags = {
    Name = "main"
    }
    }

    Replacement Pattern 2:

    provider "aws" {
    region = "us-east-1"
    }

    terraform {
    required_providers {
    aws = {
    source = "hashicorp/aws"
    version = "~> 3.0"
    }
    }
    }

    resource "aws_network_acl" "negative2" {
    vpc_id = aws_vpc.main.id

    tags = {
    Name = "main"
    }
    }

    resource "aws_network_acl_rule" "negative2" {
    network_acl_id = aws_network_acl.negative2.id
    rule_number = 100
    egress = false
    protocol = "tcp"
    rule_action = "allow"
    from_port = 3389
    to_port = 3389
    cidr_block = "10.3.0.0/18"
    }

    More Replacement Patterns:

    module "vpc" {
    source = "terraform-aws-modules/vpc/aws"
    version = "3.7.0"

    name = "my-vpc"
    cidr = "10.0.0.0/16"

    azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
    private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
    public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

    enable_nat_gateway = true
    enable_vpn_gateway = true

    tags = {
    Terraform = "true"
    Environment = "dev"
    }
    }

    module "vpc" {
    source = "terraform-aws-modules/vpc/aws"
    version = "3.7.0"

    name = "my-vpc"
    cidr = "10.0.0.0/16"

    azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
    private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
    public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

    default_network_acl_ingress = [
    {
    "action" : "allow",
    "cidr_block" : "0.0.0.0/0",
    "from_port" : 0,
    "protocol" : "-1",
    "rule_no" : 100,
    "to_port" : 0
    },
    {
    "action" : "allow",
    "cidr_block" : "10.3.0.0/18",
    "from_port" : 0,
    "protocol" : "-1",
    "rule_no" : 3389,
    "to_port" : 0
    }
    ]

    enable_nat_gateway = true
    enable_vpn_gateway = true

    tags = {
    Terraform = "true"
    Environment = "dev"
    }
    }
  3. Test it

  4. Although the replacement patterns shown above will pass this rule, a much better option, where possible, is to not expose the RDP port at all and instead pass all RDP traffic through a VPN or an SSH tunnel

  5. Ship it 🚢 and relax 🌴

References:

OSLogin Disabled

Verifies that OS Login is enabled. Setting enable-oslogin in project-wide metadata ensures that all instances in your project conform to the specified value (true or false).

After you enable OS Login on one or more instances in your project, those VMs accept connections only from user accounts that have the necessary IAM roles in your project or organization.

Enabling Compute Engine OS Login for a project ensures that SSH keys used to access instances are mapped to IAM users. If access is revoked for an IAM user, associated SSH keys are revoked as well. This streamlines handling compromised SSH key pairs and the process for revoking access.

Option A: Make sure Google Compute Project Metadata Enable OSLogin is not false or undefined

Locate google_compute_project_metadata.metadata and set the enable-oslogin property value to true.

Detailed Instructions

  1. Locate the following vulnerable pattern:

    resource "google_compute_project_metadata" "positive1" {
    metadata = {
    enable-oslogin = false
    }
    }

    resource "google_compute_project_metadata" "positive2" {
    metadata = {
    foo = "bar"
    }
    }
  2. Modify the config to something like the following:

    resource "google_compute_project_metadata" "negative1" {
    metadata = {
    enable-oslogin = true
    }
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴

References:

RDP Access Is Not Restricted

Check if the Google compute firewall allows unrestricted RDP access (port 3389):

  • allow.protocol with value "tcp" or "udp"
  • allow.ports contains port 3389 somewhere in its range
  • source_ranges contains one or both of "0.0.0.0/0" (IPv4 all) | "::/0" (IPv6 all)
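The "somewhere in its range" condition matters because a spec such as "21-3390" covers port 3389 without naming it. That check can be sketched as follows (a rough helper of our own, not GuardRails code):

```python
RDP_PORT = 3389

def covers_port(ports, port=RDP_PORT):
    """True if a ports list such as ["80", "1000-2000", "21-3390"] covers `port`."""
    for spec in ports:
        for part in str(spec).split(","):
            part = part.strip()
            if "-" in part:
                lo, hi = (int(x) for x in part.split("-", 1))
                if lo <= port <= hi:
                    return True
            elif part and int(part) == port:
                return True
    return False

print(covers_port(["80", "8080", "1000-2000", "3389"]))    # True
print(covers_port(["80", "8080", "1000-2000", "21-3390"]))  # True
print(covers_port(["80", "8080", "1000-2000"]))             # False
```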

Option A: Remove Google Cloud Platform (GCP) Compute Firewall Rule Allowing Unrestricted RDP Ingress Traffic

There should not be a google_compute_firewall resource whose direction property is set to (or defaults to) "INGRESS", that allows port "3389" with unrestricted source_ranges "0.0.0.0/0" (IPv4 all) or "::/0" (IPv6 all) and a protocol value of "tcp" or "udp".

Detailed Instructions

  1. Locate the following vulnerable pattern:

    resource "google_compute_firewall" "positive1" {
    name = "test-firewall"
    network = google_compute_network.default.name
    direction = "INGRESS"

    allow {
    protocol = "icmp"
    }

    allow {
    protocol = "tcp"
    ports = ["80", "8080", "1000-2000","3389"]
    }

    source_tags = ["web"]
    source_ranges = ["0.0.0.0/0"]
    }

    resource "google_compute_firewall" "positive2" {
    name = "test-firewall"
    network = google_compute_network.default.name

    allow {
    protocol = "udp"
    ports = ["80", "8080", "1000-2000","21-3390"]
    }

    source_tags = ["web"]
    source_ranges = ["::/0"]
    }
  2. Modify the config to something like the following:

    resource "google_compute_firewall" "negative1" {
    name = "test-firewall"
    network = google_compute_network.default.name

    allow {
    protocol = "icmp"
    }

    allow {
    protocol = "tcp"
    ports = ["80", "8080", "1000-2000"]
    }

    source_tags = ["web"]
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴

References:

RDP Is Exposed To The Internet

Port 3389 (Remote Desktop) should not be exposed to the internet.

Option A: Make sure that RDP is not exposed to the world

An azurerm_network_security_rule is vulnerable when all of the following property values are combined:

  • destination_port_range allows RDP port 3389
  • protocol is "TCP", "UDP", or "*"
  • access is "Allow"
  • source_address_prefix allows all sources (which can be expressed in several forms)
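Because "all sources" can be written several ways ("*", "0.0.0.0", "/0", "internet", "any", or any prefix ending in /0, as the vulnerable patterns below show), a naive equality check misses most of them. A hedged sketch of the normalization (our own helper, not GuardRails code):

```python
OPEN_ALIASES = {"*", "any", "internet", "0.0.0.0", "0.0.0.0/0", "::/0"}

def allows_any_source(prefix):
    """True if an NSG source_address_prefix effectively means 'everyone'."""
    p = prefix.strip().lower()
    return p in OPEN_ALIASES or p.endswith("/0")

for p in ["*", "0.0.0.0", "/0", "internet", "any", "34.15.11.3/0", "10.0.0.0/8"]:
    print(p, allows_any_source(p))
```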

Detailed Instructions

  1. Locate the following vulnerable patterns:

    Vulnerable Pattern 1:

    resource "azurerm_network_security_rule" "positive1" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "3389"
    source_address_prefix = "*"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Vulnerable Pattern 2:

    resource "azurerm_network_security_rule" "positive2" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "3389-3390"
    source_address_prefix = "*"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Vulnerable Pattern 3:

    resource "azurerm_network_security_rule" "positive3" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "3388-3389"
    source_address_prefix = "*"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Vulnerable Pattern 4:

    resource "azurerm_network_security_rule" "positive4" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "3389"
    source_address_prefix = "0.0.0.0"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Vulnerable Pattern 5:

    resource "azurerm_network_security_rule" "positive5" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "3389,3391"
    source_address_prefix = "34.15.11.3/0"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Vulnerable Pattern 6:

    resource "azurerm_network_security_rule" "positive6" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "3389"
    source_address_prefix = "/0"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Vulnerable Pattern 7:

    resource "azurerm_network_security_rule" "positive7" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "3388-3390, 23000"
    source_address_prefix = "internet"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Vulnerable Pattern 8:

    resource "azurerm_network_security_rule" "positive8" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "3387, 3389 , 3391 "
    source_address_prefix = "any"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Vulnerable Pattern 9:

    resource "azurerm_network_security_rule" "positive9" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "*"
    source_port_range = "*"
    destination_port_range = "3388, 3389-3390,2250"
    source_address_prefix = "/0"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Vulnerable Pattern 10:

    resource "azurerm_network_security_rule" "positive10" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "*"
    source_port_range = "*"
    destination_port_range = "111-211, 2000-4430, 1-2 , 3"
    source_address_prefix = "internet"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }
  2. Modify the config to something more like the following:

    Replacement Pattern 1:

    resource "azurerm_network_security_rule" "negative1" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Deny"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "3389"
    source_address_prefix = "*"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Replacement Pattern 2:

    resource "azurerm_network_security_rule" "negative2" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "UDP"
    source_port_range = "*"
    destination_port_range = "2000-5000"
    source_address_prefix = "*"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Replacement Pattern 3:

    resource "azurerm_network_security_rule" "negative3" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "4030-5100"
    source_address_prefix = "0.0.0.0"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Replacement Pattern 4:

    resource "azurerm_network_security_rule" "negative4" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "2100-5300"
    source_address_prefix = "192.168.0.0"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Replacement Pattern 5:

    resource "azurerm_network_security_rule" "negative5" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "3389"
    source_address_prefix = "/1"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Replacement Pattern 6:

    resource "azurerm_network_security_rule" "negative6" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "*"
    source_port_range = "*"
    destination_port_range = "3388"
    source_address_prefix = "/0"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Replacement Pattern 7:

    resource "azurerm_network_security_rule" "negative7" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "UDP"
    source_port_range = "*"
    destination_port_range = "3389"
    source_address_prefix = "internet"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Replacement Pattern 8:

    resource "azurerm_network_security_rule" "negative8" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "*"
    source_port_range = "*"
    destination_port_range = "3388, 3390,1000-2000"
    source_address_prefix = "any"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Replacement Pattern 9:

    resource "azurerm_network_security_rule" "negative9" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "UDP"
    source_port_range = "*"
    destination_port_range = "3389"
    source_address_prefix = "/0"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Replacement Pattern 10:

    resource "azurerm_network_security_rule" "negative10" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "3389 , 3390"
    source_address_prefix = "0.0.1.0"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }

    Another Replacement Pattern:

    resource "azurerm_network_security_rule" "negative11" {
    name = "example"
    priority = 100
    direction = "Inbound"
    access = "Allow"
    protocol = "TCP"
    source_port_range = "*"
    destination_port_range = "338,389"
    source_address_prefix = "0.0.0.0"
    destination_address_prefix = "*"
    resource_group_name = azurerm_resource_group.example.name
    network_security_group_name = azurerm_network_security_group.example.name
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴

References:

S3 Bucket Access to Any Principal

S3 Buckets must not allow actions from all Principals, to prevent leaking private information to the entire internet or allowing unauthorized data tampering/deletion. This means the Effect must not be "Allow" when there are all ("*") Principals.

Option A: Change Effect to Deny when all Principals exist

policy.Statement should not contain a map with Effect of "Allow" where there are all ("*") Principals.
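This condition can be approximated in Python as follows (a sketch of our own, assuming Principal is either "*" or a map of string values; real policies may also use lists):

```python
import json

def allows_all_principals(policy_json):
    """Flag Statements with Effect 'Allow' and a wildcard Principal."""
    for stmt in json.loads(policy_json).get("Statement", []):
        principal = stmt.get("Principal")
        wildcard = principal == "*" or (
            isinstance(principal, dict) and "*" in principal.values()
        )
        if stmt.get("Effect") == "Allow" and wildcard:
            return True
    return False

allow_all = '{"Statement": [{"Effect": "Allow", "Principal": {"AWS": "*"}}]}'
deny_all = '{"Statement": [{"Effect": "Deny", "Principal": "*"}]}'
print(allows_all_principals(allow_all))  # True
print(allows_all_principals(deny_all))   # False
```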

Detailed Instructions

  1. Locate the following vulnerable pattern:

    resource "aws_s3_bucket_policy" "positive1" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": {
    "AWS": "*"
    },
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  2. Modify the config to something like the following:

    resource "aws_s3_bucket_policy" "negative1" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    The policy could also be expressed in other forms.

  3. Test it

  4. Ship it 🚢 and relax 🌴

References:

S3 Bucket Allows Delete Action From All Principals

S3 Buckets must not allow the Delete Action from all Principals, to prevent leaking private information to the entire internet or allowing unauthorized data tampering/deletion. This means the Effect must not be "Allow" when the Action is "Delete", for all Principals.

Option A: Make sure to not Allow Delete Action for All Principals

policy.Statement should not contain a map with Effect of "Allow" where there are all ("*") Principals when an Action property with value "s3:DeleteObject" exists.

Detailed Instructions

  1. Locate the following vulnerable patterns:

    Vulnerable Pattern 1:

    resource "aws_s3_bucket_policy" "positive1" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:DeleteObject",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Vulnerable Pattern 2:

    resource "aws_s3_bucket_policy" "positive2" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": {
    "AWS": "*"
    },
    "Action": "s3:DeleteObject",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Another Vulnerable Pattern:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:DeleteObject",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  2. Modify the config to something like the following:

    Replacement Pattern 1:

    resource "aws_s3_bucket_policy" "negative1" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Replacement Pattern 2:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴

References:

S3 Bucket Allows Get Action From All Principals

S3 Buckets must not allow the Get Action from all Principals, to prevent leaking private information to the entire internet or allowing unauthorized data tampering/deletion. This means the Effect must not be "Allow" when the Action is "Get", for all Principals.

Option A: Make sure to not Allow Get Action for All Principals

policy.Statement should not contain a map with Effect of "Allow" where there are all ("*") Principals when an Action property with value "s3:GetObject" exists.
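This action-specific condition can be sketched in Python like so (an illustrative helper of our own; Principal lists and Action arrays are not handled):

```python
import json

def allows_action_to_everyone(policy_json, action="s3:GetObject"):
    """Flag Statements combining Effect 'Allow', a wildcard Principal, and `action`."""
    for stmt in json.loads(policy_json).get("Statement", []):
        principal = stmt.get("Principal")
        wildcard = principal == "*" or (
            isinstance(principal, dict) and "*" in principal.values()
        )
        if (stmt.get("Effect") == "Allow"
                and wildcard
                and stmt.get("Action") == action):
            return True
    return False

public_get = '{"Statement": [{"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject"}]}'
print(allows_action_to_everyone(public_get))  # True
```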

Detailed Instructions

  1. Locate the following vulnerable pattern:

    Vulnerable Pattern 1:

    resource "aws_s3_bucket" "positive1" {
    bucket = "my_tf_test_bucket"
    }

    resource "aws_s3_bucket_policy" "positive2" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    resource "aws_s3_bucket_policy" "positive3" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": {
    "AWS": "*"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Vulnerable Pattern 2:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": {
    "AWS": "*"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  2. Modify the config to something like the following:

    Replacement Pattern 1:

    resource "aws_s3_bucket" "negative1" {
    bucket = "my_tf_test_bucket"
    }

    resource "aws_s3_bucket_policy" "negative2" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Replacement Pattern 2:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴

References:

S3 Bucket Allows Put Action From All Principals

S3 Buckets must not allow the Put Action from all Principals, to prevent leaking private information to the entire internet or allowing unauthorized data tampering/deletion. This means the Effect must not be "Allow" when the Action is "Put", for all Principals.

Option A: Make sure to not Allow Put Action for All Principals

policy.Statement should not contain a map with Effect of "Allow" where there are all ("*") Principals when an Action property with the value "s3:PutObject" exists.

Detailed Instructions

  1. Locate the following vulnerable patterns:

    Vulnerable Pattern 1:

    resource "aws_s3_bucket_policy" "positive1" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Vulnerable Pattern 2:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": {
    "AWS": "*"
    },
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  2. Modify the config to something like the following:

    Replacement Pattern 1:

    resource "aws_s3_bucket_policy" "negative1" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Replacement Pattern 2:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴

References:

S3 Bucket ACL Allows Read Or Write to All Users

S3 bucket with public READ/WRITE access.

Option A: Make sure not to have ACL set to Public Read/Write

  • aws_s3_bucket.acl property should not have value "public-read" or "public-read-write"
  • module "s3_bucket" should not have property acl value "public-read" or "public-read-write"
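A quick way to spot these values before review is a plain text scan of the configuration; for example (a rough grep-style sketch, not how GuardRails works internally):

```python
import re

# Matches acl assignments to either public ACL value in Terraform source.
PUBLIC_ACL_RE = re.compile(r'\bacl\s*=\s*"(public-read|public-read-write)"')

def find_public_acls(tf_source):
    """Return any public ACL values assigned in a Terraform source string."""
    return [m.group(1) for m in PUBLIC_ACL_RE.finditer(tf_source)]

print(find_public_acls('acl = "public-read-write"'))  # ['public-read-write']
print(find_public_acls('acl = "private"'))            # []
```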

Detailed Instructions

  1. Locate the following vulnerable patterns:

    Vulnerable pattern resource "public-read":

    resource "aws_s3_bucket" "positive1" {
    bucket = "my-tf-test-bucket"
    acl = "public-read"

    tags = {
    Name = "My bucket"
    Environment = "Dev"
    }

    versioning {
    enabled = true
    }
    }

    Vulnerable pattern resource "public-read-write":

    resource "aws_s3_bucket" "positive2" {
    bucket = "my-tf-test-bucket"
    acl = "public-read-write"

    tags = {
    Name = "My bucket"
    Environment = "Dev"
    }

    versioning {
    enabled = true
    }
    }

    Vulnerable pattern module "public-read":

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "public-read"

    versioning = {
    enabled = true
    }
    }

    Vulnerable pattern module "public-read-write":

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "public-read-write"

    versioning = {
    enabled = true
    }
    }
  2. Modify the config to something like the following:

    Replacement pattern resource "private":

    resource "aws_s3_bucket" "negative1" {
    bucket = "my-tf-test-bucket"
    acl = "private"

    tags = {
    Name = "My bucket"
    Environment = "Dev"
    }

    versioning {
    enabled = true
    }
    }

    Replacement pattern module "private":

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }
    }
  3. Test it

  4. Ship it ๐Ÿšข and relax ๐ŸŒด

References:

S3 Bucket ACL Allows Read to Any Authenticated User

Misconfigured S3 buckets can leak private information to the entire internet or allow unauthorized data tampering/deletion.

Option A: An S3 Bucket Should Not Have a Permission of Authenticated Read

Neither resource aws_s3_bucket.acl nor module s3_bucket.acl should have a value of "authenticated-read".

Detailed Instructions

Remove or replace the value "authenticated-read" in any resource aws_s3_bucket.acl or module s3_bucket.acl configuration with a less permissive setting, such as "private".

  1. Locate one of the following vulnerable patterns:

    Vulnerable pattern resource:

    resource "aws_s3_bucket" "positive1" {
    bucket = "my-tf-test-bucket"
    acl = "authenticated-read"

    tags = {
    Name = "My bucket"
    Environment = "Dev"
    }
    }

    Vulnerable pattern module:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "authenticated-read"

    versioning = {
    enabled = true
    }
    }
  2. Modify the config to something like the following:

    Replacement pattern resource:

    resource "aws_s3_bucket" "negative1" {
    bucket = "my-tf-test-bucket"
    acl = "private"

    tags = {
    Name = "My bucket"
    Environment = "Dev"
    }
    }

    Replacement pattern module:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴

References:

S3 Bucket Allows All Actions From All Principals

S3 buckets must not allow all actions (Action containing "s3:*") from all principals ("*"), to prevent leaking private information to the entire internet or allowing unauthorized data tampering/deletion. This means the Effect must not be "Allow" when the Action contains "s3:*", for all principals.

Option A: Make sure that S3 Bucket does not allow all actions from all Principals

If an aws_s3_bucket_policy has a Statement whose Effect is "Allow" and whose Principal and Action contain "*", take the following action.

Detailed Instructions

  1. Locate one of the following vulnerable patterns:

    Vulnerable pattern resource:

    resource "aws_s3_bucket_policy" "positive1" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Vulnerable pattern module:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": {
    "AWS": "*"
    },
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Vulnerable pattern resource:

    resource "aws_s3_bucket_policy" "positive2" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": {
    "AWS": "*"
    },
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  2. Modify the config to something like the following:

    Replacement pattern resource:

    resource "aws_s3_bucket_policy" "negative2" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Replacement pattern module:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴
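As an alternative to hand-written heredoc JSON, the policy can be generated with the aws_iam_policy_document data source, which makes wildcard principals easier to spot in review. A sketch, with an illustrative account ID, role name, and statement:

```hcl
data "aws_iam_policy_document" "limited" {
  statement {
    sid    = "AllowAppReadOnly"
    effect = "Allow"

    # A named principal instead of "*"
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::123456789012:role/app-role"]
    }

    # A specific action instead of "s3:*"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::my_tf_test_bucket/*"]
  }
}

resource "aws_s3_bucket_policy" "limited" {
  bucket = aws_s3_bucket.b.id
  policy = data.aws_iam_policy_document.limited.json
}
```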

References:

S3 Bucket Allows List Action From All Principals

S3 buckets must not allow the List action ("s3:ListObjects") from all principals, to prevent leaking private information to the entire internet or allowing unauthorized data tampering/deletion. This means the Effect must not be "Allow" when the Action is "s3:ListObjects", for all principals ("*").

Option A: Remove the List Action from a Principal specifying All

If an AWS S3 bucket policy has a Statement whose Effect is "Allow", whose Principal contains "*", and whose Action is "s3:ListObjects", take the following action.

Detailed Instructions

  1. Remove any S3 bucket policy.Statement.Action specifying "s3:ListObjects" where policy.Statement.Principal contains a value of "*"

  2. Locate one of the following vulnerable patterns:

    Vulnerable pattern resource:

    resource "aws_s3_bucket_policy" "positive1" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:ListObjects",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Vulnerable pattern module:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": {
    "AWS": "*"
    },
    "Action": "s3:ListObjects",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Vulnerable pattern resource:

    resource "aws_s3_bucket_policy" "positive2" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Allow",
    "Principal": {
    "AWS": "*"
    },
    "Action": "s3:ListObjects",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  3. Modify the config to something like the following:

    Replacement pattern resource:

    resource "aws_s3_bucket_policy" "negative1" {
    bucket = aws_s3_bucket.b.id

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }

    Replacement pattern module:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  4. Test it

  5. Ship it 🚢 and relax 🌴

References:

S3 Bucket Allows Public Policy

S3 bucket should not allow public policy.

Option A: Intentionally set S3 Bucket Block Public Policy to true

If you have an S3 bucket whose block_public_policy is set to false or not defined at all (directly or via aws_s3_bucket_public_access_block), set the block_public_policy value to true.

Detailed Instructions

block_public_policy must be defined and have its value set to true.

  1. Locate one of the following vulnerable patterns:

    Vulnerable pattern via resource aws_s3_bucket_public_access_block. Also, keep in mind that if block_public_policy is not defined then the bucket is still vulnerable:

    resource "aws_s3_bucket" "positive1" {
    bucket = "example"
    }

    resource "aws_s3_bucket_public_access_block" "positive2" {
    bucket = aws_s3_bucket.example.id

    block_public_acls = true
    block_public_policy = false
    ignore_public_acls = false
    }

    resource "aws_s3_bucket_public_access_block" "positive3" {
    bucket = aws_s3_bucket.example.id

    block_public_acls = true
    ignore_public_acls = false
    }

    Vulnerable pattern via module s3_bucket. Also, keep in mind that if block_public_policy is not defined then the bucket is still vulnerable:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"
    restrict_public_buckets = true
    block_public_acls = true
    block_public_policy = false

    versioning = {
    enabled = true
    }

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  2. Modify the config to something like the following:

    Replacement pattern via resource aws_s3_bucket_public_access_block:

    resource "aws_s3_bucket" "negative1" {
    bucket = "example"
    }

    resource "aws_s3_bucket_public_access_block" "negative2" {
    bucket = aws_s3_bucket.example.id

    block_public_acls = true
    block_public_policy = true
    ignore_public_acls = false
    }

    Replacement pattern via module s3_bucket:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"
    restrict_public_buckets = true
    block_public_acls = true
    block_public_policy = true

    versioning = {
    enabled = true
    }

    policy = <<POLICY
    {
    "Version": "2012-10-17",
    "Id": "MYBUCKETPOLICY",
    "Statement": [
    {
    "Sid": "IPAllow",
    "Effect": "Deny",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::my_tf_test_bucket/*",
    "Condition": {
    "IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
    }
    }
    ]
    }
    POLICY
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴
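For defense in depth, all four public-access settings can be enabled together rather than only block_public_policy; a sketch (resource names are illustrative):

```hcl
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id

  # Block both ACL-based and policy-based public access
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```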

References:

S3 Bucket With All Permissions

S3 buckets must not grant all permissions, to prevent leaking private information to the entire internet or allowing unauthorized data tampering/deletion.

Option A: Remove the Action All from a Policy Statement Where Effect is Allow

Where the Effect is "Allow", the Action should not be or contain All ("*").

Detailed Instructions

  1. Remove or replace any S3 bucket policy.Statement.Action specifying All ("*") where the Effect is "Allow"

  2. Locate one of the following vulnerable patterns:

    Vulnerable pattern via resource:

    resource "aws_s3_bucket" "positive1" {
    bucket = "S3B_181355"
    acl = "private"

    policy = <<EOF
    {
    "Id": "id113",
    "Version": "2012-10-17",
    "Statement": [
    {
    "Action": [
    "s3:*"
    ],
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::S3B_181355/*",
    "Principal": "*"
    }
    ]
    }
    EOF
    }

    Vulnerable pattern via module:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }

    policy = <<EOF
    {
    "Id": "id113",
    "Version": "2012-10-17",
    "Statement": [
    {
    "Action": [
    "s3:*"
    ],
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::S3B_181355/*",
    "Principal": "*"
    }
    ]
    }
    EOF
    }
  3. Modify the config to something like the following:

    Replacement pattern via resource:

    resource "aws_s3_bucket" "negative1" {
    bucket = "S3B_181355"
    acl = "private"

    policy = <<EOF
    {
    "Id": "id113",
    "Version": "2012-10-17",
    "Statement": [
    {
    "Action": [
    "s3:PutObject"
    ],
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::S3B_181355/*",
    "Principal": "*"
    }
    ]
    }
    EOF
    }

    Replacement pattern via module:

    module "s3_bucket" {
    source = "terraform-aws-modules/s3-bucket/aws"
    version = "3.7.0"

    bucket = "my-s3-bucket"
    acl = "private"

    versioning = {
    enabled = true
    }

    policy = <<EOF
    {
    "Id": "id113",
    "Version": "2012-10-17",
    "Statement": [
    {
    "Action": [
    "s3:PutObject"
    ],
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::S3B_181355/*",
    "Principal": "*"
    }
    ]
    }
    EOF
    }
  4. Test it

  5. Ship it 🚢 and relax 🌴

References:

S3 Buckets Are Publicly Available

Option A: Limit access to S3 buckets

In the context of Terraform, an S3 bucket configured with certain ACLs is publicly accessible. In some cases that allows not just read access to the data in the bucket, but even write access. The following ACLs are flagged:

  • public-read
  • public-read-write
  • website

Follow the steps below:

  1. Go through the issues that GuardRails identified

  2. Review the affected buckets to determine whether the ACLs are correct

    resource "aws_s3_bucket" "b" {
    bucket = "my-tf-test-bucket"
    acl = "public-read-write"

    tags = {
    Name = "My bucket"
    Environment = "Dev"
    }
    }
  3. If not, then either remove the acl argument or change it to the right alternative

  4. Test the changes and ensure that everything is working as expected

  5. Ship it 🚢 and relax 🌴

Security Group is Not Configured

An Azure Virtual Network subnet must be configured with a Network Security Group, which means the attribute security_group must be defined and not empty.

Option A: Make sure your subnet has a non-empty Security Group

azure_virtual_network.subnet.security_group should be defined and have a non-empty value.

Detailed Instructions

  1. Locate the following vulnerable pattern:

    resource "azure_virtual_network" "positive1" {
    name = "test-network"
    address_space = ["10.1.2.0/24"]
    location = "West US"

    subnet {
    name = "subnet1"
    address_prefix = "10.1.2.0/25"
    // Notice no security_group defined... so vulnerable.
    }
    }

    resource "azure_virtual_network" "positive2" {
    name = "test-network"
    address_space = ["10.1.2.0/24"]
    location = "West US"

    subnet {
    name = "subnet1"
    address_prefix = "10.1.2.0/25"
    // security_group defined but empty... still vulnerable
    security_group = ""
    }
    }
  2. Modify the config to something like the following:

    resource "azure_virtual_network" "negative1" {
    name = "test-network"
    address_space = ["10.1.2.0/24"]
    location = "West US"

    subnet {
    name = "subnet1"
    address_prefix = "10.1.2.0/25"
    // Reference an existing Network Security Group here
    security_group = "my-network-security-group"
    }
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴
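The azure provider used above is the legacy (classic) provider. On the current azurerm provider, the same requirement is expressed by associating a Network Security Group with the subnet; a sketch assuming an existing azurerm_resource_group and azurerm_subnet named "example":

```hcl
resource "azurerm_network_security_group" "example" {
  name                = "example-nsg"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

# Attach the NSG to the subnet so its traffic is filtered
resource "azurerm_subnet_network_security_group_association" "example" {
  subnet_id                 = azurerm_subnet.example.id
  network_security_group_id = azurerm_network_security_group.example.id
}
```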

References:

SQL DB Instance Is Publicly Accessible

Google Cloud SQL instances should not be publicly accessible.

Option A: Make the following changes as necessary

  • Each authorized_networks address must be trusted and must not allow all traffic (0.0.0.0/0)
  • If there are no authorized_networks, ipv4_enabled must be disabled (have a value of false) and private_network must be defined
  • ip_configuration must be defined and allow only trusted networks

Detailed Instructions

  1. Locate one of the following vulnerable patterns:

    resource "google_sql_database_instance" "positive1" {
    name = "master-instance"
    database_version = "POSTGRES_11"
    region = "us-central1"

    settings {
    # Second-generation instance tiers are based on the machine
    # type. See argument reference below.
    tier = "db-f1-micro"
    }
    }

    resource "google_sql_database_instance" "positive2" {
    name = "postgres-instance-2"
    database_version = "POSTGRES_11"

    settings {
    tier = "db-f1-micro"

    ip_configuration {

    authorized_networks {
    name = "pub-network"
    value = "0.0.0.0/0"
    }
    }
    }
    }

    resource "google_sql_database_instance" "positive3" {
    name = "master-instance"
    database_version = "POSTGRES_11"
    region = "us-central1"

    settings {
    # Second-generation instance tiers are based on the machine
    # type. See argument reference below.
    tier = "db-f1-micro"

    ip_configuration {
    ipv4_enabled = true
    }
    }
    }

    resource "google_sql_database_instance" "positive4" {
    name = "master-instance"
    database_version = "POSTGRES_11"
    region = "us-central1"

    settings {
    # Second-generation instance tiers are based on the machine
    # type. See argument reference below.
    tier = "db-f1-micro"

    ip_configuration {}
    }
    }
  2. Modify the config to something like the following replacement patterns:

    resource "google_sql_database_instance" "negative1" {

    name = "private-instance-1"
    database_version = "POSTGRES_11"
    settings {
    ip_configuration {
    ipv4_enabled = false
    private_network = "some_private_network"
    }
    }
    }

    resource "google_sql_database_instance" "negative2" {
    name = "postgres-instance-2"
    database_version = "POSTGRES_11"

    settings {
    tier = "db-f1-micro"

    ip_configuration {

    authorized_networks {
    name = "some_trusted_network"
    value = "some_trusted_network_address"
    }

    authorized_networks {
    name = "another_trusted_network"
    value = "another_trusted_network_address"
    }
    }
    }
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴
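In addition to restricting networks, TLS can be required for client connections via require_ssl in the same ip_configuration block; a sketch assuming a google_compute_network named "private":

```hcl
resource "google_sql_database_instance" "private" {
  name             = "private-instance"
  database_version = "POSTGRES_11"

  settings {
    tier = "db-f1-micro"

    ip_configuration {
      # No public IPv4; reachable only over the private network
      ipv4_enabled    = false
      private_network = google_compute_network.private.id
      require_ssl     = true
    }
  }
}
```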

References:

Unlimited Capabilities For Pod Security Policy

Limit capabilities for a Pod Security Policy.

Option A: Limit the capabilities available to Kubernetes Pods

Provide a list of required_drop_capabilities in the spec of kubernetes_pod_security_policy.

Detailed Instructions

  1. Locate the following vulnerable pattern:

    resource "kubernetes_pod_security_policy" "example" {
    metadata {
    name = "terraform-example"
    }
    spec {
    privileged = false
    allow_privilege_escalation = false

    volumes = [
    "configMap",
    "emptyDir",
    "projected",
    "secret",
    "downwardAPI",
    "persistentVolumeClaim",
    ]

    run_as_user {
    rule = "MustRunAsNonRoot"
    }

    se_linux {
    rule = "RunAsAny"
    }

    supplemental_groups {
    rule = "MustRunAs"
    range {
    min = 1
    max = 65535
    }
    }

    fs_group {
    rule = "MustRunAs"
    range {
    min = 1
    max = 65535
    }
    }

    read_only_root_filesystem = true
    }
    }
  2. Modify the config to something like the following:

    resource "kubernetes_pod_security_policy" "example2" {
    metadata {
    name = "terraform-example"
    }
    spec {
    privileged = false
    allow_privilege_escalation = false
    required_drop_capabilities = ["ALL"]

    volumes = [
    "configMap",
    "emptyDir",
    "projected",
    "secret",
    "downwardAPI",
    "persistentVolumeClaim",
    ]

    run_as_user {
    rule = "MustRunAsNonRoot"
    }

    se_linux {
    rule = "RunAsAny"
    }

    supplemental_groups {
    rule = "MustRunAs"
    range {
    min = 1
    max = 65535
    }
    }

    fs_group {
    rule = "MustRunAs"
    range {
    min = 1
    max = 65535
    }
    }

    read_only_root_filesystem = true
    }
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴

References:

Unrestricted Security Group Ingress

AWS Security Group should restrict ingress access.

Option A: Make the following changes as necessary

  • An aws_security_group_rule of type "ingress" must not have "0.0.0.0/0" in its cidr_blocks list
  • aws_security_group must not have "0.0.0.0/0" in its ingress.cidr_blocks
  • ingress_cidr_blocks must not contain "0.0.0.0/0"

Detailed Instructions

  1. Locate one of the following vulnerable patterns:

    Vulnerable aws_security_group_rule.cidr_blocks pattern:

    resource "aws_security_group_rule" "positive1" {
    type = "ingress"
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    security_group_id = aws_security_group.default.id
    }

    Vulnerable aws_security_group.ingress.cidr_blocks pattern:

    resource "aws_security_group" "positive2" {
    ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    security_group_id = aws_security_group.default.id
    }
    }

    Vulnerable aws_security_group multiple ingress cidr_blocks pattern:

    resource "aws_security_group" "positive3" {
    ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["1.0.0.0/0"]
    }

    ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    }
    }

    Vulnerable ingress_cidr_blocks pattern:

    module "web_server_sg" {
    source = "terraform-aws-modules/security-group/aws"
    version = "4.3.0"

    name = "web-server"
    description = "Security group for web-server with HTTP ports open within VPC"
    vpc_id = "vpc-12345678"

    ingress_cidr_blocks = ["0.0.0.0/0"]
    }

    Vulnerable multiple elements in ingress_cidr_blocks pattern:

    module "web_server_sg" {
    source = "terraform-aws-modules/security-group/aws"
    version = "4.3.0"

    name = "web-server"
    description = "Security group for web-server with HTTP ports open within VPC"
    vpc_id = "vpc-12345678"

    ingress_cidr_blocks = ["10.10.0.0/16", "0.0.0.0/0"]
    }
  2. Modify the config to something like the following:

    Replacement aws_security_group_rule.cidr_blocks pattern:

    resource "aws_security_group_rule" "negative1" {
    type = "ingress"
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
    security_group_id = aws_security_group.default.id
    }

    Replacement aws_security_group.ingress.cidr_blocks pattern:

    resource "aws_security_group" "negative2" {
    ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
    security_group_id = aws_security_group.default.id
    }
    }

    Replacement aws_security_group multiple ingress cidr_blocks pattern:

    resource "aws_security_group" "negative3" {
    ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
    }

    ingress {
    from_port = 3306
    to_port = 3306
    protocol = "tcp"
    cidr_blocks = ["10.0.1.0/24"]
    }
    }

    Replacement ingress_cidr_blocks pattern:

    module "web_server_sg" {
    source = "terraform-aws-modules/security-group/aws"
    version = "4.3.0"

    name = "web-server"
    description = "Security group for web-server with HTTP ports open within VPC"
    vpc_id = "vpc-12345678"

    ingress_cidr_blocks = ["10.10.0.0/16"]
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴
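Where the traffic originates from other AWS resources, referencing a source security group instead of CIDR ranges sidesteps the problem entirely; a sketch assuming an aws_security_group named "app" for the callers:

```hcl
resource "aws_security_group_rule" "mysql_from_app" {
  type      = "ingress"
  from_port = 3306
  to_port   = 3306
  protocol  = "tcp"

  # Allow only members of the "app" security group, not IP ranges
  source_security_group_id = aws_security_group.app.id
  security_group_id        = aws_security_group.default.id
}
```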

References:

VM With Full Cloud Access

A Google VM instance is configured to use the default service account with full access to all Cloud APIs.

Option A: Remove Cloud Platform from Service Account Scopes for Google Cloud Platform (GCP) VMs

service_account.scopes should not contain "cloud-platform".

Detailed Instructions

  1. Locate the following vulnerable pattern:

    resource "google_compute_instance" "positive1" {
    name = "test"
    machine_type = "e2-medium"
    zone = "us-central1-a"

    boot_disk {
    initialize_params {
    image = "debian-cloud/debian-9"
    }
    }

    network_interface {
    network = "default"
    access_config {
    // Ephemeral IP
    }
    }

    service_account {
    scopes = ["userinfo-email", "compute-ro", "storage-ro", "cloud-platform"]
    }
    }
  2. Modify the config to something like the following:

    resource "google_compute_instance" "negative1" {
    name = "test"
    machine_type = "e2-medium"
    zone = "us-central1-a"

    boot_disk {
    initialize_params {
    image = "debian-cloud/debian-9"
    }
    }

    network_interface {
    network = "default"
    access_config {
    // Ephemeral IP
    }
    }

    service_account {
    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
    }
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴
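Rather than trimming scopes on the default service account, a dedicated, narrowly scoped service account can be attached to the instance; a sketch with illustrative names:

```hcl
resource "google_service_account" "vm" {
  account_id   = "vm-runtime"
  display_name = "VM runtime service account"
}

resource "google_compute_instance" "example" {
  name         = "test"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
  }

  # Dedicated account with limited scopes instead of the default one
  service_account {
    email  = google_service_account.vm.email
    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
  }
}
```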

References:

VPC Default Security Group Accepts All Traffic

Default Security Group attached to every VPC should restrict all traffic.

Option A: Remove any ingress or egress rules from AWS Default Security Group

aws_default_security_group should not have any ingress or egress rules defined.

Detailed Instructions

  1. Locate one of the following vulnerable patterns:

    resource "aws_vpc" "mainvpc" {
    cidr_block = "10.1.0.0/16"
    }

    resource "aws_default_security_group" "default" {
    vpc_id = aws_vpc.mainvpc.id

    ingress = [
    {
    protocol = -1
    self = true
    from_port = 0
    to_port = 0
    }
    ]

    egress = [
    {
    from_port = 0
    to_port = 0
    protocol = "-1"
    }
    ]
    }

    resource "aws_vpc" "mainvpc3" {
    cidr_block = "10.1.0.0/16"
    }

    resource "aws_default_security_group" "default3" {
    vpc_id = aws_vpc.mainvpc3.id

    ingress = [
    {
    protocol = -1
    self = true
    from_port = 0
    to_port = 0
    ipv6_cidr_blocks = ["::/0"]
    }
    ]

    egress = [
    {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    }
    ]
    }
  2. Modify the config to something like the following:

    resource "aws_vpc" "mainvpc2" {
    cidr_block = "10.1.0.0/16"
    }

    resource "aws_default_security_group" "default2" {
    vpc_id = aws_vpc.mainvpc2.id
    }
  3. Test it

  4. Ship it 🚢 and relax 🌴

References:

More information: