Insecure Access Control
Fixing Insecure Access Control
About Insecure Access Control
What is improper access control?
Improper access control is a vulnerability that occurs when a system does not properly restrict or enforce access to resources, such as files, directories, network resources, or application functions.
Examples of improper access control vulnerabilities include:
- Weak access controls: When access controls are weak or easily bypassed, attackers can gain access to sensitive resources or data by exploiting security weaknesses.
- Insufficient authorization checks: When authorization checks are insufficient, it can allow unauthorized users to access sensitive data or resources, or to perform actions that they are not authorized to do.
- Overly permissive access: When access controls are overly permissive, they can allow users to access resources or data that they do not need, increasing the risk of data breaches or other security incidents.
Check out these videos for a high-level explanation:
Missing function level access control
Missing object level access control
What is the impact of improper access control?
Improper access control can lead to various security threats, such as:
- Data breaches: Improper access control can allow attackers to access sensitive data, leading to data breaches, data loss, or unauthorized access to confidential information.
- Unauthorized access to resources: Attackers can exploit improper access control to gain unauthorized access to resources, such as servers, databases, and applications.
- Account takeover: Attackers can use improper access control to take over user accounts and gain access to sensitive data or resources.
How to prevent improper access control?
Here are some measures that can help ensure proper access control:
- Strong access controls: Implement strong access controls that restrict access to sensitive resources or data based on user roles and permissions.
- Proper user authentication and authorization: Implement proper user authentication and authorization mechanisms to ensure that only authorized users can access sensitive data and resources.
- Input validation and sanitization: Validate and sanitize user input before using it to access internal objects or data. Use regular expressions or input filters to remove or encode any special characters that could be used to access sensitive data or resources.
- Least privilege: Use the principle of least privilege to restrict access to resources to only what is necessary for each user role. This can help prevent attackers from gaining access to resources that they do not need to access.
- Regular security audits: Regularly audit your system for security vulnerabilities, including improper access control vulnerabilities. Use automated tools and manual testing to identify potential issues and fix them before they can be exploited.
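Applied to the Terraform configurations later in this guide, least privilege means enumerating only the actions and resources a role actually needs. A minimal sketch (the policy name, actions, and bucket ARN are illustrative):

```hcl
# Hypothetical least-privilege policy: read-only access to one bucket,
# instead of wildcard actions or resources.
resource "aws_iam_policy" "read_reports" {
  name = "read-reports"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:ListBucket"
        ]
        Resource = [
          "arn:aws:s3:::example-reports",
          "arn:aws:s3:::example-reports/*"
        ]
      }
    ]
  })
}
```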
References
Taxonomies
- OWASP Top 10 - A01 Broken Access Control
- CWE-284: Improper Access Control
- CWE-285: Improper Authorization
Explanation & Prevention
- OWASP: Broken Access Control
- OWASP: Authorization Testing
- OWASP: ASVS - V4 Access Control
- OWASP: Proactive Controls - C7 Enforce Access Controls
- OWASP: Authorization Cheat Sheet
Related CVEs
Training
In the context of Terraform, this vulnerability class identifies resources and roles that are granted more permissions than they need.
Lambda Function With Privileged Role
AWS Lambda Functions shouldn't have privileged permissions.
Rule-specific references:
Option A: Make sure that privileged permissions do not exist for Lambda Functions
- aws_lambda_function.role should not have any privileged permissions through an attached inline policy
- aws_lambda_function.role should not have any privileged permissions through an attached managed policy
- aws_lambda_function.role should not have any privileged permissions
Locate the following vulnerable patterns:
resource "aws_lambda_function" "positivefunction1" {
filename = "lambda_function_payload.zip"
function_name = "lambda_function_name"
role = aws_iam_role.positiverole1.arn
handler = "exports.test"
source_code_hash = filebase64sha256("lambda_function_payload.zip")
runtime = "nodejs12.x"
tags = {
Name = "lambda"
}
environment {
variables = {
foo = "bar"
}
}
}
resource "aws_lambda_function" "positivefunction2" {
filename = "lambda_function_payload.zip"
function_name = "lambda_function_name"
role = aws_iam_role.positiverole2.arn
handler = "exports.test"
source_code_hash = filebase64sha256("lambda_function_payload.zip")
runtime = "nodejs12.x"
tags = {
Name = "lambda"
}
environment {
variables = {
foo = "bar"
}
}
}
resource "aws_iam_role" "positiverole1" {
name = "positiverole1"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["some:action"],
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Resource": "*",
"Sid": ""
}
]
}
EOF
tags = {
tag-key = "tag-value"
}
}
resource "aws_iam_role" "positiverole2" {
name = "positiverole2"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["some:action"],
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Resource": "*",
"Sid": ""
}
]
}
EOF
tags = {
tag-key = "tag-value"
}
}
resource "aws_iam_role_policy" "positiveinlinepolicy1" {
name = "positiveinlinepolicy1"
role = aws_iam_role.positiverole1.id
# Terraform's "jsonencode" function converts a
# Terraform expression result to valid JSON syntax.
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"ec2:Describe*",
"iam:*"
]
Effect = "Allow"
Resource = "*"
},
]
})
}
resource "aws_iam_policy" "positivecustomermanagedpolicy1" {
name = "positivecustomermanagedpolicy1"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*",
"sts:AssumeRole"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_policy" "positivecustomermanagedpolicy2" {
name = "positivecustomermanagedpolicy2"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*",
"sts:AssumeRole"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
# Mapping of customer managed policy defined in this template set
resource "aws_iam_role_policy_attachment" "positiverolepolicyattachment1" {
role = aws_iam_role.positiverole1.name
policy_arn = aws_iam_policy.positivecustomermanagedpolicy1.arn
}
resource "aws_iam_policy_attachment" "positivedirectpolicyattachment1" {
roles = [aws_iam_role.positiverole1.name]
policy_arn = aws_iam_policy.positivecustomermanagedpolicy2.arn
}
# Mapping of pre-existing policy arns
resource "aws_iam_role_policy_attachment" "positiverolepolicyattachment2" {
role = aws_iam_role.positiverole2.name
policy_arn = "arn:aws:iam::policy/positivepreexistingpolicyarn1"
}
resource "aws_iam_policy_attachment" "positivedirectpolicyattachment2" {
roles = [aws_iam_role.positiverole2.name]
policy_arn = "arn:aws:iam::policy/AmazonPersonalizeFullAccess"
}
Modify the config to something like the following:
resource "aws_lambda_function" "negativefunction1" {
filename = "lambda_function_payload.zip"
function_name = "lambda_function_name"
role = aws_iam_role.negativerole1.arn
handler = "exports.test"
source_code_hash = filebase64sha256("lambda_function_payload.zip")
runtime = "nodejs12.x"
tags = {
Name = "lambda"
}
environment {
variables = {
foo = "bar"
}
}
}
resource "aws_lambda_function" "negativefunction2" {
filename = "lambda_function_payload.zip"
function_name = "lambda_function_name"
role = aws_iam_role.negativerole2.arn
handler = "exports.test"
source_code_hash = filebase64sha256("lambda_function_payload.zip")
runtime = "nodejs12.x"
tags = {
Name = "lambda"
}
environment {
variables = {
foo = "bar"
}
}
}
resource "aws_iam_role" "negativerole1" {
name = "negativerole1"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["some:action"],
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Resource": "*",
"Sid": ""
}
]
}
EOF
tags = {
tag-key = "tag-value"
}
}
resource "aws_iam_role" "negativerole2" {
name = "negativerole2"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["some:action"],
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Resource": "*",
"Sid": ""
}
]
}
EOF
tags = {
tag-key = "tag-value"
}
}
resource "aws_iam_role_policy" "negativeinlinepolicy1" {
name = "negativeinlinepolicy1"
role = aws_iam_role.negativerole1.id
# Terraform's "jsonencode" function converts a
# Terraform expression result to valid JSON syntax.
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"ec2:Describe*",
"s3:GetObject"
]
Effect = "Allow"
Resource = "*"
},
]
})
}
resource "aws_iam_policy" "negativecustomermanagedpolicy1" {
name = "negativecustomermanagedpolicy1"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_policy" "negativecustomermanagedpolicy2" {
name = "negativecustomermanagedpolicy2"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"lambda:CreateFunction"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
# Mapping of customer managed policy defined in this template set
resource "aws_iam_role_policy_attachment" "negativerolepolicyattachment1" {
role = aws_iam_role.negativerole1.name
policy_arn = aws_iam_policy.negativecustomermanagedpolicy1.arn
}
resource "aws_iam_policy_attachment" "negativedirectpolicyattachment1" {
roles = [aws_iam_role.negativerole1.name]
policy_arn = aws_iam_policy.negativecustomermanagedpolicy2.arn
}
# Mapping of pre-existing policy arns
resource "aws_iam_role_policy_attachment" "negativerolepolicyattachment2" {
role = aws_iam_role.negativerole2.name
policy_arn = "arn:aws:iam::policy/negativepreexistingpolicyarn1"
}
resource "aws_iam_policy_attachment" "negativedirectpolicyattachment2" {
roles = [aws_iam_role.negativerole2.name]
policy_arn = "arn:aws:iam::policy/DenyAll"
}
Test it
Ship it 🚢 and relax 🌴
Legacy ABAC permissions
Option A: Enable RBAC
- Go through the issues that GuardRails identified
- Locate the enable_legacy_abac argument in the google_container_cluster resource
- Set enable_legacy_abac to false, or remove the argument entirely
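For example, a minimal google_container_cluster sketch with legacy ABAC disabled (the cluster name and location are illustrative):

```hcl
resource "google_container_cluster" "example" {
  name     = "example-cluster"
  location = "us-central1"

  # Defaults to false; setting it explicitly documents that RBAC,
  # not legacy ABAC, is used for authorization.
  enable_legacy_abac = false
}
```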
PSP With Added Capabilities
Kubernetes Pod Security Policy should not have added capabilities.
Rule-specific references:
Option A: Allowed Capabilities should not be included
The kubernetes_pod_security_policy resource spec should not include the optional allowed_capabilities list.
Locate the following vulnerable pattern:
resource "kubernetes_pod_security_policy" "example" {
metadata {
name = "terraform-example"
}
spec {
allowed_capabilities = ["NET_BIND_SERVICE"]
privileged = false
allow_privilege_escalation = false
volumes = [
"configMap",
"emptyDir",
"projected",
"secret",
"downwardAPI",
"persistentVolumeClaim",
]
run_as_user {
rule = "MustRunAsNonRoot"
}
se_linux {
rule = "RunAsAny"
}
supplemental_groups {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
fs_group {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
read_only_root_filesystem = true
}
}
Modify the config to something like the following:
resource "kubernetes_pod_security_policy" "example2" {
metadata {
name = "terraform-example"
}
spec {
privileged = false
allow_privilege_escalation = false
volumes = [
"configMap",
"emptyDir",
"projected",
"secret",
"downwardAPI",
"persistentVolumeClaim",
]
run_as_user {
rule = "MustRunAsNonRoot"
}
se_linux {
rule = "RunAsAny"
}
supplemental_groups {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
fs_group {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
read_only_root_filesystem = true
}
}
Test it
Ship it 🚢 and relax 🌴
Service Account with Improper Privileges
Service accounts should not have improper privileges like admin, editor, owner, or write roles.
This rule is violated if any of the following conditions are true:
- google_iam_policy binding.role has admin, editor, owner, or write privileges for a "serviceAccount" member
- google_project_iam_binding role has admin, editor, owner, or write privileges for a "serviceAccount" member
- google_project_iam_member role has admin, editor, owner, or write privileges for a "serviceAccount" member
Rule-specific references:
Option A: Make sure none of the above conditions are true
Replace each of the following vulnerable patterns with the matching non-vulnerable pattern.
Locate one of the following vulnerable patterns:
Vulnerable google_iam_policy pattern:
data "google_iam_policy" "admin" {
binding {
role = "roles/editor"
members = [
"serviceAccount:[email protected]",
]
}
}
Vulnerable google_project_iam_binding pattern:
resource "google_project_iam_binding" "project1" {
project = "your-project-id"
role = "roles/container.admin"
members = [
"serviceAccount:[email protected]",
]
condition {
title = "expires_after_2019_12_31"
description = "Expiring at midnight of 2019-12-31"
expression = "request.time < timestamp(\"2020-01-01T00:00:00Z\")"
}
}
Vulnerable google_project_iam_member pattern:
resource "google_project_iam_member" "project2" {
project = "your-project-id"
role = "roles/editor"
member = "serviceAccount:[email protected]"
}
Modify the config to one of the following non-vulnerable patterns:
Replacement google_iam_policy pattern:
data "google_iam_policy" "policy5" {
binding {
role = "roles/apigee.runtimeAgent"
members = [
"user:[email protected]",
]
}
}
Replacement google_project_iam_binding pattern:
resource "google_project_iam_binding" "project3" {
project = "your-project-id"
role = "roles/apigee.runtimeAgent"
members = [
"user:[email protected]",
]
condition {
title = "expires_after_2019_12_31"
description = "Expiring at midnight of 2019-12-31"
expression = "request.time < timestamp(\"2020-01-01T00:00:00Z\")"
}
}
Replacement google_project_iam_member pattern:
resource "google_project_iam_member" "project4" {
project = "your-project-id"
role = "roles/apigee.runtimeAgent"
member = "user:[email protected]"
}
Test it
Ship it 🚢 and relax 🌴
ECS Task Definition Network Mode Not Recommended
The AWS ECS Task Definition (aws_ecs_task_definition) network mode (network_mode) should be "awsvpc" for all Task Definitions. AWS VPCs provide the controls to facilitate a formal process for approving and testing all network connections and changes to firewall and router configurations.
Rule-specific references:
Option A: Consider changing the Network Mode to AWS VPC
If the aws_ecs_task_definition.network_mode value is not already set to "awsvpc", consider changing it. In awsvpc mode each task receives its own elastic network interface, so it can be assigned its own security group and network controls.
Locate the following vulnerable pattern:
resource "aws_ecs_task_definition" "positive1" {
family = "service"
network_mode = "none"
volume {
name = "service-storage"
host_path = "/ecs/service-storage"
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-west-2a, us-west-2b]"
}
}
Modify the config to something like the following:
resource "aws_ecs_task_definition" "negative1" {
family = "service"
network_mode = "awsvpc"
volume {
name = "service-storage"
host_path = "/ecs/service-storage"
}
placement_constraints {
type = "memberOf"
expression = "attribute:ecs.availability-zone in [us-west-2a, us-west-2b]"
}
}
Test it
Ship it 🚢 and relax 🌴
EKS node group remote access disabled
When remote access is enabled with an SSH key but no source security groups are specified, SSH access to the node group is left open to the entire internet (0.0.0.0/0).
Rule-specific references:
Option A: Define source security group IDs for EKS node group remote access
In the aws_eks_node_group resource, remote_access.source_security_group_ids should be defined and not null.
Locate the following vulnerable pattern:
resource "aws_eks_node_group" "positive" {
cluster_name = aws_eks_cluster.example.name
node_group_name = "example"
node_role_arn = aws_iam_role.example.arn
subnet_ids = aws_subnet.example[*].id
scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}
remote_access {
ec2_ssh_key = "my-rsa-key"
}
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
aws_iam_role_policy_attachment.example-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.example-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.example-AmazonEC2ContainerRegistryReadOnly,
]
}
Modify the config to something like the following:
resource "aws_eks_node_group" "negative" {
cluster_name = aws_eks_cluster.example.name
node_group_name = "example"
node_role_arn = aws_iam_role.example.arn
subnet_ids = aws_subnet.example[*].id
scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}
remote_access {
ec2_ssh_key = "my-rsa-key"
source_security_group_ids = ["sg-213120ASNE"]
}
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
aws_iam_role_policy_attachment.example-AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.example-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.example-AmazonEC2ContainerRegistryReadOnly,
]
}
Test it
Ship it 🚢 and relax 🌴
Container Runs Unmasked
Check if a container has full (unmasked) access to the host's /proc filesystem, which would allow it to retrieve sensitive information and possibly change kernel parameters at runtime.
Rule-specific references:
Option A: Make sure that Allowed Proc Mount Types contains the value "Default"
If kubernetes_pod_security_policy.spec.allowed_proc_mount_types contains the value "Unmasked", change it to "Default".
Locate the following vulnerable pattern:
resource "kubernetes_pod_security_policy" "example" {
metadata {
name = "terraform-example"
}
spec {
privileged = false
allow_privilege_escalation = false
allowed_proc_mount_types = ["Unmasked"]
volumes = [
"configMap",
"emptyDir",
"projected",
"secret",
"downwardAPI",
"persistentVolumeClaim",
]
run_as_user {
rule = "MustRunAsNonRoot"
}
se_linux {
rule = "RunAsAny"
}
supplemental_groups {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
fs_group {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
read_only_root_filesystem = true
}
}
Modify the config to something like the following:
resource "kubernetes_pod_security_policy" "example" {
metadata {
name = "terraform-example"
}
spec {
privileged = false
allow_privilege_escalation = false
allowed_proc_mount_types = ["Default"]
volumes = [
"configMap",
"emptyDir",
"projected",
"secret",
"downwardAPI",
"persistentVolumeClaim",
]
run_as_user {
rule = "MustRunAsNonRoot"
}
se_linux {
rule = "RunAsAny"
}
supplemental_groups {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
fs_group {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
read_only_root_filesystem = true
}
}
Test it
Ship it 🚢 and relax 🌴
Containers With Added Capabilities
Kubernetes Pod should not have extra capabilities allowed.
Rule-specific references:
Option A: Security Context Capabilities Add should not be defined
Remove any security_context.capabilities.add values from the kubernetes_pod.spec resource.
Locate the following vulnerable pattern:
resource "kubernetes_pod" "positive" {
metadata {
name = "terraform-example"
}
spec {
container {
image = "nginx:1.7.9"
name = "example"
security_context {
capabilities {
add = ["NET_BIND_SERVICE"]
}
}
env {
name = "environment"
value = "test"
}
port {
container_port = 8080
}
liveness_probe {
http_get {
path = "/nginx_status"
port = 80
http_header {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
dns_config {
nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
searches = ["example.com"]
option {
name = "ndots"
value = 1
}
option {
name = "use-vc"
}
}
dns_policy = "None"
}
}
Modify the config to something like the following:
resource "kubernetes_pod" "negative" {
metadata {
name = "terraform-example"
}
spec {
container {
image = "nginx:1.7.9"
name = "example"
env {
name = "environment"
value = "test"
}
port {
container_port = 8080
}
liveness_probe {
http_get {
path = "/nginx_status"
port = 80
http_header {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
dns_config {
nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
searches = ["example.com"]
option {
name = "ndots"
value = 1
}
option {
name = "use-vc"
}
}
dns_policy = "None"
}
}
Test it
Ship it 🚢 and relax 🌴
VM With Full Cloud Access
A Google VM instance is configured to use the default service account with full access to all Cloud APIs.
Rule-specific references:
Option A: Remove Cloud Platform from Service Account Scopes for Google Cloud Platform (GCP) VMs
service_account.scopes should not contain cloud-platform.
Locate the following vulnerable pattern:
resource "google_compute_instance" "positive1" {
name = "test"
machine_type = "e2-medium"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro", "cloud-platform"]
}
}
Modify the config to something like the following:
resource "google_compute_instance" "negative1" {
name = "test"
machine_type = "e2-medium"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
service_account {
scopes = ["userinfo-email", "compute-ro", "storage-ro"]
}
}
Test it
Ship it 🚢 and relax 🌴
Unlimited Capabilities For Pod Security Policy
Limit capabilities for a Pod Security Policy.
Rule-specific references:
- Kubernetes Pod Security Policy - Required Drop Capabilities
- Docker Security Quick Reference covers Linux capabilities and many more topics around securing your containers
- Holistic Info-Sec for Web Developers: Capabilities risks, countermeasures
Option A: Limit the capabilities available to Kubernetes Pods
Provide a required_drop_capabilities list in the spec of kubernetes_pod_security_policy.
Locate the following vulnerable pattern:
resource "kubernetes_pod_security_policy" "example" {
metadata {
name = "terraform-example"
}
spec {
privileged = false
allow_privilege_escalation = false
volumes = [
"configMap",
"emptyDir",
"projected",
"secret",
"downwardAPI",
"persistentVolumeClaim",
]
run_as_user {
rule = "MustRunAsNonRoot"
}
se_linux {
rule = "RunAsAny"
}
supplemental_groups {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
fs_group {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
read_only_root_filesystem = true
}
}
Modify the config to something like the following:
resource "kubernetes_pod_security_policy" "example2" {
metadata {
name = "terraform-example"
}
spec {
privileged = false
allow_privilege_escalation = false
required_drop_capabilities = ["ALL"]
volumes = [
"configMap",
"emptyDir",
"projected",
"secret",
"downwardAPI",
"persistentVolumeClaim",
]
run_as_user {
rule = "MustRunAsNonRoot"
}
se_linux {
rule = "RunAsAny"
}
supplemental_groups {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
fs_group {
rule = "MustRunAs"
range {
min = 1
max = 65535
}
}
read_only_root_filesystem = true
}
}
Test it
Ship it 🚢 and relax 🌴
S3 Bucket With All Permissions
S3 Buckets must not grant all permissions, to prevent leaking private information to the entire internet or allowing unauthorized data tampering or deletion.
Rule-specific references:
Option A: Remove the Action All from a Policy Statement Where Effect is Allow
Where the Effect is "Allow", the Action should not be or contain all ("*").
Remove or replace any S3 bucket policy.Statement.Action specifying all ("*") where Effect is "Allow".
Locate one of the following vulnerable patterns:
Vulnerable pattern via resource:
resource "aws_s3_bucket" "positive1" {
bucket = "S3B_181355"
acl = "private"
policy = <<EOF
{
"Id": "id113",
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:*"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::S3B_181355/*",
"Principal": "*"
}
]
}
EOF
}
Vulnerable pattern via module:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
policy = <<EOF
{
"Id": "id113",
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:*"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::S3B_181355/*",
"Principal": "*"
}
]
}
EOF
}
Modify the config to something like the following:
Replacement pattern via resource:
resource "aws_s3_bucket" "negative1" {
bucket = "S3B_181355"
acl = "private"
policy = <<EOF
{
"Id": "id113",
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:putObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::S3B_181355/*",
"Principal": "*"
}
]
}
EOF
}
Replacement pattern via module:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
policy = <<EOF
{
"Id": "id113",
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:putObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::S3B_181355/*",
"Principal": "*"
}
]
}
EOF
}
Test it
Ship it 🚢 and relax 🌴
S3 Bucket Access to Any Principal
S3 Buckets must not allow actions for all Principals, to prevent leaking private information to the entire internet or allowing unauthorized data tampering or deletion. This means the Effect must not be "Allow" when there are all ("*") Principals.
Rule-specific references:
Option A: Change Effect to Deny when all Principals exist
policy.Statement should not contain a map with Effect of "Allow" where there are all ("*") Principals.
Locate the following vulnerable pattern:
resource "aws_s3_bucket_policy" "positive1" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Modify the config to something like the following:
resource "aws_s3_bucket_policy" "negative1" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
The policy could also be expressed in other forms.
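For instance, the same deny statement can be written with Terraform's jsonencode function instead of a heredoc (a sketch; the resource name is illustrative):

```hcl
resource "aws_s3_bucket_policy" "negative2" {
  bucket = aws_s3_bucket.b.id

  # jsonencode keeps the policy in native HCL, so terraform validate
  # catches syntax errors that a heredoc string would hide.
  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "MYBUCKETPOLICY"
    Statement = [
      {
        Sid      = "IPAllow"
        Effect   = "Deny"
        Action   = "s3:*"
        Resource = "arn:aws:s3:::my_tf_test_bucket/*"
        Condition = {
          IpAddress = { "aws:SourceIp" = "8.8.8.8/32" }
        }
      }
    ]
  })
}
```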
Test it
Ship it 🚢 and relax 🌴
S3 Bucket Allows Delete Action From All Principals
S3 Buckets must not allow the delete action for all Principals, to prevent leaking private information to the entire internet or allowing unauthorized data tampering or deletion. This means the Effect must not be "Allow" when the Action is "s3:DeleteObject" for all Principals.
Rule-specific references:
Option A: Make sure to not Allow Delete Action for All Principals
policy.Statement should not contain a map with Effect of "Allow" where there are all ("*") Principals when an Action property with the value "s3:DeleteObject" exists.
Locate the following vulnerable patterns:
Vulnerable Pattern 1:
resource "aws_s3_bucket_policy" "positive1" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:DeleteObject",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Vulnerable Pattern 2:
resource "aws_s3_bucket_policy" "positive2" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:DeleteObject",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Another Vulnerable Pattern:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:DeleteObject",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Modify the config to something like the following:
Replacement Pattern 1:
resource "aws_s3_bucket_policy" "negative1" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Replacement Pattern 2:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Test it
Ship it 🚢 and relax 🌴
S3 Bucket Allows Get Action From All Principals
S3 Buckets must not allow the get action for all Principals, to prevent leaking private information to the entire internet or allowing unauthorized data tampering or deletion. This means the Effect must not be "Allow" when the Action is "s3:GetObject" for all Principals.
Rule-specific references:
Option A: Make sure to not Allow Get Action for All Principals
policy.Statement should not contain a map with Effect of "Allow" where there are all ("*") Principals when an Action property with the value "s3:GetObject" exists.
Locate one of the following vulnerable patterns:
Vulnerable Pattern 1:
resource "aws_s3_bucket" "positive1" {
bucket = "my_tf_test_bucket"
}
resource "aws_s3_bucket_policy" "positive2" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
resource "aws_s3_bucket_policy" "positive3" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Vulnerable Pattern 2:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Modify the config to something like the following:
Replacement Pattern 1:
resource "aws_s3_bucket" "negative1" {
bucket = "my_tf_test_bucket"
}
resource "aws_s3_bucket_policy" "negative2" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Replacement Pattern 2:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Test it
Ship it 🚢 and relax 🌴
S3 Bucket Allows Put Action From All Principals
S3 buckets must not allow the Put Action from all Principals, to prevent leaking private information to the entire internet or allowing unauthorized data tampering or deletion. This means the Effect must not be "Allow" when the Action is "Put" for all Principals.
Rule-specific references:
Option A: Make sure to not Allow Put Action for All Principals
policy.Statement should not contain a statement with an Effect of "Allow" and an all ("*") Principal when an Action property with the value "s3:PutObject" exists.
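The rule above can be sketched as a small Python check against a parsed bucket policy (a rough illustration; the function name and structure are assumptions, not any scanner's actual API):

```python
import json

def allows_put_to_all_principals(policy_json: str) -> bool:
    """Return True if any statement Allows s3:PutObject for all principals."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        # Covers both the "*" shorthand and {"AWS": "*"} forms
        all_principals = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if all_principals and "s3:PutObject" in actions:
            return True
    return False

vulnerable = '{"Statement": [{"Effect": "Allow", "Principal": "*", "Action": "s3:PutObject"}]}'
print(allows_put_to_all_principals(vulnerable))  # True
```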
Locate the following vulnerable patterns:
Vulnerable Pattern 1:
resource "aws_s3_bucket_policy" "positive1" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Vulnerable Pattern 2:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Modify the config to something like the following:
Replacement Pattern 1:
resource "aws_s3_bucket_policy" "negative1" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Replacement Pattern 2:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Test it
Ship it 🚢 and relax 🌴
S3 Bucket ACL Allows Read Or Write to All Users
S3 bucket with public READ/WRITE access.
Rule-specific references:
Option A: Make sure not to have ACL set to Public Read/Write
The aws_s3_bucket.acl property should not have the value "public-read" or "public-read-write", and module "s3_bucket" should not have an acl property with the value "public-read" or "public-read-write".
Locate the following vulnerable patterns:
Vulnerable pattern resource "public-read":
resource "aws_s3_bucket" "positive1" {
bucket = "my-tf-test-bucket"
acl = "public-read"
tags = {
Name = "My bucket"
Environment = "Dev"
}
versioning {
enabled = true
}
}
Vulnerable pattern resource "public-read-write":
resource "aws_s3_bucket" "positive2" {
bucket = "my-tf-test-bucket"
acl = "public-read-write"
tags = {
Name = "My bucket"
Environment = "Dev"
}
versioning {
enabled = true
}
}
Vulnerable pattern module "public-read":
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "public-read"
versioning = {
enabled = true
}
}
Vulnerable pattern module "public-read-write":
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "public-read-write"
versioning = {
enabled = true
}
}
Modify the config to something like the following:
Replacement pattern resource "private":
resource "aws_s3_bucket" "negative1" {
bucket = "my-tf-test-bucket"
acl = "private"
tags = {
Name = "My bucket"
Environment = "Dev"
}
versioning {
enabled = true
}
}
Replacement pattern module "private":
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
}
Test it
Ship it 🚢 and relax 🌴
S3 Bucket ACL Allows Read to Any Authenticated User
Misconfigured S3 buckets can leak private information to the entire internet or allow unauthorized data tampering/deletion.
Rule-specific references:
Option A: An S3 Bucket Should Not Have a Permission of Authenticated Read
Neither the resource aws_s3_bucket.acl nor the module s3_bucket.acl should have a value of "authenticated-read".
Remove or replace the value "authenticated-read" in any resource aws_s3_bucket.acl or module s3_bucket.acl configuration with a less permissive setting, such as "private".
Locate one of the following vulnerable patterns:
Vulnerable pattern resource:
resource "aws_s3_bucket" "positive1" {
bucket = "my-tf-test-bucket"
acl = "authenticated-read"
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
Vulnerable pattern module:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "authenticated-read"
versioning = {
enabled = true
}
}
Modify the config to something like the following:
Replacement pattern resource:
resource "aws_s3_bucket" "negative1" {
bucket = "my-tf-test-bucket"
acl = "private"
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
Replacement pattern module:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
}
Test it
Ship it 🚢 and relax 🌴
S3 Bucket Allows All Actions From All Principals
S3 buckets must not allow all Actions (an Action containing "*") from all Principals, to prevent leaking private information to the entire internet or allowing unauthorized data tampering or deletion. This means the Effect must not be "Allow" when the Action contains "*" for all Principals.
Rule-specific references:
Option A: Make sure that S3 Bucket does not allow all actions from all Principals
If an aws_s3_bucket_policy has a Statement whose Effect is "Allow" and whose Principal and Action contain "*", take the following action.
Locate one of the following vulnerable patterns:
Vulnerable pattern resource:
resource "aws_s3_bucket_policy" "positive1" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Vulnerable pattern module:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Vulnerable pattern resource:
resource "aws_s3_bucket_policy" "positive2" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Modify the config to something like the following:
Replacement pattern resource:
resource "aws_s3_bucket_policy" "negative2" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Replacement pattern module:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Test it
Ship it 🚢 and relax 🌴
S3 Bucket Allows List Action From All Principals
S3 buckets must not allow the List Action ("s3:ListObjects") from all Principals, to prevent leaking private information to the entire internet or allowing unauthorized data tampering or deletion. This means the Effect must not be "Allow" when the Action is "s3:ListObjects" for all Principals.
Rule-specific references:
Option A: Remove the List Action from a Principal specifying All
If an AWS S3 bucket policy has a Statement whose Effect is "Allow", whose Principal contains "*", and whose Action has the value "s3:ListObjects", take the following action.
Remove any S3 bucket policy.Statement.Action specifying "s3:ListObjects" where the S3 bucket policy.Statement.Principal contains a value of "*".
Locate one of the following vulnerable patterns:
Vulnerable pattern resource:
resource "aws_s3_bucket_policy" "positive1" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:ListObjects",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Vulnerable pattern module:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:ListObjects",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Vulnerable pattern resource:
resource "aws_s3_bucket_policy" "positive2" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:ListObjects",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Modify the config to something like the following:
Replacement pattern resource:
resource "aws_s3_bucket_policy" "negative1" {
bucket = aws_s3_bucket.b.id
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Replacement pattern module:
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.7.0"
bucket = "my-s3-bucket"
acl = "private"
versioning = {
enabled = true
}
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
Test it
Ship it 🚢 and relax 🌴
Google Project IAM Member Service Account Has Admin Role
Verifies that no Google Project IAM member service account has an admin role associated with it.
Rule-specific references:
Option A: Remove Service Account Admin
If the value of google_project_iam_member.role contains "roles/iam.serviceAccountAdmin", then the value of google_project_iam_member.member must not start with "serviceAccount:", and no value in the google_project_iam_member.members list may start with "serviceAccount:".
Locate the following vulnerable pattern:
resource "google_project_iam_member" "positive1" {
project = "your-project-id"
role = "roles/iam.serviceAccountAdmin"
member = "serviceAccount:[email protected]"
}
resource "google_project_iam_member" "positive2" {
project = "your-project-id"
role = "roles/iam.serviceAccountAdmin"
members = ["user:[email protected]", "serviceAccount:[email protected]"]
}
Modify the config to something like the following:
resource "google_project_iam_member" "negative1" {
project = "your-project-id"
role = "roles/editor"
member = "user:[email protected]"
}
Test it
Ship it 🚢 and relax 🌴
IAM Policies With Full Privileges
IAM policies should not allow full administrative privileges (for all resources).
Rule-specific references:
Option A: Remove All actions from Policy Statements with Resource All
If policy.Statement.Resource has a value of "*" (All), policy.Statement.Action must not have a value of "*" (All).
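As a rough illustration of this check, a minimal Python sketch (the function name is illustrative, not any scanner's API) that flags a statement with wildcard action and wildcard resource:

```python
import json

def grants_full_privileges(policy_json: str) -> bool:
    """Flag Allow statements where both Action and Resource are the "*" wildcard."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Both elements may be a bare string or a list
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions and "*" in resources:
            return True
    return False

full_admin = '{"Statement": [{"Effect": "Allow", "Action": ["*"], "Resource": "*"}]}'
print(grants_full_privileges(full_admin))  # True
```

Narrowing the action to a specific one, as in the replacement pattern below, makes the check pass.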
Locate the following vulnerable pattern:
resource "aws_iam_role_policy" "positive1" {
name = "apigateway-cloudwatch-logging"
role = aws_iam_role.apigateway_cloudwatch_logging.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["*"],
"Resource": "*"
}
]
}
EOF
}
Modify the config to something like the following where a single specific action is specified:
resource "aws_iam_role_policy" "negative1" {
name = "apigateway-cloudwatch-logging"
role = aws_iam_role.apigateway_cloudwatch_logging.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["some:action"],
"Resource": "*"
}
]
}
EOF
}
Test it
Ship it 🚢 and relax 🌴
IAM Policy Grants AssumeRole Permission Across All Services
IAM roles should not allow all ("*") services or Principals to assume them.
Rule-specific references:
Option A: Remove the All Specifier from the Principal
assume_role_policy.Statement.Principal.AWS should not have a value of "*" (All).
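A minimal Python sketch of this check against a parsed assume-role policy (names and structure are illustrative assumptions):

```python
import json

def role_assumable_by_anyone(assume_role_policy_json: str) -> bool:
    """Flag Allow statements whose principal includes the "*" wildcard."""
    policy = json.loads(assume_role_policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal", {})
        if principal == "*":
            return True
        if isinstance(principal, dict) and principal.get("AWS") == "*":
            return True
    return False

open_policy = '{"Statement": [{"Action": "sts:AssumeRole", "Effect": "Allow", "Principal": {"Service": "ec2.amazonaws.com", "AWS": "*"}}]}'
print(role_assumable_by_anyone(open_policy))  # True
```

Removing the "AWS": "*" entry, as in the replacement pattern below, makes the check pass.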
Locate the following vulnerable pattern:
// Create a role which OpenShift instances will assume.
// This role has a policy saying it can be assumed by ec2
// instances.
resource "aws_iam_role" "positive1" {
name = "${var.name_tag_prefix}-openshift-instance-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com",
"AWS": "*"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
// This policy allows an instance to forward logs to CloudWatch, and
// create the Log Stream or Log Group if it doesn't exist.
resource "aws_iam_policy" "positive3" {
name = "${var.name_tag_prefix}-openshift-instance-forward-logs"
path = "/"
description = "Allows an instance to forward logs to CloudWatch"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:*:*:*"
]
}
]
}
EOF
}
// Attach the policies to the role.
resource "aws_iam_policy_attachment" "positive4" {
name = "${var.name_tag_prefix}-openshift-attachment-forward-logs"
roles = ["${aws_iam_role.openshift-instance-role.name}"]
policy_arn = "${aws_iam_policy.openshift-policy-forward-logs.arn}"
}
// Create an instance profile for the role.
resource "aws_iam_instance_profile" "positive5" {
name = "${var.name_tag_prefix}-openshift-instance-profile"
role = "${aws_iam_role.openshift-instance-role.name}"
}
Modify the config to something like the following:
// Create a role which OpenShift instances will assume.
// This role has a policy saying it can be assumed by ec2
// instances.
resource "aws_iam_role" "negative1" {
name = "${var.name_tag_prefix}-openshift-instance-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
// This policy allows an instance to forward logs to CloudWatch, and
// create the Log Stream or Log Group if it doesn't exist.
resource "aws_iam_policy" "negative2" {
name = "${var.name_tag_prefix}-openshift-instance-forward-logs"
path = "/"
description = "Allows an instance to forward logs to CloudWatch"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:*:*:*"
]
}
]
}
EOF
}
// Attach the policies to the role.
resource "aws_iam_policy_attachment" "negative3" {
name = "${var.name_tag_prefix}-openshift-attachment-forward-logs"
roles = ["${aws_iam_role.openshift-instance-role.name}"]
policy_arn = "${aws_iam_policy.openshift-policy-forward-logs.arn}"
}
// Create an instance profile for the role.
resource "aws_iam_instance_profile" "negative4" {
name = "${var.name_tag_prefix}-openshift-instance-profile"
role = "${aws_iam_role.openshift-instance-role.name}"
}
Test it
Ship it 🚢 and relax 🌴
IAM Policy Grants Full Permissions
IAM policies should not allow All ("*") in a statement Resource.
Rule-specific references:
Option A: Remove the All Specifier from the Resource Statement of the Policy
aws_iam_role_policy.policy.Statement.Resource should not have a value of "*" (All).
Locate the following vulnerable pattern:
resource "aws_iam_user" "positive1" {
name = "${local.resource_prefix.value}-user"
force_destroy = true
tags = {
Name = "${local.resource_prefix.value}-user"
Environment = local.resource_prefix.value
}
}
resource "aws_iam_access_key" "positive2" {
user = aws_iam_user.user.name
}
resource "aws_iam_user_policy" "positive3" {
name = "excess_policy"
user = aws_iam_user.user.name
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:*",
"s3:*",
"lambda:*",
"cloudwatch:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
output "username" {
value = aws_iam_user.user.name
}
output "secret" {
value = aws_iam_access_key.user.encrypted_secret
}
Modify the config to something like the following:
resource "aws_iam_user" "negative1" {
name = "${local.resource_prefix.value}-user"
force_destroy = true
tags = {
Name = "${local.resource_prefix.value}-user"
Environment = local.resource_prefix.value
}
}
resource "aws_iam_access_key" "negative2" {
user = aws_iam_user.user.name
}
resource "aws_iam_user_policy" "negative3" {
name = "excess_policy"
user = aws_iam_user.user.name
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:*",
"s3:*",
"lambda:*",
"cloudwatch:*"
],
"Effect": "Allow",
"Resource": "SomeResource"
}
]
}
EOF
}
output "username" {
value = aws_iam_user.user.name
}
output "secret" {
value = aws_iam_access_key.user.encrypted_secret
}
Test it
Ship it 🚢 and relax 🌴
OSLogin Disabled
Verifies that OS Login is enabled.
Setting enable-oslogin in project-wide metadata ensures that all instances in your project conform to the specified value (true or false).
After you enable OS Login on one or more instances in your project, those VMs accept connections only from user accounts that have the necessary IAM roles in your project or organization.
Enabling Compute Engine OS Login for a project ensures that SSH keys used to access instances are mapped to IAM users. If access is revoked for an IAM user, associated SSH keys are revoked as well. This streamlines handling compromised SSH key pairs and the process for revoking access.
Rule-specific references:
Option A: Make sure Google Compute Project Metadata Enable OSLogin is not false or undefined
Locate google_compute_project_metadata.metadata and set the enable-oslogin property value to true.
Locate the following vulnerable pattern:
resource "google_compute_project_metadata" "positive1" {
metadata = {
enable-oslogin = false
}
}
resource "google_compute_project_metadata" "positive2" {
metadata = {
foo = "bar"
}
}
Modify the config to something like the following:
resource "google_compute_project_metadata" "negative1" {
metadata = {
enable-oslogin = true
}
}
Test it
Ship it 🚢 and relax 🌴
No Drop Capabilities for Containers
Checks that Kubernetes Drop capabilities are set, to harden the container's security context.
Rule-specific references:
- Drop Capabilities
- Docker Security - Quick Reference
- Holistic Info-Sec for Web Developers: Container Capabilities risks, countermeasures
Option A: Security Context Capabilities Drop must be set
The security_context.capabilities.drop must have its value set, thus overriding the default of not dropping any capabilities.
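This check can be sketched in Python against a parsed container list (the dict layout mirrors the Terraform attributes; the function name is illustrative):

```python
def containers_missing_drop(containers):
    """Return names of containers whose security_context drops no capabilities."""
    missing = []
    for container in containers:
        # A container may omit security_context or capabilities entirely
        security_context = container.get("security_context") or {}
        capabilities = security_context.get("capabilities") or {}
        if not capabilities.get("drop"):
            missing.append(container.get("name"))
    return missing

containers = [
    {"name": "example",
     "security_context": {"capabilities": {"add": ["NET_BIND_SERVICE"]}}},
    {"name": "example2",
     "security_context": {"capabilities": {"drop": ["ALL"]}}},
]
print(containers_missing_drop(containers))  # ['example']
```

Note that the first container adds a capability but drops none, which is exactly the first vulnerable pattern below.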
Locate one of the following vulnerable patterns:
Adding but not dropping capabilities:
resource "kubernetes_pod" "test1" {
metadata {
name = "terraform-example"
}
spec {
container = [
{
image = "nginx:1.7.9"
name = "example"
security_context = {
capabilities = {
add = ["NET_BIND_SERVICE"]
}
}
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
},
{
image = "nginx:1.7.9"
name = "example2"
security_context = {
capabilities = {
drop = ["ALL"]
}
}
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
]
dns_config {
nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
searches = ["example.com"]
option {
name = "ndots"
value = 1
}
option {
name = "use-vc"
}
}
dns_policy = "None"
}
}
No security_context.capabilities.drop:
resource "kubernetes_pod" "test2" {
metadata {
name = "terraform-example"
}
spec {
container = [
{
image = "nginx:1.7.9"
name = "example"
security_context = {
allow_privilege_escalation = false
}
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
},
{
image = "nginx:1.7.9"
name = "example2"
security_context = {
capabilities = {
drop = ["ALL"]
}
}
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
]
dns_config {
nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
searches = ["example.com"]
option {
name = "ndots"
value = 1
}
option {
name = "use-vc"
}
}
dns_policy = "None"
}
}
No security_context at all:
resource "kubernetes_pod" "test3" {
metadata {
name = "terraform-example"
}
spec {
container = [
{
image = "nginx:1.7.9"
name = "example"
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
},
{
image = "nginx:1.7.9"
name = "example2"
security_context = {
capabilities = {
drop = ["ALL"]
}
}
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
]
dns_config {
nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
searches = ["example.com"]
option {
name = "ndots"
value = 1
}
option {
name = "use-vc"
}
}
dns_policy = "None"
}
}
Modify the config to something like the following, thus dropping capabilities:
resource "kubernetes_pod" "negative4" {
metadata {
name = "terraform-example"
}
spec {
container = [
{
image = "nginx:1.7.9"
name = "example"
security_context = {
capabilities = {
drop = ["ALL"]
}
}
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
},
{
image = "nginx:1.7.9"
name = "example2"
security_context = {
capabilities = {
drop = ["ALL"]
}
}
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
]
dns_config {
nameservers = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]
searches = ["example.com"]
option {
name = "ndots"
value = 1
}
option {
name = "use-vc"
}
}
dns_policy = "None"
}
}
Test it
Ship it 🚢 and relax 🌴
Containers With Sys Admin Capabilities
Containers should not have the CAP_SYS_ADMIN Linux capability.
Rule-specific references:
Option A: Security Context Capabilities should not add Sys Admin
kubernetes_pod spec.containers[n].security_context.capabilities.add should not include "SYS_ADMIN".
Locate the following vulnerable pattern:
resource "kubernetes_pod" "positive1" {
metadata {
name = "terraform-example"
}
spec {
container = [
{
image = "nginx:1.7.9"
name = "example"
security_context = {
capabilities = {
add = ["SYS_ADMIN"]
}
}
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
},
{
image = "nginx:1.7.9"
name = "example22222"
security_context = {
capabilities = {
add = ["SYS_ADMIN"]
}
}
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
]
// ...
}
}
resource "kubernetes_pod" "positive2" {
metadata {
name = "terraform-example"
}
spec {
container {
image = "nginx:1.7.9"
name = "example"
security_context {
capabilities {
add = ["SYS_ADMIN"]
}
}
env {
name = "environment"
value = "test"
}
port {
container_port = 8080
}
liveness_probe {
http_get {
path = "/nginx_status"
port = 80
http_header {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
// ...
}
}
Modify the config to something like the following:
resource "kubernetes_pod" "negative3" {
metadata {
name = "terraform-example"
}
spec {
container = [
{
image = "nginx:1.7.9"
name = "example"
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
},
{
image = "nginx:1.7.9"
name = "example2"
env = {
name = "environment"
value = "test"
}
port = {
container_port = 8080
}
liveness_probe = {
http_get = {
path = "/nginx_status"
port = 80
http_header = {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
]
// ...
}
}
resource "kubernetes_pod" "negative4" {
metadata {
name = "terraform-example"
}
spec {
container {
image = "nginx:1.7.9"
name = "example"
env {
name = "environment"
value = "test"
}
port {
container_port = 8080
}
liveness_probe {
http_get {
path = "/nginx_status"
port = 80
http_header {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
// ...
}
}
Test it
Ship it 🚢 and relax 🌴
Fixing Insecure Network Access
About Insecure Network Access
What is improper access control?
Improper access control is a vulnerability that occurs when a system does not properly restrict or enforce access to resources, such as files, directories, network resources, or application functions.
Examples of improper access control vulnerabilities include:
- Weak access controls: When access controls are weak or easily bypassed, attackers can gain access to sensitive resources or data by exploiting security weaknesses.
- Insufficient authorization checks: When authorization checks are insufficient, it can allow unauthorized users to access sensitive data or resources, or to perform actions that they are not authorized to do.
- Overly permissive access: When access controls are overly permissive, they can allow users to access resources or data that they do not need, increasing the risk of data breaches or other security incidents.
Check out these videos for a high-level explanation:
Missing function level access control
Missing object level access control
What is the impact of improper access control?
Improper access control can lead to various security threats, such as:
- Data breaches: Improper access control can allow attackers to access sensitive data, leading to data breaches, data loss, or unauthorized access to confidential information.
- Unauthorized access to resources: Attackers can exploit improper access control to gain unauthorized access to resources, such as servers, databases, and applications.
- Account takeover: Attackers can use improper access control to take over user accounts and gain access to sensitive data or resources.
How to prevent improper access control?
Here are some measures that can help ensure proper access control:
- Strong access controls: Implement strong access controls that restrict access to sensitive resources or data based on user roles and permissions.
- Proper user authentication and authorization: Implement proper user authentication and authorization mechanisms to ensure that only authorized users can access sensitive data and resources.
- Input validation and sanitization: Validate and sanitize user input before using it to access internal objects or data. Use regular expressions or input filters to remove or encode any special characters that could be used to access sensitive data or resources.
- Least privilege: Use the principle of least privilege to restrict access to resources to only what is necessary for each user role. This can help prevent attackers from gaining access to resources that they do not need to access.
- Regular security audits: Regularly audit your system for security vulnerabilities, including improper access control vulnerabilities. Use automated tools and manual testing to identify potential issues and fix them before they can be exploited.
References
Taxonomies
- OWASP Top 10 - A01 Broken Access Control
- CWE-284: Improper Access Control
- CWE-285: Improper Authorization
Explanation & Prevention
- OWASP: Broken Access Control
- OWASP: Authorization Testing
- OWASP: ASVS - V4 Access Control
- OWASP: Proactive Controls - C7 Enforce Access Controls
- OWASP: Authorization Cheat Sheet
Related CVEs
Training
In the context of Terraform, this vulnerability class identifies findings related to services being exposed publicly.
BigQuery Dataset Is Public
BigQuery dataset is anonymously or publicly accessible.
Rule-specific references:
Option A: Replace Access Special Group AllAuthenticatedUsers with a more restrictive option
access.special_group should not have "allAuthenticatedUsers" assigned.
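A minimal Python sketch of this check against the dataset's parsed access blocks (illustrative only):

```python
def dataset_is_public(access_blocks):
    """Flag access blocks that grant a role to allAuthenticatedUsers."""
    return any(
        block.get("special_group") == "allAuthenticatedUsers"
        for block in access_blocks
    )

print(dataset_is_public([{"role": "OWNER", "special_group": "allAuthenticatedUsers"}]))  # True
```

Granting the role to a specific identity instead, as in the replacement pattern below, makes the check pass.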
Locate the following vulnerable pattern:
resource "google_bigquery_dataset" "vulnerable" {
dataset_id = "example_dataset"
friendly_name = "test"
description = "This is a test description"
location = "EU"
default_table_expiration_ms = 3600000
labels = {
env = "default"
}
access {
role = "OWNER"
special_group = "allAuthenticatedUsers"
}
}
Modify the config to something like the following:
resource "google_bigquery_dataset" "not_vulnerable" {
dataset_id = "example_dataset"
friendly_name = "test"
description = "This is a test description"
location = "EU"
default_table_expiration_ms = 3600000
labels = {
env = "default"
}
access {
role = "OWNER"
user_by_email = google_service_account.bqowner.email
}
}
Test it
Ship it 🚢 and relax 🌴
Cloud Storage Bucket Is Publicly Accessible
Cloud Storage Bucket is anonymously or publicly accessible.
Rule-specific references:
Option A: Remove All members
None of the member/members values should contain "allUsers" or "allAuthenticatedUsers".
Locate the following vulnerable pattern:
resource "google_storage_bucket_iam_member" "positive1" {
bucket = google_storage_bucket.default.name
role = "roles/storage.admin"
member = "allUsers"
}
resource "google_storage_bucket_iam_member" "positive2" {
bucket = google_storage_bucket.default.name
role = "roles/storage.admin"
members = ["user:[email protected]","allAuthenticatedUsers"]
}
Modify the config to something like the following:
resource "google_storage_bucket_iam_member" "negative1" {
bucket = google_storage_bucket.default.name
role = "roles/storage.admin"
member = "user:[email protected]"
}
resource "google_storage_bucket_iam_member" "negative2" {
bucket = google_storage_bucket.default.name
role = "roles/storage.admin"
members = ["user:[email protected]","user:[email protected]"]
}
Test it
Ship it 🚢 and relax 🌴
EC2 Instance Using Default Security Group
EC2 instances should not use default security group(s) (Security Group(s) with a name of "default").
Rule-specific references:
Option A: Remove any Default Security Groups
Remove any default security groups from the configuration.
Add non-default security groups if needed.
Locate the following vulnerable pattern:
resource "aws_instance" "positive1" {
ami = "ami-003634241a8fcdec0"
instance_type = "t3.micro"
// ...
security_groups = [aws_security_group.default.id]
}
Or:
resource "aws_instance" "positive2" {
ami = "ami-003634241a8fcdec0"
instance_type = "t2.micro"
// ...
vpc_security_group_ids = [aws_security_group.default.id]
}
Modify the config to something like the following:
resource "aws_instance" "negative1" {
ami = "ami-003634241a8fcdec0"
instance_type = "t3.micro"
// ...
security_groups = [aws_security_group.sg.id]
}
Or:
resource "aws_instance" "negative2" {
ami = "ami-003634241a8fcdec0"
instance_type = "t2.micro"
// ...
vpc_security_group_ids = [aws_security_group.sg.id]
}
Test it
Ship it 🚢 and relax 🌴
Limit Access to AWS Resources
Option A: Ensure sensitive resources are not public
In the context of Terraform, when a specific resource is marked as publicly accessible, it means that attackers may be able to interact with it. Resources that are identified include the following types:
- aws_db_instance
- aws_dms_replication_instance
- aws_rds_cluster_instance
- aws_redshift_cluster
Follow the steps below:
Go through the issues that GuardRails identified in the PR/MR
Review the affected resources to determine whether they can be public
resource "aws_db_instance" "insecure" {
# ... other configuration ...
publicly_accessible = true
}
If not, then either remove the publicly_accessible argument or change it to publicly_accessible = false
Test the changes and ensure that everything is working as expected
Ship it 🚢 and relax 🌴
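The review step above can be sketched as a small Python predicate over parsed resource attributes. The resource dictionaries are illustrative, not real infrastructure:

```python
# Resource types the rule inspects (as listed above).
SENSITIVE_TYPES = {
    "aws_db_instance",
    "aws_dms_replication_instance",
    "aws_rds_cluster_instance",
    "aws_redshift_cluster",
}

def publicly_exposed(resources):
    """Names of sensitive resources that set publicly_accessible = true."""
    return [
        r["name"]
        for r in resources
        if r["type"] in SENSITIVE_TYPES and r.get("publicly_accessible", False)
    ]

# Illustrative parsed resources, not real infrastructure.
resources = [
    {"type": "aws_db_instance", "name": "insecure", "publicly_accessible": True},
    {"type": "aws_db_instance", "name": "fixed", "publicly_accessible": False},
    {"type": "aws_s3_bucket", "name": "other"},  # not a type this rule checks
]
print(publicly_exposed(resources))  # → ['insecure']
```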
Option B: Ensure inbound traffic on AWS is restricted
AWS Security Groups can be configured to allow all incoming traffic, which violates security best practices.
Go through the issues that GuardRails identified in the PR/MR
Review the aws_security_group or aws_security_group_rule resources where cidr_blocks contain /0
resource "aws_security_group" "allow_tls" {
name = "allow_tls"
description = "Allow TLS inbound traffic"
vpc_id = aws_vpc.main.id
ingress {
description = "TLS from VPC"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
Ensure that the cidr_blocks are limited to the required ports and IP address ranges
Option C: Ensure inbound traffic on Azure is restricted
Azure Network Security Groups can be configured to allow all incoming traffic, which violates security best practices.
Go through the issues that GuardRails identified in the PR/MR
Review the azurerm_network_security_rule resources where source_address_prefix contains /0 or *
resource "azurerm_network_security_rule" "example" {
name = "test123"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "*"
source_address_prefix = "*"
destination_address_prefix = "*"
resource_group_name = azurerm_resource_group.example.name
network_security_group_name = azurerm_network_security_group.example.name
}
Ensure that the source_address_prefix is limited to the required IP address ranges
Option D: Ensure inbound traffic on GCP is restricted
GCP firewalls can be configured to allow all incoming traffic, which violates security best practices.
Go through the issues that GuardRails identified in the PR/MR
Review the google_compute_firewall resources where source_ranges contain /0
resource "google_compute_firewall" "project-firewall-allow-ssh" {
name = "${var.vpc_name}-allow-something"
network = "${google_compute_network.project-network.self_link}"
# ...
source_ranges = ["0.0.0.0/0"]
}
Ensure that the source_ranges are limited to the required IP address ranges
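The /0 checks in Options B, C, and D all reduce to spotting an unrestricted address range. A small Python sketch using the standard ipaddress module; the shorthand forms list is an assumption based on values commonly seen in Azure NSG rules:

```python
import ipaddress

# Shorthand "all sources" forms seen in NSG rules (illustrative, lowercase).
ALL_SHORTHANDS = {"*", "0.0.0.0", "internet", "any"}

def is_unrestricted(prefix):
    """True if a CIDR or source prefix effectively allows all addresses."""
    if prefix.lower() in ALL_SHORTHANDS:
        return True
    try:
        # strict=False masks host bits, so "34.15.11.3/0" normalizes to /0.
        return ipaddress.ip_network(prefix, strict=False).prefixlen == 0
    except ValueError:
        return False

print([is_unrestricted(p) for p in ("0.0.0.0/0", "::/0", "*", "10.3.0.0/18")])
# → [True, True, True, False]
```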
Network ACL With Unrestricted Access To RDP
RDP (TCP:3389) should not be public in an AWS Network ACL.
Rule-specific references:
Option A: Make sure that RDP port 3389 is not accessible to the world via Network ACL
RDP port (3389) ingress should not be accessible to the world (0.0.0.0/0).
Locate one of the following vulnerable patterns or a pattern where the RDP port ingress is open to the world:
Vulnerable Pattern 1:
provider "aws" {
region = "us-east-1"
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
resource "aws_network_acl" "positive1" {
vpc_id = aws_vpc.main.id
egress = [
{
protocol = "tcp"
rule_no = 200
action = "allow"
cidr_block = "10.3.0.0/18"
from_port = 443
to_port = 443
}
]
ingress = [
{
protocol = "tcp"
rule_no = 100
action = "allow"
cidr_block = "0.0.0.0/0"
from_port = 3389
to_port = 3389
}
]
tags = {
Name = "main"
}
}
Vulnerable Pattern 2:
provider "aws" {
region = "us-east-1"
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
resource "aws_network_acl" "positive2" {
vpc_id = aws_vpc.main.id
tags = {
Name = "main"
}
}
resource "aws_network_acl_rule" "positive2" {
network_acl_id = aws_network_acl.positive2.id
rule_number = 100
egress = false
protocol = "tcp"
rule_action = "allow"
from_port = 3389
to_port = 3389
cidr_block = "0.0.0.0/0"
}
Modify the config to something like the following:
Replacement Pattern 1:
provider "aws" {
region = "us-east-1"
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
resource "aws_network_acl" "negative1" {
vpc_id = aws_vpc.main.id
egress = [
{
protocol = "tcp"
rule_no = 200
action = "allow"
cidr_block = "10.3.0.0/18"
from_port = 443
to_port = 443
}
]
ingress = [
{
protocol = "tcp"
rule_no = 100
action = "allow"
cidr_block = "10.3.0.0/18"
from_port = 3389
to_port = 3389
}
]
tags = {
Name = "main"
}
}
Replacement Pattern 2:
provider "aws" {
region = "us-east-1"
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
resource "aws_network_acl" "negative2" {
vpc_id = aws_vpc.main.id
tags = {
Name = "main"
}
}
resource "aws_network_acl_rule" "negative2" {
network_acl_id = aws_network_acl.negative2.id
rule_number = 100
egress = false
protocol = "tcp"
rule_action = "allow"
from_port = 3389
to_port = 3389
cidr_block = "10.3.0.0/18"
}
More Replacement Patterns:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.7.0"
name = "my-vpc"
cidr = "10.0.0.0/16"
azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
enable_nat_gateway = true
enable_vpn_gateway = true
tags = {
Terraform = "true"
Environment = "dev"
}
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.7.0"
name = "my-vpc"
cidr = "10.0.0.0/16"
azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
default_network_acl_ingress = [
{
"action" : "allow",
"cidr_block" : "0.0.0.0/0",
"from_port" : 0,
"protocol" : "-1",
"rule_no" : 100,
"to_port" : 0
},
{
"action" : "allow",
"cidr_block" : "10.3.0.0/18",
"from_port" : 0,
"protocol" : "-1",
"rule_no" : 3389,
"to_port" : 0
}
]
enable_nat_gateway = true
enable_vpn_gateway = true
tags = {
Terraform = "true"
Environment = "dev"
}
}
Test it
Although the replacement patterns presented above will pass this rule, a much better option, where possible, is to remove the RDP port entirely and use a VPN or an SSH tunnel to carry all RDP traffic.
Ship it 🚢 and relax 🌴
RDP Access Is Not Restricted
Check if the Google compute firewall allows unrestricted RDP access (port 3389):
- allow.protocol with value "tcp" or "udp"
- allow.ports contains port 3389 somewhere in its range
- source_ranges contains one or both of "0.0.0.0/0" (IPv4 all) | "::/0" (IPv6 all)
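The "contains port 3389 somewhere in its range" condition can be sketched as a small Python helper over port specs as they appear in allow.ports (single ports like "80" or ranges like "1000-2000"):

```python
def ports_include(port_specs, target):
    """True if any spec ("3389" or "1000-2000") covers the target port."""
    for spec in port_specs:
        low, _, high = spec.partition("-")
        if int(low) <= target <= int(high or low):
            return True
    return False

# A range like "21-3390" hides 3389 without naming it explicitly.
print(ports_include(["80", "8080", "1000-2000"], 3389),
      ports_include(["21-3390"], 3389))  # → False True
```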
Rule-specific references:
Option A: Remove Google Cloud Platform (GCP) Compute Firewall Rule Allowing Unrestricted RDP Ingress Traffic
There should not be a google_compute_firewall resource with its direction property set (or defaulting) to "INGRESS", allowing port "3389" with unrestricted source_ranges ("0.0.0.0/0" (IPv4 all) or "::/0" (IPv6 all)) and a protocol value of "tcp" or "udp".
Locate the following vulnerable pattern:
resource "google_compute_firewall" "positive1" {
name = "test-firewall"
network = google_compute_network.default.name
direction = "INGRESS"
allow {
protocol = "icmp"
}
allow {
protocol = "tcp"
ports = ["80", "8080", "1000-2000","3389"]
}
source_tags = ["web"]
source_ranges = ["0.0.0.0/0"]
}
resource "google_compute_firewall" "positive2" {
name = "test-firewall"
network = google_compute_network.default.name
allow {
protocol = "udp"
ports = ["80", "8080", "1000-2000","21-3390"]
}
source_tags = ["web"]
source_ranges = ["::/0"]
}
Modify the config to something like the following:
resource "google_compute_firewall" "negative1" {
name = "test-firewall"
network = google_compute_network.default.name
allow {
protocol = "icmp"
}
allow {
protocol = "tcp"
ports = ["80", "8080", "1000-2000"]
}
source_tags = ["web"]
}
Test it
Ship it 🚢 and relax 🌴
RDP Is Exposed To The Internet
Port 3389 (Remote Desktop) should not be exposed to the internet.
Rule-specific references:
Option A: Make sure that RDP is not exposed to the world
A single azurerm_network_security_rule should not combine all of the following property values:
- destination_port_range allowing RDP port 3389
- protocol of "UDP" or "TCP"
- access of "Allow"
- source_address_prefix allowing all sources (which can come in several forms)
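Taken together, the conditions above can be sketched as a single Python predicate. The field names match the azurerm_network_security_rule attributes; the "all sources" shorthand handling is an illustrative assumption:

```python
def exposes_rdp(rule):
    """True if an NSG rule allows RDP (3389) from any source."""
    def covers_3389(port_range):
        for part in port_range.split(","):
            low, _, high = part.partition("-")
            if part == "*" or int(low) <= 3389 <= int(high or low):
                return True
        return False

    src = rule["source_address_prefix"]
    open_source = src in ("*", "0.0.0.0") or src.endswith("/0")
    return (
        rule["access"] == "Allow"
        and rule["protocol"].upper() in ("TCP", "UDP")
        and covers_3389(rule["destination_port_range"])
        and open_source
    )

# Mirrors Vulnerable Pattern 5 below (port list plus a /0 source prefix).
rule = {"access": "Allow", "protocol": "TCP",
        "destination_port_range": "3389,3391",
        "source_address_prefix": "34.15.11.3/0"}
print(exposes_rdp(rule))  # → True
```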
Locate the following vulnerable patterns:
Vulnerable Pattern 1:
resource "azurerm_network_security_rule" "positive1" {
name = "example"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "TCP"
source_port_range = "*"
destination_port_range = "3389"
source_address_prefix = "*"
destination_address_prefix = "*"
resource_group_name = azurerm_resource_group.example.name
network_security_group_name = azurerm_network_security_group.example.name
}
Vulnerable Pattern 2:
resource "azurerm_network_security_rule" "positive2" {
name = "example"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "TCP"
source_port_range = "*"
destination_port_range = "3389-3390"
source_address_prefix = "*"
destination_address_prefix = "*"
resource_group_name = azurerm_resource_group.example.name
network_security_group_name = azurerm_network_security_group.example.name
}
Vulnerable Pattern 3:
resource "azurerm_network_security_rule" "positive3" {
name = "example"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "TCP"
source_port_range = "*"
destination_port_range = "3388-3389"
source_address_prefix = "*"
destination_address_prefix = "*"
resource_group_name = azurerm_resource_group.example.name
network_security_group_name = azurerm_network_security_group.example.name
}
Vulnerable Pattern 4:
resource "azurerm_network_security_rule" "positive4" {
name = "example"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "TCP"
source_port_range = "*"
destination_port_range = "3389"
source_address_prefix = "0.0.0.0"
destination_address_prefix = "*"
resource_group_name = azurerm_resource_group.example.name
network_security_group_name = azurerm_network_security_group.example.name
}
Vulnerable Pattern 5:
resource "azurerm_network_security_rule" "positive5" {
name = "example"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "TCP"
source_port_range = "*"
destination_port_range = "3389,3391"
source_address_prefix = "34.15.11.3/0"
destination_address_prefix = "*"
resource_group_name = azurerm_resource_group.example.name
network_security_group_name = azurerm_network_security_group.example.name
}
Vulnerable Pattern 6:
resource "azurerm_network_security_rule" "positive6" {
name = "example"
priority =