Cloud Security Jobs

Tyler Wall
22 min read · Feb 23, 2025

Part One

In this blog post we’re going to get hands-on with Cloud Security. One of the biggest challenges people face is that they can’t get a job in Cloud Security because they don’t have experience, and they can’t get experience without a job. This series will focus on building that experience and turning it into a freelance Cloud Security gig.

Cloud computing has grown by leaps and bounds in the last decade, and most if not all companies are migrating to one of the big three players in the Cloud: AWS, Azure, and GCP. While most companies operate in a multi-cloud approach, meaning they run workloads in two or more of the big three, we will be focusing specifically on Azure in these labs.

I am an advocate of the Microsoft Cloud and I feel it’s the safest bet for your career: most large enterprises have an Active Directory infrastructure, so it makes the most sense for those companies to move into the Azure cloud. I am betting my future that Azure will dominate the cloud market by the end of the 2020s.

Microsoft has a holistic solution for managing infrastructure in the cloud, and its cloud security products aren’t too shabby either. I enjoy using the Defender suite of products, I know for a fact they’re being widely adopted, and they will be the standard security tooling for many, many large enterprises in the future.

By the end of this series, you will be able to say you have experience deploying and managing Azure infrastructure as code, scanning infrastructure code for misconfigurations, and using open source tools to scan your Azure environment against security best practices.

Cloud Security certifications are important, but what’s more important is that you have hands-on experience with the Cloud and understand why the certification bodies think this information is important to know. BELIEVE ME, it won’t completely make sense just by studying for an exam. You have to do it for yourself for it to click. At least, that’s how it was for me. Then you can put REAL experience on your resume that will work for you as you apply for your next job, or you can create Fiverr or Upwork services to conduct independent assessments for small-to-medium-sized businesses. I am excited to start this journey with you guys, and if you didn’t already complete the lab posted yesterday for the honeypot project, your first task is to sign up and get your free credits from Azure. The credits are valid for a month and I hope to have this wrapped up before they expire, but no promises!

Part Two

The first thing we will cover in this series is what Infrastructure as Code is and why it is important.

Infrastructure as Code (IaC) is about using code to manage computing infrastructure in the cloud rather than pointing and clicking in the GUI. This includes things like operating systems, databases, and storage, to name a few. Traditionally, we had to spend lots of time setting up and maintaining infrastructure, going through lengthy processes whenever we wanted to create something new or delete entire environments. With IaC, you define what you want your infrastructure to look like in code without worrying about all the detailed steps to get there. For instance, you can just say that you want a Debian server with 12 GB of RAM and 80 GB of disk space, and the tooling figures out everything it needs to do to make that happen.
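
To make that concrete, here is a rough sketch of what “describe the end state” looks like in Terraform’s HCL. The resource type is real (azurerm_linux_virtual_machine), but the names, VM size, image reference, and the network interface ID are hypothetical placeholders, and a working configuration would also need a provider block and networking resources around it.

# Declarative sketch: we state WHAT we want, not the steps to build it.
# All names and IDs below are placeholders.
resource "azurerm_linux_virtual_machine" "example" {
  name                  = "example-vm"
  resource_group_name   = "example-rg"
  location              = "eastus"
  size                  = "Standard_B2ms"          # CPU/RAM picked by VM size
  admin_username        = "azureuser"
  network_interface_ids = ["<nic-id-placeholder>"] # normally a reference, e.g. azurerm_network_interface.example.id

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
    disk_size_gb         = 80                      # the 80 GB of disk space we asked for
  }

  source_image_reference {                         # the Debian image we asked for (illustrative values)
    publisher = "Debian"
    offer     = "debian-12"
    sku       = "12"
    version   = "latest"
  }

  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }
}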

Benefits of Infrastructure as Code

Automation is a key goal in computing, and IaC is a way to automate infrastructure management.

There are several benefits of using IaC, and one of these is easy environment duplication. You can use the same IaC to deploy an environment in one location that you do in another. If a business has IaC describing an entire regional branch’s environment, including servers and networking, it can copy the code and execute it again to set up a new branch location.
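
As a small illustration of that duplication (the variable name and regions are assumed, not from the labs), a single Terraform definition can be stamped out once per location:

# Hypothetical sketch: one definition reused for every branch location.
variable "branch_locations" {
  type    = list(string)
  default = ["eastus", "westeurope"] # add a region to add a branch
}

resource "azurerm_resource_group" "branch" {
  for_each = toset(var.branch_locations)
  name     = "branch-${each.value}-rg"
  location = each.value
}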

Another benefit of using IaC is reduced configuration errors. Manual configuration is error-prone, so automating it with IaC reduces mistakes. It also makes error checking more streamlined. Later in this course we will use tools to check IaC configurations for issues, but for now, just know that you can take a piece of IaC code and evaluate it for misconfigurations before you actually deploy it.

The last benefit I want to cover for IaC is the ability to build on and branch environments easily. For instance, if a new feature like a machine learning module is invented, developers can branch the IaC to deploy and test it without affecting the main application.

How does IaC work?

IaC works by describing a system’s architecture and functionality, just like software code describes an application. It uses configuration files, treated like source code, to manage virtualized resources in the cloud. These configuration files can be maintained under source control as part of the overall codebase.

Immutable vs. Mutable Infrastructure

There are two approaches to IaC: mutable and immutable infrastructure.

In mutable infrastructure, components are changed in production while the service continues to operate normally.

With immutable infrastructure, components are set and assembled to create a full service or application. If any change is required, the entire set of components has to be deleted and redeployed to be updated.

Approaches to IaC

There are two basic approaches to IaC: declarative and imperative.

Declarative describes the desired end state of a system, and the IaC solution creates it accordingly. It’s simple to use if the developer knows what components and settings are needed.

Imperative describes all the steps to set up resources to reach the desired running state. It’s more complex, but necessary for intricate infrastructure deployments where the order of events matters.

Terraform IaC

An open-source tool, Terraform, takes an immutable, declarative approach and uses its own language, HashiCorp Configuration Language (HCL). HCL is written in Go and is considered one of the easiest languages to pick up for IaC. I have the Terraform Associate certification and it took me all of three days to pick up the language. By the end of these labs, I’d highly suggest picking up a study guide for the exam, since you’ll already be two-thirds of the way there.
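
For a quick taste of the syntax (all of the names here are illustrative, not part of the labs), HCL is mostly blocks, arguments, and expressions:

# A few HCL building blocks; the names are made up for illustration.
variable "environment" {
  type    = string
  default = "dev"
}

locals {
  common_tags = {
    environment = var.environment
    managed_by  = "terraform"
  }
}

output "tags" {
  value = local.common_tags
}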

With Terraform, you can use the same configuration for multiple cloud providers. And since many organizations today opt for the hybrid cloud model, Terraform can easily be called the most popular IaC tool.

Terraform is capable of both provisioning and configuration management, but it’s inherently a provisioning tool that uses cloud provider APIs to manage required resources. And since it natively and easily handles the orchestration of new infrastructure, it’s more equipped to build immutable infrastructures, where you have to replace components fully to make changes.

Terraform uses state files to manage infrastructure resources and track changes. State files record everything Terraform builds, so you can easily refer to them. We’ll get more into this later.

Often considered an obvious choice for an IaC tool, Terraform is what we will be using in this course. So let’s get started.

Part Three

We first need to install Terraform, and then we will complete our very first Terraform lifecycle. Follow along as we install Terraform on both Mac and Windows, then proceed with the instructions.

Installing Terraform to Windows

curl.exe -O https://releases.hashicorp.com/terraform/0.12.26/terraform_0.12.26_windows_amd64.zip

Expand-Archive terraform_0.12.26_windows_amd64.zip

Rename-Item -path .\terraform_0.12.26_windows_amd64\ .\terraform

Installing Terraform to Mac

brew install terraform
terraform -install-autocomplete

Running your first Terraform

With Terraform there is a lifecycle for a resource and it can be broken down into four phases: Init, Plan, Apply, and Destroy.

  • init — Initialize the (local) Terraform working directory. Usually executed only once per session.
  • plan — Compare the Terraform state with the as-is state in the cloud, then build and display an execution plan. This does not change the deployment (read-only).
  • apply — Apply the plan from the plan phase. This potentially changes the deployment (read and write).
  • destroy — Destroy all resources that are governed by this specific Terraform environment.

This article assumes that you have created an Azure account and subscription. The first thing we will do is install the Azure CLI tool and configure it to be used with Terraform.

Install the Azure CLI Tool

Install the Azure CLI tool with brew on macOS:

brew update && brew install azure-cli

To install the Azure CLI using PowerShell in Windows, start PowerShell as administrator and run the following command:

$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://aka.ms/installazurecliwindows -OutFile .\AzureCLI.msi; Start-Process msiexec.exe -Wait -ArgumentList '/I AzureCLI.msi /quiet'; Remove-Item .\AzureCLI.msi

You can now run the Azure CLI with the az command from Windows Command Prompt, PowerShell, or the Mac Terminal.

You will use the Azure CLI tool to authenticate with Azure. Terraform must authenticate to Azure to create infrastructure. In your terminal, use the Azure CLI tool to set up your account permissions locally.

az login

You have now logged in using the account you created earlier. In the terminal output, find the ID of the subscription that you want to use:

{
  "cloudName": "AzureCloud",
  "homeTenantId": "0envbwi39-home-Tenant-Id",
  "id": "35akss-subscription-id",
  "isDefault": true,
  "managedByTenants": [],
  "name": "Subscription-Name",
  "state": "Enabled",
  "tenantId": "0envbwi39-TenantId",
  "user": {
    "name": "your-username@domain.com",
    "type": "user"
  }
}

Once you have chosen the account subscription ID, set the account with the Azure CLI.

az account set --subscription "35akss-subscription-id"

Next, we create a Service Principal. A Service Principal is an application within Azure Active Directory with the authentication tokens Terraform needs to perform actions on your behalf. Update the <SUBSCRIPTION_ID> with the subscription ID you specified in the previous step.

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<SUBSCRIPTION_ID>"

The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check them into your source control.

{
  "appId": "xxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx",
  "displayName": "azure-cli-2022-xxxx",
  "password": "xxxxxx~xxxxxx~xxxxx",
  "tenant": "xxxxx-xxxx-xxxxx-xxxx-xxxxx"
}

Next you need to set your environment variables. HashiCorp recommends setting these values as environment variables rather than saving them in your Terraform configuration. Open a Mac terminal or PowerShell and input the values output by the previous command, along with the subscription ID from the earlier step.

# For Mac Terminal
export ARM_CLIENT_ID="<APPID_VALUE>"
export ARM_CLIENT_SECRET="<PASSWORD_VALUE>"
export ARM_SUBSCRIPTION_ID="<SUBSCRIPTION_ID>"
export ARM_TENANT_ID="<TENANT_VALUE>"

# For PowerShell
$env:ARM_CLIENT_ID = "APPID_VALUE"
$env:ARM_CLIENT_SECRET = "PASSWORD_VALUE"
$env:ARM_TENANT_ID = "TENANT_VALUE"
$env:ARM_SUBSCRIPTION_ID = "SUBSCRIPTION_ID"

Install Visual Studio Code and Setup Environment

Great! We are all configured to use Azure now. The next thing we are going to do is open up a terminal and install Visual Studio Code by issuing this command on a Mac:

brew install visual-studio-code

Or, on a Windows machine, download the installer from the Visual Studio Code website.

Next, in the terminal on Mac we will issue the following commands to create a directory that will contain our Terraform configuration:

mkdir ~/tf-exercise-1
cd ~/tf-exercise-1

And open up a file for main.tf

code main.tf

On Windows, create a folder anywhere called “tf-exercise-1”, create a new file called “main” with the file extension “.tf”, and open that file with Visual Studio Code.

Now we need to write the configuration to create a new resource group. Copy and paste the code snippet below into the “main.tf” file:

# Configure the Azure provider
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0.2"
    }
  }

  required_version = ">= 1.1.0"
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "myTFResourceGroup"
  location = "westus2"
}

Note: The location of your resource group is hardcoded in this example. If you do not have access to the westus2 region, update the main.tf file with an Azure region you can use. This is a complete configuration that Terraform can apply. In the following sections we will review each block of the configuration in more detail.
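
If you would rather not hardcode the region, one option (a small sketch with an assumed variable name, not part of the original configuration) is to pull it from a variable and swap the resource block accordingly:

# Optional sketch: parameterize the region instead of hardcoding it.
variable "location" {
  type    = string
  default = "westus2" # change this to a region you have access to
}

resource "azurerm_resource_group" "rg" {
  name     = "myTFResourceGroup"
  location = var.location
}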

Terraform Block

The terraform {} block contains Terraform settings, including the required providers Terraform will use to provision your infrastructure. For each provider, the source attribute defines an optional hostname, a namespace, and the provider type. Terraform installs providers from the Terraform Registry by default. In this example configuration, the azurerm provider’s source is defined as hashicorp/azurerm, which is shorthand for registry.terraform.io/hashicorp/azurerm.

You can also define a version constraint for each provider in the required_providers block.

The version attribute is optional, but we recommend using it to enforce the provider version. Without it, Terraform will always use the latest version of the provider, which may introduce breaking changes.
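
As a quick illustration of the constraint syntax (the version numbers here are examples only):

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # "~> 3.0.2" allows 3.0.x patch releases only;
      # "~> 3.0" would allow any 3.x release;
      # ">= 3.0.2, < 4.0.0" spells a range out explicitly.
      version = "~> 3.0.2"
    }
  }
}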

Providers

The provider block configures the specified provider, in this case azurerm. A provider is a plugin that Terraform uses to create and manage your resources. You can define multiple provider blocks in a Terraform configuration to manage resources from different providers.
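
For example, a configuration could hold two azurerm provider blocks, distinguished by an alias, to manage resources in two subscriptions. This is a sketch; the variable and resource names are hypothetical.

provider "azurerm" {
  features {}
}

# Second configuration of the same provider, selected with "alias".
provider "azurerm" {
  alias           = "sandbox"
  subscription_id = var.sandbox_subscription_id # hypothetical variable
  features {}
}

# A resource opts into the aliased provider explicitly.
resource "azurerm_resource_group" "sandbox_rg" {
  provider = azurerm.sandbox
  name     = "sandbox-rg"
  location = "westus2"
}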

Resource

Use resource blocks to define components of your infrastructure. A resource might be a physical component such as a server, or it can be a logical resource such as a Heroku application.

Resource blocks have two strings before the block: the resource type and the resource name. In this example, the resource type is azurerm_resource_group and the name is rg.

The prefix of the type maps to the name of the provider. In the example configuration, Terraform manages the azurerm_resource_group resource with the azurerm provider.

Together, the resource type and resource name form a unique ID for the resource. For example, the ID for your resource group is azurerm_resource_group.rg.

Resource blocks contain arguments which you use to configure the resource. The Azure provider documentation documents supported resources and their configuration options, including azurerm_resource_group and its supported arguments.
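
That resource address is also how other blocks refer to the resource group. As a sketch (the storage account name below is made up and would need to be globally unique), a second resource can reuse the group’s name and location so the two always stay in sync:

resource "azurerm_storage_account" "example" {
  name                     = "mytfstorage12345" # hypothetical, must be globally unique
  resource_group_name      = azurerm_resource_group.rg.name     # reference by address
  location                 = azurerm_resource_group.rg.location # reference by address
  account_tier             = "Standard"
  account_replication_type = "LRS"
}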

Initialize your Terraform configuration

Initialize your tf-exercise-1 directory in your terminal. The terraform commands work the same on any operating system. Your output should look similar to this:

terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/azurerm versions matching "~> 3.0.2"...
- Installing hashicorp/azurerm v3.0.2...
- Installed hashicorp/azurerm v3.0.2 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure.

All Terraform commands should now work. If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.

Format and validate the configuration

We recommend using consistent formatting in all of your configuration files. The terraform fmt command automatically updates configurations in the current directory for readability and consistency.

Format your configuration. Terraform will print out the names of the files it modified, if any. In this case, your configuration file was already formatted correctly, so Terraform won’t return any file names.

terraform fmt

You can also make sure your configuration is syntactically valid and internally consistent by using the terraform validate command. The example configuration provided above is valid, so Terraform will return a success message.

terraform validate

Success! The configuration is valid.

Apply your Terraform Configuration

Run the terraform apply command to apply your configuration. This output shows the execution plan and will prompt you for approval before proceeding. If anything in the plan seems incorrect or dangerous, it is safe to abort here with no changes made to your infrastructure. Type yes at the confirmation prompt to proceed.

terraform apply
An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols:

+ create

Terraform will perform the action of creating a resource group:

  # azurerm_resource_group.rg will be created
  + resource "azurerm_resource_group" "rg" {
      + id       = (known after apply)
      + location = "westus2"
      + name     = "myTFResourceGroup"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?

Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes
azurerm_resource_group.rg: Creating...
azurerm_resource_group.rg: Creation complete after 1s [id=/subscriptions/c9ed8610-47a3-4107-a2b2-a322114dfb29/resourceGroups/myTFResourceGroup]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Navigate to the Azure portal in your web browser to check to make sure the resource group was created.

Inspect your state

When you apply your configuration, Terraform writes data into a file called terraform.tfstate. This file contains the IDs and properties of the resources Terraform created so that it can manage or destroy those resources going forward. Your state file contains all of the data in your configuration and could also contain sensitive values in plaintext, so do not share it or check it in to source control.
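
One common way to keep the state file out of your repository entirely (not required for this lab, and every name below is an assumption) is a remote backend such as an Azure storage account:

terraform {
  # Sketch of a remote backend; the storage account and container
  # would have to exist already, and the names are placeholders.
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage123"
    container_name       = "tfstate"
    key                  = "tf-exercise-1.tfstate"
  }
}

With a backend block added, terraform init offers to migrate the existing state, and the local terraform.tfstate is no longer the copy you have to protect.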

Inspect the current state using terraform show.

terraform show
# azurerm_resource_group.rg:
resource "azurerm_resource_group" "rg" {
    id       = "/subscriptions/c9ed8610-47a3-4107-a2b2-a322114dfb29/resourceGroups/myTFResourceGroup"
    location = "westus2"
    name     = "myTFResourceGroup"
}

When Terraform created this resource group, it also gathered the resource’s properties and meta-data. These values can be referenced to configure other resources or outputs.
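
For instance, an output block appended to main.tf (an optional sketch, not part of the original configuration) exposes one of those gathered properties after every apply:

# Surface the resource group's Azure ID after "terraform apply".
output "resource_group_id" {
  value = azurerm_resource_group.rg.id
}

Run terraform apply again, then terraform output, to see the value.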

To review the information in your state file, use the state command. If you have a long state file, you can see a list of the resources you created with Terraform by using the list subcommand.

terraform state list
azurerm_resource_group.rg

If you run terraform state, you will see a full list of available commands to view and manipulate the configuration’s state.

terraform state
Usage: terraform state <subcommand> [options] [args]

This command has subcommands for advanced state management. These subcommands can be used to slice and dice the Terraform state. This is sometimes necessary in advanced cases. For your safety, all state management commands that modify the state create a timestamped backup of the state prior to making modifications.

The structure and output of the commands is specifically tailored to work well with the common Unix utilities such as grep, awk, etc. We recommend using those tools to perform more advanced state tasks.

Terraform Destroy

Lastly, issue the terraform destroy command to complete the lifecycle and undo the changes that you made. Terraform keeps a record of everything it created in the state file, so it knows exactly what to undo.

terraform destroy

  # azurerm_resource_group.rg will be destroyed
  - resource "azurerm_resource_group" "rg" {
      - id       = "/subscriptions/b7b18fdb-6e24-4934-a25e-2957c9e62d05/resourceGroups/myTFResourceGroup" -> null
      - location = "westus2" -> null
      - name     = "myTFResourceGroup" -> null
      - tags     = {} -> null
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?

You have now completed your very first Terraform lifecycle. Congratulations! It’s fairly simple; the configuration files get more complex from here, but the steps and lifecycle remain the same. We just created a resource group in Azure, and we will continue the Terraform exercises by doing something a little more complex: deploying a honeypot using Terraform.

Part Four

In this lab we are going to continue our Terraform exercises by deploying a honeypot via Terraform. If you have been following along, previously on this blog I had you install T-Pot manually using the GUI in Azure. There’s a much easier way to do this, so let’s get rollin’.

Create the Terraform Configuration File

First, in the terminal on Mac we will issue the following commands to create a directory that will contain our Terraform configuration:

mkdir ~/tpot
cd ~/tpot

And open up a file for main.tf

code main.tf

On Windows, create a folder anywhere called “tpot”, create a new file called “main” with the file extension “.tf”, and open that file with Visual Studio Code.

Now we need to write the configuration to create a few new resources. Copy and paste the code snippet below into the “main.tf” file:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.90.0"
    }
  }
}

provider "azurerm" {
  # Configuration options
  features {}
}

variable "prefix" {
  default = "tpot"
}

resource "azurerm_resource_group" "tpot-rg" {
  name     = "${var.prefix}-resources"
  location = "East US"
}

resource "azurerm_virtual_network" "main" {
  name                = "${var.prefix}-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.tpot-rg.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_virtual_machine" "main" {
  depends_on            = [azurerm_resource_group.tpot-rg]
  name                  = "${var.prefix}-vm"
  location              = azurerm_resource_group.tpot-rg.location
  resource_group_name   = azurerm_resource_group.tpot-rg.name
  network_interface_ids = [azurerm_network_interface.tpot-vm-nic.id]
  vm_size               = "Standard_A2m_v2"

  # Delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  # Delete the data disks automatically when deleting the VM
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "canonical"
    offer     = "ubuntu-24_04-lts"
    sku       = "minimal-gen1"
    version   = "latest"
  }

  storage_os_disk {
    name              = "tpot-disk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "azureuser"
    admin_password = "CyberNOW!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

# Create Security Group to access linux
resource "azurerm_network_security_group" "tpot-nsg" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "linux-vm-nsg"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name

  security_rule {
    name                       = "AllowALL"
    description                = "AllowALL"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "AllowSSH"
    description                = "Allow SSH"
    priority                   = 150
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }
}

# Associate the linux NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "tpot-vm-nsg-association" {
  depends_on                = [azurerm_resource_group.tpot-rg]
  subnet_id                 = azurerm_subnet.internal.id
  network_security_group_id = azurerm_network_security_group.tpot-nsg.id
}

# Get a Static Public IP
resource "azurerm_public_ip" "tpot-vm-ip" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "tpot-vm-ip"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name
  allocation_method   = "Static"
}

# Create Network Card for linux VM
resource "azurerm_network_interface" "tpot-vm-nic" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "tpot-vm-nic"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.tpot-vm-ip.id
  }
}

output "public_ip" {
  value = azurerm_public_ip.tpot-vm-ip.ip_address
}

Something I’m just going to note here, because it’s difficult information to find: if you want to find the SKU of a particular image, you can search for it with syntax like this:

az vm image list --publisher Canonical --sku gen1 --output table --all

Type az login in the terminal to establish your credentials

az login

Initialize the directory

terraform init

Now terraform plan

terraform plan

Note: Take a look at the Terraform Plan and see the 8 resources that we are creating. While not mandatory, it’s good practice to ‘Terraform Plan’ to review your changes BEFORE deploying.

Now terraform apply

terraform apply

It will output the public IP address. Just SSH into it with the credentials

(ssh azureuser@<ipaddress>)

Username: azureuser
Password: CyberNOW!

And install the honeypot

env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/install.sh)"
Select the "Hive" install when prompted.

sudo reboot (when finished)

Note: The installation script changes the port SSH listens on, so if you want to SSH to the machine afterwards you have to use this syntax: ssh azureuser@<ip address> -p 64295

You can now login to the honeypot web interface via

https://<ipaddress>:64297

See how much easier this is than configuring it manually? This blog series won’t go into detail about how to create a Terraform configuration from scratch, but at this point you understand the basic Terraform lifecycle, its application, and what it’s used for.

I recommend picking up a Udemy course on the Terraform Associate exam and spending the next couple of days studying for it. The exam itself isn’t very costly, and the certificate makes great wall art.

When you are finished with T-Pot, make sure you aren’t charged anything further: use the “terraform destroy” command to remove everything you did in one swoop. Easy peasy. Join us next in this series as we conduct automated scans of Terraform files for configuration issues using the open source tool Checkov.

Part Five

Checkov is a static code analysis tool for scanning infrastructure as code (IaC) files for misconfigurations that may lead to security or compliance problems. Checkov includes more than 750 predefined policies to check for common misconfiguration issues. Checkov also supports the creation and contribution of custom policies.

Supported IaC types

Checkov scans these IaC file types:

  • Terraform (for AWS, GCP, Azure and OCI)
  • CloudFormation (including AWS SAM)
  • Azure Resource Manager (ARM)
  • Serverless framework
  • Helm charts
  • Kubernetes
  • Docker

This lab shows how to install Checkov, run a scan, and analyze the results.

Install Pip3 and Python

pip3 is the official package manager and pip command for Python 3. It enables the installation and management of third-party software packages with features and functionality not found in the Python standard library. pip3 installs packages from PyPI (the Python Package Index).

You can get it by installing the latest version of Python from python.org.

Install Checkov From PyPI Using Pip

pip3 install checkov

Make Terraform Directory and Move There

mkdir ~/checkov-example
cd ~/checkov-example

Create main.tf file with VS Code

code main.tf

Paste Code into File, Save, then Exit

resource "aws_s3_bucket" "foo-bucket" {
  # intentionally misconfigured: the bucket ACL is set for public read access
  acl = "public-read"
}

data "aws_caller_identity" "current" {}

Format the file

terraform fmt

Execute Checkov

Make sure you’re in the directory that your Terraform is in.

checkov -f main.tf

Results

It’s that simple. As you can see, Checkov runs and reports 8 failed checks, including public read access being enabled. If you click on a check’s link, it will take you to a guide that explains the failure in more detail and teaches you how to fix it.

Checkov checks for common configuration and security errors in your Terraform code BEFORE you deploy it. Any time you download a Terraform script to execute in your environment, you will want to run Checkov to make sure it meets your standards for configuration.

In the next blog, wrapping up this series, we will be checking a Terraform configuration file for issues with Checkov, deploying it to Azure, and using the open source tool Prowler to perform a security best practices assessment of your Azure environment. The generated report can be presented to small and medium-sized businesses along with your recommendations for remediation.

You will then be able to create a gig on Fiverr, Upwork, or the like and conduct low-cost cloud security assessments. And remember to continue your education to pass the Terraform Associate exam.

Are you ready to wrap this up? In Part Six we are going to put everything together and generate a report on their cloud security posture that can be presented to small and medium-sized businesses. First, we are going to analyze Terraform code with Checkov. So let’s do that.

Make Terraform Directory and Move There

mkdir ~/wrappingup
cd ~/wrappingup

Create main.tf file with VS Code

code main.tf

Paste Code into File, and Save

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.90.0"
    }
  }
}

provider "azurerm" {
  # Configuration options
  features {}
}

variable "prefix" {
  default = "tpot"
}

resource "azurerm_resource_group" "tpot-rg" {
  name     = "${var.prefix}-resources"
  location = "East US"
}

resource "azurerm_virtual_network" "main" {
  name                = "${var.prefix}-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.tpot-rg.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_virtual_machine" "main" {
  depends_on            = [azurerm_resource_group.tpot-rg]
  name                  = "${var.prefix}-vm"
  location              = azurerm_resource_group.tpot-rg.location
  resource_group_name   = azurerm_resource_group.tpot-rg.name
  network_interface_ids = [azurerm_network_interface.tpot-vm-nic.id]
  vm_size               = "Standard_A2m_v2"

  # Delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  # Delete the data disks automatically when deleting the VM
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "canonical"
    offer     = "ubuntu-24_04-lts"
    sku       = "minimal-gen1"
    version   = "latest"
  }

  storage_os_disk {
    name              = "tpot-disk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "azureuser"
    admin_password = "CyberNOW!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

# Create Security Group to access linux
resource "azurerm_network_security_group" "tpot-nsg" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "linux-vm-nsg"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name

  security_rule {
    name                       = "AllowALL"
    description                = "AllowALL"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "AllowSSH"
    description                = "Allow SSH"
    priority                   = 150
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }
}

# Associate the linux NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "tpot-vm-nsg-association" {
  depends_on                = [azurerm_resource_group.tpot-rg]
  subnet_id                 = azurerm_subnet.internal.id
  network_security_group_id = azurerm_network_security_group.tpot-nsg.id
}

# Get a Static Public IP
resource "azurerm_public_ip" "tpot-vm-ip" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "tpot-vm-ip"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name
  allocation_method   = "Static"
}

# Create Network Card for linux VM
resource "azurerm_network_interface" "tpot-vm-nic" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "tpot-vm-nic"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.tpot-vm-ip.id
  }
}

output "public_ip" {
  value = azurerm_public_ip.tpot-vm-ip.ip_address
}

Format the file

terraform fmt

Execute Checkov

Make sure you’re in the directory that your Terraform is in.

checkov -f main.tf

Results

We have seven failed checks. Looking through the list, Checkov is warning us about things we configured deliberately, like ports that are exposed to the public internet. Since this is the honeypot we configured in Part Four, we know that this works and that this is how it needs to be configured to work properly.
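
If you want the scan to come back clean while documenting why each finding is accepted, Checkov supports inline suppressions in the Terraform file itself. This is a sketch: the check ID below is a placeholder you would copy from Checkov’s own output, and the rest of the resource stays exactly as written above.

resource "azurerm_network_security_group" "tpot-nsg" {
  # checkov:skip=CKV_AZURE_XXX: honeypot is intentionally exposed to the internet
  name                = "linux-vm-nsg"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name
  # ... security rules unchanged ...
}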

So let’s go ahead and deploy this to Azure.

Type az login in the terminal to establish your credentials if they aren’t cached already.

az login

Initialize the directory

terraform init

Now terraform plan

terraform plan

Note: Take a look at the Terraform Plan and see the 8 resources that we are creating. While not mandatory, it’s good practice to ‘Terraform Plan’ to review your changes BEFORE deploying.

Now terraform apply

terraform apply

Make sure you destroyed the honeypot deployment from Part Four first, so the resource names don’t conflict when you deploy this project again.

Part Six

Now we’re getting into new stuff. Prowler is an open source security tool to perform AWS, Azure, Google Cloud, and Kubernetes security best practices assessments, audits, incident response, continuous monitoring, hardening, and forensics readiness, and it also includes remediations! The Prowler CLI (Command Line Interface) is what we call Prowler Open Source.

You can install Prowler using pip3, like we did with Checkov in Part Five. So let’s do that.

pip3 install prowler

and then we run Prowler

prowler azure --az-cli-auth

The results are displayed on your screen and also exported to your ‘output directory’

I like to view the HTML file and then use an online HTML-to-JPG or HTML-to-PDF converter. Our environment is new, so the report doesn’t show much beyond recommending we turn on Microsoft Defender for resource types we do not currently have deployed. Using Prowler is very simple; the value you add as a freelancer is discerning the results and narrowing them down to what is useful and actionable for the business.

Do not just hand them this report and be done with it. They will be unhappy. Instead, write specific recommendations in your own report, with your own template, with step-by-step instructions on how to fix each issue that is important to them.

And that wraps up the Cloud Security Jobs series but stick around for one BONUS as we discuss Serverless computing.

Tyler Wall is the founder of Cyber NOW Education. He holds bills for a Master of Science from Purdue University and also CISSP, CCSK, CFSR, CEH, Sec+, Net+, and A+ certifications. He mastered the SOC after having held every position from analyst to architect and is the author of three books, 100+ professional articles, four online courses, and regularly holds webinars for new cybersecurity talent.

You can connect with him on LinkedIn.

To view my dozens of courses, visit my homepage and watch the trailers!

Become a Black Badge member of Cyber NOW® and enjoy all-access for life.

Check out my latest book, Jump-start Your SOC Analyst Career: A Roadmap to Cybersecurity Success, winner of the 2024 Cybersecurity Excellence Awards.
