Sunday, November 21, 2021

Continuous Integration for Infrastructure as Code using Azure DevOps, Terraform & Docker

 How DevOps & Infrastructure as Code can supercharge your cloud deployments

Most companies aspire to release their products and services faster and more reliably, and agile IT systems and solutions are an essential ingredient.

However, the reality is that many companies manage IT infrastructure using a combination of manual effort, complicated processes and a bit of hope and prayer.

As an Azure infrastructure architect with Telstra Purple, I have the opportunity of helping a diverse range of organisations automate and streamline their cloud solutions.

Across these engagements, one of the biggest factors holding teams back has been a lack of automation within their cloud infrastructure stack.

The application teams had begun to implement DevOps methodologies; however, without automated infrastructure, application deployment and testing still required a lot of manual, repeated effort.

Quite simply, they had continuous integration but not continuous delivery.

This blog describes four key components required to power a fully automated, modern and streamlined infrastructure as code continuous integration and delivery pipeline.

Core Components

Infrastructure as Code (IaC)

One of the most critical but often overlooked components allowing customers to effectively implement DevOps and automate cloud systems is infrastructure as code.
IaC describes infrastructure in a code-based format which is stored in a version control system alongside existing application source code and deployed using continuous delivery and deployment pipelines.
Infrastructure as Code provides many benefits including…

  • Tight coupling between applications and their execution environments.
  • Code testing of infrastructure in the form of unit testing, functional testing and integration testing.
  • Application and infrastructure scalability.
  • Reduced code duplication.
  • More flexibility with disaster recovery solutions.
  • Immutable infrastructure preventing configuration drift.
  • Simplified change management (more standard changes).
  • Deployment speed, reducing a project's time to market.
  • More efficient use of IT staff time and resources.
  • Shorter and tighter development and testing feedback loops.
  • Cost savings, as infrastructure is shut down when not in use.

Docker Containers
A Docker image is a lightweight, standalone, executable package of software which includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
Docker makes it easy to create, deploy, and run portable, isolated and self-contained units of application code.

Executing infrastructure code inside containers allows developers to package up self-executing infrastructure modules, including their dependencies and libraries, and ship them as self-contained units that can be rapidly configured, deployed and updated.

Versioned Modules
Packaging infrastructure and dependencies inside containers becomes especially flexible when a solution-focused, modularised approach is applied to infrastructure code. Containers become infrastructure building blocks that can be connected and arranged in flexible configurations.

Containers are versioned and tagged making them easy to reference and reconfigure.

Continuous Delivery
Integrating modularised infrastructure as code containers into a CI/CD system such as Azure DevOps provides the ability to tightly couple application code and the dependent underlying infrastructure into an automated end to end deployment solution.

Building a CI/CD Pipeline

Now it’s time to configure these core components into a continuous integration and delivery infrastructure as code pipeline.

Create a Terraform execution environment Docker container

Terraform relies on a number of core providers and modules that change frequently. Keeping these up to date on each developer's PC is time consuming and can cause inconsistencies.

A Docker image containing all of the necessary deployment utilities including Terraform, environment variables and other configurations is a fundamental component of a continuous delivery pipeline.

Step 1. Install Docker Desktop
Docker can be run on any machine type. For this example, I'll be using a Mac.
1. Download Docker from https://hub.docker.com/editions/community/docker-ce-desktop-mac/
2. Save the Docker.dmg file and launch it.
3. Docker will then be available to run in the terminal.

Step 2. Create a Terraform execution environment Dockerfile

A Dockerfile describes the configuration of a Docker image. This sample code builds a Docker image including the Terraform bundle. Ensure a file called terraform-bundle.hcl sits alongside the Dockerfile.
terraform-bundle is a Go-based tool for creating a Terraform installation package that incorporates the providers and modules configured in the terraform-bundle.hcl file.

# build step to create a Terraform bundle per our included terraform-bundle.hcl
FROM golang:alpine AS terraformbundler
ENV TERRAVER=v0.12.16
RUN apk --update add git unzip openssh-client && \
    go get -d -v github.com/hashicorp/terraform && \
    git -C ./src/github.com/hashicorp/terraform checkout $TERRAVER && \
    go install ./src/github.com/hashicorp/terraform/tools/terraform-bundle
COPY terraform-bundle.hcl .
RUN terraform-bundle package -os=linux -arch=amd64 terraform-bundle.hcl && \
    mkdir -p terraform-bundle && \
    unzip -d terraform-bundle terraform_*.zip

What is the code actually doing? 

  • Pulls a base Alpine Golang image from Docker Hub and labels it as terraformbundler.
  • Adds the git, unzip and openssh-client utilities using the apk package manager.
  • Downloads the Terraform source and checks out the version set in TERRAVER.
  • Uses Go to install the terraform-bundle tool.
  • Copies the local terraform-bundle.hcl file into the current folder inside the container.
  • Runs terraform-bundle package and unzips the contents into a terraform-bundle folder.
  • terraform-bundle.hcl – this file defines the Terraform version and modules installed inside the Docker image.
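
For reference, a minimal terraform-bundle.hcl might look like the sketch below. The provider names and version constraints are illustrative assumptions; the Terraform version should match the TERRAVER value set in the Dockerfile.

# create an illustrative terraform-bundle.hcl next to the Dockerfile
cat > terraform-bundle.hcl <<'EOF'
terraform {
  # must match the TERRAVER checked out in the Dockerfile
  version = "0.12.16"
}

providers {
  # providers baked into the image so terraform init does not need to download them
  azurerm = ["~> 1.44"]
  random  = ["~> 2.2"]
}
EOF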

Step 3. Create a container registry to store the container

A container registry is used to store the Docker image. In this example Azure Container Registry (ACR) is used, but Docker Hub could also be used.

Create Azure container registry

az acr create --resource-group myResourceGroup --name iacrepo --sku Basic

Login to container registry

az acr login --name iacrepo
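
To double-check the registry and confirm the login server name used when tagging images, the following query can help (illustrative):

az acr show --name iacrepo --query loginServer --output tsv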

 Step 4. Build a Dockerfile into an image

Once written, the Dockerfile needs to be built into a Docker image.

The -t flag tags the image using the standard repository/image format, and the . signifies the build context containing the Dockerfile. Be sure to prefix the tag with the fully qualified name of the container registry.

docker build -t iacrepo.azurecr.io/terraform-exec-env .

Step 5. Push container to ACR

docker push iacrepo.azurecr.io/terraform-exec-env
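
To confirm the push succeeded, the repositories and tags in the registry can be listed. The commands below are a sketch using the names from this example:

az acr repository list --name iacrepo --output table
az acr repository show-tags --name iacrepo --repository terraform-exec-env --output table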

Developing infrastructure using the Terraform Execution Environment

Once the execution environment container has been built, it must be integrated into the infrastructure code development process.

There are three main ways to inject source code into a container.

  1. Using the ADD command in the Dockerfile to add a folder at build time.
  2. Using the Docker cp (copy) command to inject your source code folder into a running container.
  3. Mounting a source code repository as a volume inside the Docker container.

Mounting a volume is the most sensible and flexible method for this approach.

Terraform code is usually developed in an IDE outside the container, but always executed inside the container.

Step 1. Run Dockerfile and mount code repository as a volume

The following command creates a running container incorporating all the necessary Terraform modules and the mounted repo and launches an interactive bash shell session.

docker run -it -v "/Users/username/repo:/home/repo" iacrepo.azurecr.io/terraform-exec-env /bin/bash

Step 2. Run Terraform inside Docker container

The following code:

  1. Changes to the directory of the mounted volume code repository
  2. Sets execution permissions on the terraform executable file (which may be necessary)
  3. Initialises the Terraform project

cd /home/repo
chmod +x /go/terraform-bundle/terraform
/go/terraform-bundle/terraform init
/go/terraform-bundle/terraform plan
/go/terraform-bundle/terraform apply

 It makes sense to add terraform to the system path so it can be called without specifying the full path.
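
A minimal sketch of doing this inside the running container, using the bundle location from the image built above:

# add the bundled terraform binary to the PATH for the current shell session
export PATH="$PATH:/go/terraform-bundle"
# terraform can now be called directly
terraform version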

Incorporating continuous delivery

Managing infrastructure code using the same tools and practices as traditional software development helps integrate the previously separate development and operations teams.

Infrastructure as code allows continuous delivery to be incorporated into continuous integration.

Application code can now be automatically deployed onto the necessary infrastructure each time it is compiled and built.

To facilitate this process, each Terraform project needs its own custom Dockerfile (separate from, but built upon, the Terraform execution environment).

Step 1. Create a custom project specific Dockerfile

The Dockerfile, located in the Terraform project root, builds a container with the latest source code and executes a plan or apply along with input variables, passed as an argument at run time.

This creates a versatile and flexible infrastructure container which is integrated into Azure DevOps. For those familiar with Jenkins, it performs a similar function to a Jenkinsfile.

# Pull the base terraform execution environment image from the Azure container registry.
FROM iacrepo.azurecr.io/terraform-exec-env
# build argument used to pass and customise the Terraform command (defaults to plan)
ARG terraform_cmd=plan
# persist the build argument as an environment variable so it is still available at run time
ENV TF_CMD=${terraform_cmd}
# make the folder the source code will be injected into
RUN mkdir -p /home/repo/
# set the working directory to this location
WORKDIR "/home/repo/"
# inject source code into the container during build
COPY ./ /home/repo/
# initialise Terraform using a remote backend.
# Azure connection details for the test subscription are stored in uat.tfvars
RUN /go/terraform-bundle/terraform init -backend-config=uat.tfvars && \
    /go/terraform-bundle/terraform validate
# run terraform automatically when the container starts, expanding the
# command (plan, apply, etc.) from the environment variable set above
ENTRYPOINT ["/bin/sh", "-c", "/go/terraform-bundle/terraform ${TF_CMD}"]
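
Before wiring the image into a pipeline it can be built and exercised locally. This is only a sketch: the image tag is illustrative, and it assumes the uat.tfvars file referenced above also holds the input variables for the plan.

# bake the terraform command into the image at build time
docker build --build-arg terraform_cmd="plan -var-file=uat.tfvars" -t iacrepo.azurecr.io/my-tf-project:1.0.0 .
# running the container now executes the baked-in terraform plan
docker run iacrepo.azurecr.io/my-tf-project:1.0.0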

Continuous integration and delivery for infrastructure code

Developing infrastructure code should follow the same methods and best practices as application code.

Applying continuous integration and delivery principles to infrastructure solutions ensures that code is regularly built, inspected and tested, and allows multiple developers to contribute to the same solution.

After code is checked into a developer branch a pull request is raised to merge into a test branch.

A developer should paste the output of the Terraform plan into the pull request so it can be inspected by a reviewer.

Once the pull request is reviewed and approved, code is merged and a Terraform plan is run, the output of which can be compared to the expected results.

Once the CI build has completed successfully a Terraform release pipeline is queued. A gate can be configured to require approval before the release pipeline is executed and built.

Approval of the release pipeline build is subject to a successful code merge and acceptance of the Terraform plan.

The release pipeline is similar to the build pipeline except that a Terraform apply instead of plan is run.

Once the release pipeline has completed executing, infrastructure is built and can be inspected and tested.

The following section documents the Azure DevOps configuration steps necessary to configure continuous integration and delivery as described.
 

Step 1. Create a continuous integration build pipeline

The build is triggered once a pull request to merge branches is raised and accepted. The pipeline builds the Dockerfile into an image containing the Terraform project code and pushes it to Azure Container Registry.

There is a final step which outputs the results of a Terraform plan. This is useful for comparing against the plan output the developer provided in the pull request.

The arguments section provides Terraform with command line arguments, in this case plan with the path to the input variable file.
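
In script form, the build stage boils down to something like the following sketch. The image name is illustrative, and BUILD_BUILDID is the environment variable Azure Pipelines exposes for the build number.

# log in to the registry from the build agent
az acr login --name iacrepo
# build the project image; terraform init and validate run during the build
docker build --build-arg terraform_cmd="plan -var-file=uat.tfvars" -t iacrepo.azurecr.io/my-tf-project:$BUILD_BUILDID .
# push the versioned image so the release pipeline can reuse it
docker push iacrepo.azurecr.io/my-tf-project:$BUILD_BUILDID
# run the container so the terraform plan output appears in the build log
docker run iacrepo.azurecr.io/my-tf-project:$BUILD_BUILDID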

Step 2. Set up a build validation branch policy

Enabling build validation ensures that the Docker build triggered by a pull request must succeed before the changes from the dev branch are merged into the test branch.

Step 3. Create a continuous delivery release pipeline

Once a successful Docker build and ACR push has occurred, the infrastructure can be applied and built.

A continuous integration trigger must be enabled to ensure the release pipeline is run after a successful build. An approval gate can be added so that the deployment must be approved before it is executed.

The approver is usually the person who accepted and approved the initial dev to test pull request. 

The argument section in the release pipeline passes a Terraform apply instead of a plan.
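
Given the Dockerfile sketched earlier, one way the release stage can switch from plan to apply without rebuilding is to override the TF_CMD variable at run time. The commands below are illustrative only.

# pull the image produced by the CI build
docker pull iacrepo.azurecr.io/my-tf-project:$BUILD_BUILDID
# override the baked-in plan with an apply at run time
docker run -e TF_CMD="apply -var-file=uat.tfvars -auto-approve" iacrepo.azurecr.io/my-tf-project:$BUILD_BUILDID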

 

The final end to end workflow

Once the core components are in place the new infrastructure development process looks like this.

1. An infrastructure developer runs a bash shell inside the containerised execution environment and mounts their Terraform project code as a volume.

2. Terraform code is always run from inside the container, which ensures that all developers are creating and testing code in identical environments.

3. When code is ready for testing and branch merging, a pull request is raised. The container is tagged with a version number which allows specific infrastructure configurations to be easily referenced.

4. The developer copies the output of a terraform plan command into the pull request, which allows an approver to understand what a particular pull request will achieve.

5. Once the pull request is accepted, an Azure DevOps CI pipeline builds a container, executes a Terraform plan command and pushes the container into an Azure Container Registry (ACR).

6. After the plan is inspected, compared, and accepted, a release pipeline is approved to perform a Terraform apply which deploys the infrastructure into a test environment.

7. Once the infrastructure deployment is successfully reviewed and tested, it can be confidently integrated into existing application continuous delivery pipelines.

Conclusion

Continuous integration and delivery for infrastructure code has proved very beneficial to our customers as application and infrastructure teams now subscribe to the same methodologies, paving the way for tighter integration and more seamless DevOps adoption.

Time to market and reliability of cloud infrastructure deployments have significantly improved.

If your business would like to take advantage of the latest cloud technologies or has been struggling with persistent challenges, please reach out and set up a complimentary strategy session with one of our specialist consultants. 


Deploy to Azure Container Instances with Docker Desktop

 

Today we’re excited about the first release of the new Docker Desktop integration with Microsoft Azure. Last month Microsoft and Docker announced this collaboration, and today you can experience it for yourself.

The new edge release of Docker Desktop provides an integration between Docker and Microsoft Azure that enables you to use native Docker commands to run your applications as serverless containers with Azure Container Instances.

You can use the Docker CLI to quickly and easily sign into Azure, create a Container Instances context using an Azure subscription and resource group, then run your single-container applications on Container Instances using docker run. You can also deploy multi-container applications to Container Instances that are defined in a Docker Compose file using docker compose up.

Code-to-Cloud with serverless containers

Azure Container Instances is a great solution for running a single Docker container or an application comprised of multiple containers defined with a Docker Compose file. With Container Instances, you can run your containers in the cloud without needing to set up any infrastructure and take advantage of features such as mounting Azure Storage and GitHub repositories as volumes. Because there is no infrastructure or platform management overhead, Container Instances caters to those who need to quickly run containers in the cloud.

Container Instances is also a good target to run the same workloads in production. In production cases, we recommend leveraging Docker commands inside of an automated CI/CD flow. This saves time having to rewrite configuration files because the same Dockerfile and Docker Compose files can be deployed to production with tools such as GitHub Actions. Container Instances also has a pay-as-you-go pricing model, which means you will only be billed for CPU and memory consumption per second, only when the container is running.

Let’s look at the new Docker Azure integration using an example. We have a worker container that continually pulls orders off a queue and performs necessary order processing. Here are the steps to run this in Container Instances with native Docker commands:

Create and run a container in Azure Container Instances using the Docker CLI

Run a single container

As you can see from the above animation, the new Docker CLI integration with Azure makes it easy to get a container running in Azure Container Instances. Using only the Docker CLI you can log in to Azure with multi-factor authentication and create a Docker context using Container Instances as the backend. Detailed information on Container Instances contexts can be found in the documentation.

Once the new Container Instances context is created it can be used to target Container Instances with many of the standard Docker commands you likely already use; like docker run, docker ps, and docker rm. Running a simple docker run <image> command will start a container in Container Instances using the image that is stored in a registry like Docker Hub or Azure Container Registry. You can run other common Docker commands to inspect, attach-to, and view logs from the running container.
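
As a rough sketch (the context name and image are placeholders), the flow looks like this:

# authenticate to Azure and create a context backed by Azure Container Instances
docker login azure
docker context create aci myacicontext
docker context use myacicontext
# start the worker container in ACI straight from a registry image
docker run -d --name orderworker myregistry.azurecr.io/order-worker:latest
# the usual management commands work against the ACI context
docker ps
docker logs orderworker
docker rm orderworker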

Use Docker Compose to deploy a multi-container app

We see many containerized applications that consist of a few related containers. Sidecar containers often perform logging or signing services for the main container. With the new Docker Azure integration, you can use Docker Compose to describe these multi-container applications.

You can use a Container Instances context and a Docker Compose file as part of your edit-build-debug inner loop, as well as your CI/CD flows. This enables you to use docker compose up and down commands to spin up or shut down multiple containers at once in Container Instances.
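
For example, from a folder containing a docker-compose.yml (a sketch; the context name is a placeholder):

docker context use myacicontext
# deploy all services in docker-compose.yml as a container group in ACI
docker compose up
# tear the whole group down again
docker compose down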

Visual Studio Code for an even better experience

The Visual Studio Code Docker extension provides you with an integrated experience to start, stop, and manage your containers, images, contexts, and more. Use the extension to scaffold Dockerfiles and Docker Compose files for any language. For Node.js, Python, and .NET, you get integrated, one-click debugging of your app inside the container. And then of course there is the Explorer, which has multiple panels that make the management of your Docker objects easy from right inside Visual Studio Code.

Use the Containers panel to list, start, stop, inspect, view logs, and more.


From the Images panel you can list, pull, tag, and push your images.

Connect to Azure Container Registry and Docker Hub in the Registries panel to view and manage your images in the cloud. You can even deploy straight to Azure.


The Contexts panel lets you list all your contexts and quickly switch between them. When you switch context, the other panels will refresh to show the Docker objects from the selected context. Container Instances contexts will be fully supported in the next release of the Docker extension.


Try it out

To start using the Docker Azure integration, install the Docker Desktop edge release. You can use the current Visual Studio Code Docker extension today; Container Instances context support will be added very soon.

To learn more about the Docker Desktop release, you can read this blog post from Docker. You can find more information in the documentation for using Docker Container Instances contexts.

Performing a Pen Test after each Deployment using OWASP ZAP, Azure Container Instances, and Azure DevOps

 


OWASP Zed Attack Proxy (ZAP) is an open source tool for performing pen testing on web applications and APIs. Pen testing a web application helps ensure that there are no security vulnerabilities hackers could exploit. OWASP ZAP can be installed as a client application or comes preconfigured in a Docker container. The container option is a great solution for incorporating pen testing into your DevOps practices and software delivery pipeline, so that a pen test is performed on each deployment of your application.

In Azure, there are several options for running containers. These include Azure Container Service (ACS), Azure Kubernetes Service (AKS), and Azure Container Instances (ACI). I originally wrote a script that used a Docker Swarm cluster in ACS, but this required virtual machines always running in the background. AKS is a fully managed Kubernetes service, but Kubernetes provides a ton of features that I didn't really need for this deployment. ACI provides a consumption-based option for using containers, which makes it the perfect tool to spin up the container, run the scan, and discard the container after it completes. If you're looking for professional penetration testing services, have a look into the penetration testing cost to better understand what you're paying for.

The solution for running the pen test includes a PowerShell script that creates the Azure resources in a resource group and executes the scan. There is also a .NET console app that is used to create the bugs and attach the OWASP report in Azure DevOps. The solution has been posted on GitHub. Please reach out with an issue for any questions or if you have any problems.

https://github.com/Deliveron/owasp-zap-vsts-extension

I’m using Azure Pipelines to execute the OWASP ZAP pen test against the application after it has been deployed. I do this by executing a custom PowerShell script along with a command line utility that updates Azure DevOps with the scan results and creates bugs for any issues found, providing actionable work for the developers to track fixing the issues.

The PowerShell script uses a new or existing resource group and the target location to create the ACI resource, attached to a storage account for retrieval of the reports. Once the scan completes, the reports are attached and bugs are created, and the ACI and storage account resources are deleted.

https://github.com/Deliveron/owasp-zap-vsts-extension/blob/master/scripts/Invoke-OwaspZapAciBaseline.ps1

The command line utility attaches the OWASP ZAP report and creates the bugs in Azure DevOps. It will need to be compiled and included as an artifact in your release definition.

Use a command line task to execute the following commands. Here are the settings for attaching the report. Be sure to modify them to include your organization, team project, and personal access token.

Tool:
$(System.DefaultWorkingDirectory)/owasp-zap/drop/owasp-zap-vsts-tool/bin/Release/owasp-zap-vsts-tool.exe

Arguments:
attachreport collectionUri="https://youraccount.visualstudio.com" teamProjectName="Showcase" releaseUri=$(Release.ReleaseUri) releaseEnvironmentUri=$(Release.EnvironmentUri) filepath=$(System.DefaultWorkingDirectory)\testreport.html personalAccessToken="123456789"

Here are the command line task settings for creating the bugs. Be sure to replace the organization, team project, team, target URL, and personal access token.

Tool:
$(System.DefaultWorkingDirectory)/owasp-zap/drop/owasp-zap-vsts-tool/bin/Release/owasp-zap-vsts-tool.exe

Arguments:
createbugfrompentest collectionUri="https://youraccount.visualstudio.com" teamProjectName="Showcase" team="Showcase Team" releaseUri=$(Release.ReleaseUri) releaseEnvironmentUri=$(Release.EnvironmentUri) filepath=$(Agent.ReleaseDirectory)\issues.xml prefix="No WAF" targetUrl="https://mywebsite.azurewebsites.net" failOnHigh=False personalAccessToken="123456789"

I’m actively working on a full-fledged Azure DevOps extension so you can more easily install it in your Azure DevOps instance and not have to compile the command line project. Let me know how it works for you.

Saturday, November 20, 2021

Integrating security testing into an Azure DevOps pipeline – OWASP ZAP

 


One of the most effective ways of enhancing the security posture of a solution is to incorporate security into the development lifecycle and embed it within the normal CI/CD pipelines of a project. As a CI/CD pipeline should complete within only a few minutes, testing at this stage should be kept minimal and focused on identifying critical issues, avoiding long-running processes in the pipeline. To ensure a complete assessment, a regular nightly process should perform a more thorough assessment of the solution.

In this post I am going to look at the passive pen test stage of the CI/CD pipeline; I will cover the other stages in another post. The tool I normally choose for penetration testing is OWASP ZAP. OWASP is a worldwide not-for-profit organization dedicated to helping improve the quality of software. The Zed Attack Proxy (ZAP) is a free penetration testing tool suitable for beginners through to professionals. ZAP includes an API and a weekly Docker container image that can be integrated into your deployment process.

 

There is a set of scripts available from https://github.com/Deliveron/owasp-zap-vsts-extension which help with the tasks in this pipeline and make it a fairly simple process to integrate into your own pipeline. Unfortunately these scripts use the old AzureRM PowerShell module, so I have updated the baseline script to use the newer Az module. You can download a copy here.

The entire stage consists of 3 tasks:

  1. ACI Pentest baseline
  2. Transform the results
  3. Publish the test results

 

Step 1 – run the baseline scan

The first task runs the PowerShell script Invoke-OwaspZapAciBaseline.ps1. This script configures a resource group and storage account, downloads the latest OWASP ZAP container image and runs it within Azure Container Instances. It then initiates a baseline scan of the target system, retrieves the test results and destroys the resources.
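
At its core the script automates something along the lines of the sketch below. The names, location and target URL are illustrative, and the real script also attaches a storage account file share so the report file can be retrieved.

# create a throwaway resource group and run the weekly ZAP image as a container instance
az group create --name zap-rg --location australiaeast
az container create --resource-group zap-rg --name zap-baseline \
  --image owasp/zap2docker-weekly --restart-policy Never \
  --command-line "zap-baseline.py -t https://mywebsite.azurewebsites.net -x OWASP-ZAP-Report.xml"
# review the scan output, then clean up
az container logs --resource-group zap-rg --name zap-baseline
az group delete --name zap-rg --yes --no-wait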

 

Step 2 – Transform the report

In order to make use of the output from the scan, it needs to be transformed into a supported format using an XSLT. This is again done via a short PowerShell script.

This uses the XSLT (OWASPToNUnit3.xslt) and PowerShell script (Transform.ps1) contained in the resource zip file available here.

 

Step 3 – publish the test results

The third and final step in the pipeline is to publish the results of the test to Azure DevOps; this can be done using the native Publish Test Results task.

 

You can view the number of identified issues from the release screen.

Clicking on this will then take you to a list of the issues, and from here you can either view further details or choose to raise a bug within the backlog for the identified issue.
