Sunday, August 8, 2021

Create a Multi-Stage YAML CI/CD pipeline for deploying database changes using Maven, Liquibase and Azure DevOps

 

Overview of Liquibase

Liquibase is an open-source Java-based library which can be used to define, deploy and manage database changes against all major database management systems. It is covered in more detail here. In the mentioned post, we discussed the basic concepts of Liquibase and demonstrated how it works. We also discussed the setup and configuration of Liquibase in this post. You need to be familiar with both before moving on.

Creating Liquibase Specific configuration

Create the liquibase.properties file

The starting point for Liquibase is the liquibase.properties file, which is used to store Liquibase configuration settings and database connection information. By default Liquibase searches for a file named liquibase.properties, but the file can have any name. Below is a sample:

driver: org.postgresql.Driver
classpath: ./lib/postgresql-42.2.5.jar
url: jdbc:postgresql://localhost:5432/sampledb
username: adminUsername
password: adminUserPassword
changeLogFile: src/main/resources/db.changelog.xml

In the above file, driver is the JDBC driver class for the target database system, classpath is the file path where the JDBC driver can be found, url is the connection URL, and username and password are the authentication credentials for the target database system.

We’ll replace the above values with the appropriate values for the target database.

Specifying database changes using changeLogFile file

Note that almost all the values in the liquibase.properties file describe the target database. So how do we define the database changes themselves? That is done in the src/main/resources/db.changelog.xml file, which is called the changelog file. The changeLogFile contains the changesets, or references to the target SQL changes, that need to be executed. The contents of the changeLogFile typically look like this:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:ext="http://www.liquibase.org/xml/ns/dbchangelog-ext"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.0.xsd
    http://www.liquibase.org/xml/ns/dbchangelog-ext http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-ext.xsd">
    <include file="src/main/resources/issue-8534/create_table_link.sql"/>
    <include file="src/main/resources/issue-8534/insert_link.sql"/>
    <include file="src/main/resources/issue-8534/alter_table_link.sql"/>
</databaseChangeLog>

In the above code, the databaseChangeLog tag determines the schema version for Liquibase. Each include tag references a SQL file containing database code to execute. The file paths may or may not be relative to the location of the changelog file; if a path is relative to the changelog's location, we need to add an attribute named relativeToChangelogFile to the include tag and set it to true, as in the sketch below.
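
For example, a hedged sketch of an include entry resolved relative to the changelog's own location (the file name is taken from the changelog above):

<include file="issue-8534/create_table_link.sql" relativeToChangelogFile="true"/>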

Defining Database Changes

Database changes can be specified in four formats: XML, JSON, YAML and native SQL. All of these options are discussed in this blog post. Native SQL is the preferred way, since there is no extra learning required and it is more flexible than the other formats. The general structure of a native SQL file is as below:
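
(A minimal sketch, modeled on the create_table_link.sql change described later in this post; the author name, changeset id and column definitions are illustrative.)

--liquibase formatted sql

--changeset jdoe:create-table-link
CREATE TABLE link (
    id SERIAL PRIMARY KEY,
    url VARCHAR(255) NOT NULL
);
--rollback DROP TABLE link;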

The first line tells Liquibase that we want to use native SQL for specifying changes. The next line defines a changeset and mentions the author name and the id of the changeset: the part before the ‘:’ is the author and the part after the ‘:’ is the name of the changeset itself. A changeset is the unit of execution for a change. It makes a lot of sense to use meaningful names and naming patterns here, e.g. including the release information in the names of the changesets. Then we specify the actual SQL to be used for the change.

One really nice feature is the possibility to define rollback statements. If used properly, it allows you to return to an earlier state of the database. It should be noted that a Liquibase-formatted SQL file can contain several SQL statements and thus several rollback statements. In our case, if we need to roll back our change, the last line contains the rollback statement to be used. More details on rollback are covered in this post.

In our case, we have defined three change files: create_table_link.sql (to create a table named link), insert_link.sql (to insert records into the table) and alter_table_link.sql (to add an extra column to the table).

Maven Integration with Liquibase

Since Liquibase itself is a Java library, it makes sense to base the directory structure on a build tool or framework such as Maven, Spring Boot or Gradle. We'll be using Maven for this post and basing the directory structure on it. For the time being, let's assume we already have a Maven project and that all Liquibase project files are stored in the “src/main/resources/” directory; the resulting layout is sketched below.
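
Based on the paths used throughout this post, the project layout looks roughly like this (the scripts and assembly directories appear in the packaging step further below):

pom.xml
scripts/
src/assembly/assembly.xml
src/main/resources/liquibase.properties
src/main/resources/db.changelog.xml
src/main/resources/issue-8534/create_table_link.sql
src/main/resources/issue-8534/insert_link.sql
src/main/resources/issue-8534/alter_table_link.sql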

To use Liquibase, we add it as a dependency in our POM file. The most recent releases can be found here. Below is the code for the same:

..
<dependencies>
..
  <!-- Liquibase -->
  <dependency>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-core</artifactId>
    <version>3.3.0</version>
  </dependency>
</dependencies>
..

We also need to add the JDBC driver for the target database, since we cannot assume it will be present on the build/release agent. Since we are targeting PostgreSQL in our case, below is the code for the same:

..
<dependencies>
..
  <!-- PostgreSQL -->
  <dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.5</version>
  </dependency>
</dependencies>
..

After this, to execute Liquibase through Maven, the following plugin configuration must be added:

..
<plugins>
..
  <!-- Use Liquibase plugin -->
  <plugin>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-maven-plugin</artifactId>
    <version>3.0.5</version>
    <configuration>
      <propertyFile>src/main/resources/liquibase.properties</propertyFile>
    </configuration>
    <!--
    <executions>
      <execution>
        <goals>
          <goal>update</goal>
        </goals>
      </execution>
    </executions>
    -->
  </plugin>
..
</plugins>
..

The thing to note here is the path to the liquibase.properties file, the default configuration file for Liquibase. This file not only contains the target database connection information but also points to the changelog file, which in turn contains the changesets.

It is possible to execute Liquibase from within an IDE or locally from the command line by starting Maven as follows:

mvn liquibase:update

However, we have commented out the executions tag, as we'll run it from the command line during the release process.
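
If needed, individual liquibase.properties values can also be overridden on that command line; a sketch assuming the Liquibase Maven plugin supports its standard liquibase.* system-property overrides (the connection values are the placeholders from our sample properties file):

mvn liquibase:update -Dliquibase.url=jdbc:postgresql://localhost:5432/sampledb -Dliquibase.username=adminUsername -Dliquibase.password=adminUserPassword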

We also need to package all the files, i.e. the database changes, the dependencies, the properties files, etc., into one zip file so that we can unpack it at release time and execute from there. To create a ZIP file containing all relevant Liquibase files, we can use the Maven assembly plugin with the code below:

<plugins>
..
  <plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <version>2.5.3</version>
    <configuration>
      <descriptor>src/assembly/assembly.xml</descriptor>
      <appendAssemblyId>false</appendAssemblyId>
    </configuration>
    <executions>
      <execution>
        <phase>package</phase>
        <goals>
          <goal>single</goal>
        </goals>
      </execution>
    </executions>
  </plugin>
..
</plugins>

Above, we referenced the file src/assembly/assembly.xml, which contains the packaging information for creating the zip file. In our case, it is as below:

..
<fileSets>
  <fileSet>
    <directory>${project.basedir}/scripts</directory>
    <outputDirectory>/scripts/</outputDirectory>
    <includes>
      <include>**/*</include>
    </includes>
    <lineEnding>unix</lineEnding>
  </fileSet>
  <fileSet>
    <directory>${project.basedir}/src/main/resources/</directory>
    <outputDirectory>/src/main/resources/</outputDirectory>
    <includes>
      <include>**/*</include>
    </includes>
    <lineEnding>unix</lineEnding>
  </fileSet>
</fileSets>
<dependencySets>
  <dependencySet>
    <outputDirectory>/lib/</outputDirectory>
    <useProjectArtifact>true</useProjectArtifact>
    <scope>runtime</scope>
  </dependencySet>
</dependencySets>
..


Now in our CI build, we can have the zip file generated as the build artifact, containing all the files we need during the release process.

Defining YAML Pipeline for Azure DevOps

As we discussed in one of our earlier posts, a YAML pipeline can consist of both CI and CD tasks, or can contain them individually. So we'll need to define both in our pipeline.

Defining YAML for build process

First, we need to name our stage. Let's name it 'Build' for convenience. This stage can consist of multiple jobs which can be executed in parallel on the available agents; however, that is not required in our case, so we'll create a single job named 'Build'. Since our job is going to run on an Ubuntu machine, let's use the hosted agent pool 'Hosted Ubuntu 1604':

- stage: Build
  jobs:
  - job: Build
    pool: 'Hosted Ubuntu 1604'
    continueOnError: false

We have also specified an additional attribute named continueOnError which states that we do not want to proceed further in case of failures.

Now we need to define two tasks: one for the Maven compilation that creates the zip artifact, and one for uploading the artifact to Azure DevOps. For the Maven compilation and the creation of the zip artifact, we can use the below YAML:

..
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'clean package'
    publishJUnitResults: true
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
    javaHomeOption: 'JDKVersion'
    mavenVersionOption: 'Default'
    mavenAuthenticateFeed: false
    effectivePomSkip: false
    sonarQubeRunAnalysis: false
..

The important details here are the name of the POM file and the Maven goals. Note that we do not need a separate task for creating the zip file, as the zip artifact is created as part of the Maven lifecycle itself.

For uploading the artifact to Azure DevOps, we can use below YAML code:

..
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.SourcesDirectory)/target'
    ArtifactName: 'drop'
    publishLocation: 'Container'
..

Since the build artifacts are created in the target directory, we have specified that files from that directory be uploaded to Azure DevOps.

Defining YAML for the release process

During the release process, we first need to download the artifact to the release agent. Again, we first define a stage and a job, and then perform all steps inside that job. We also need to define the agent pool for the job to run on.

To download artifacts from Azure DevOps to release agent, we can use below YAML code:

..
- task: DownloadBuildArtifacts@0
  inputs:
    buildType: 'current'
    downloadType: 'single'
    artifactName: drop
    itemPattern: '**/*.zip'
    downloadPath: '$(System.ArtifactsDirectory)'
..

Note that we have specified, via the itemPattern field, that we want to download only the zip artifact. We are also downloading the artifact to the path given by the predefined release variable $(System.ArtifactsDirectory), so that we can refer to it easily in the subsequent steps.

Once the artifact is downloaded, we need to navigate to the directory, unzip it and then run the Liquibase commands. We can do all of this easily using bash commands specified as part of the YAML code:

..
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      cd $(System.ArtifactsDirectory)
      unzip ./drop/liquibase-demo.zip -d liquibase-demo
      cd ./liquibase-demo
      java -jar ./lib/liquibase-core-3.3.0.jar --defaultsFile=./src/main/resources/liquibase.properties update
..

The important thing to note here is the path of the properties file, which contains the configuration for Liquibase to run.

We can define multiple stages as part of the release process for multiple environments.
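
Putting it all together, a sketch of the overall multi-stage azure-pipelines.yml skeleton (the trigger branch and the Release stage name are assumptions; the task inputs are the ones shown in the sections above):

trigger:
- master

stages:
- stage: Build
  jobs:
  - job: Build
    pool: 'Hosted Ubuntu 1604'
    continueOnError: false
    steps:
    - task: Maven@3
      # inputs as shown in the build section above
    - task: PublishBuildArtifacts@1
      # inputs as shown in the build section above
- stage: Release
  dependsOn: Build
  jobs:
  - job: Release
    pool: 'Hosted Ubuntu 1604'
    steps:
    - task: DownloadBuildArtifacts@0
      # inputs as shown in the release section above
    - task: Bash@3
      # inputs as shown in the release section above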

Run the Azure DevOps Pipeline

If you have done the configuration properly and checked in the source code, Azure DevOps will read the azure-pipelines.yml file to create the pipeline. We can then run the pipeline and see it in action.


Summary and Notes

Deploying database changes as part of the pipeline is fast becoming an established part of the DevOps process. One of the popular tools for deploying database changes is Liquibase. In this post, we have explored how to create an end-to-end CI/CD pipeline for deploying database changes.

The full copy of the source code used in this blog post can be found on GitHub here.

Please note that you may not be able to view the source code if you are viewing this post on mobile, due to AMP restrictions. In that case, open this blog post in a full browser to view it.

Deploying a multi-container application to Azure Kubernetes Services

Summary: An excellent article for deploying an application to AKS; the steps are explained below.

Deploy Kubernetes to Azure, using CLI:

az group create --name akshandsonlab --location <region>
az aks create --resource-group akshandsonlab --name <unique-aks-cluster-name> --enable-addons monitoring --kubernetes-version $version --generate-ssh-keys --location <region>
az acr create --resource-group akshandsonlab --name <unique-acr-name> --sku Standard --location <region>

Authenticate with Azure Container Registry from Azure Kubernetes Service:

az aks update -n $AKS_CLUSTER_NAME -g $AKS_RESOURCE_GROUP --attach-acr $ACR_NAME

This article uses the project created from the Microsoft Azure DevOps Demo Generator template to explain the complete multi-stage setup. The required code and YAML files are available once the project is created from the template. All the Azure resource configuration, such as the ACR name and web app name, should be replaced in the build and release pipelines.

Overview

Azure Kubernetes Service (AKS) is the quickest way to use Kubernetes on Azure. Azure Kubernetes Service (AKS) manages your hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand, without taking your applications offline. Azure DevOps helps in creating Docker images for faster deployments and reliability using the continuous build option.

One of the biggest advantages of AKS is that instead of creating resources in the cloud individually, you can create resources and infrastructure inside the Azure Kubernetes cluster through Deployment and Service manifest files.

Lab Scenario

This lab uses a Dockerized ASP.NET Core web application - MyHealthClinic (MHC) - which is deployed to a Kubernetes cluster running on Azure Kubernetes Service (AKS) using Azure DevOps.

There is an mhc-aks.yaml manifest file which consists of definitions to spin up Deployments and Services, such as the Load Balancer in the front end and the Redis Cache in the back end. The MHC application will be running in the mhc-front pod along with the Load Balancer.
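
A minimal sketch of the kind of definitions such a manifest contains (illustrative only, not the lab's exact mhc-aks.yaml; the image name and labels are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mhc-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mhc-front
  template:
    metadata:
      labels:
        app: mhc-front
    spec:
      containers:
      - name: mhc-front
        # image is pulled from the private Azure Container Registry
        image: <unique-acr-name>.azurecr.io/myhealth.web:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mhc-front
spec:
  # exposes the front-end pod through an external load balancer
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: mhc-front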

The following image will walk you through all the steps explained in this lab

If you are new to Kubernetes, click here for description of terminology used in this lab.

What’s covered in this lab

The following tasks will be performed:

  • Create an Azure Container Registry (ACR), AKS and Azure SQL server

  • Provision the Azure DevOps Team Project with a .NET Core application using the Azure DevOps Demo Generator tool.

  • Configure application and database deployment, using Continuous Deployment (CD) in the Azure DevOps

  • Initiate the build to automatically deploy the application

Want additional learning? Check out the Automate multi-container Kubernetes deployments module on Microsoft Learn.

Before you begin

  1. Refer to the Getting Started page for the prerequisites for this lab.

  2. Click the Azure DevOps Demo Generator link and follow the instructions on the Getting Started page to provision the project to your Azure DevOps organization.

    For this lab the Azure Kubernetes Service template is used, which is already selected when you click the link above. Some additional extensions are required for this lab and can be automatically installed during the process.


Setting up the environment

The following Azure resources need to be configured for this lab:

  • Azure Container Registry: used to store the Docker images privately
  • AKS: Docker images are deployed to Pods running inside AKS
  • Azure SQL Server: SQL Server on Azure to host the database
  1. Launch the Azure Cloud Shell from the Azure portal and choose Bash.

  2. Deploy Kubernetes to Azure, using CLI:

    i. Get the latest available Kubernetes version in your preferred region into a bash variable. Replace <region> with the region of your choosing, for example eastus.

      version=$(az aks get-versions -l <region> --query 'orchestrators[-1].orchestratorVersion' -o tsv)
    

    ii. Create a Resource Group

      az group create --name akshandsonlab --location <region>
    

    iii. Create AKS using the latest version available

     az aks create --resource-group akshandsonlab --name <unique-aks-cluster-name> --enable-addons monitoring --kubernetes-version $version --generate-ssh-keys --location <region>
    
  3. Deploy Azure Container Registry(ACR): Run the below command to create your own private container registry using Azure Container Registry (ACR).

     az acr create --resource-group akshandsonlab --name <unique-acr-name> --sku Standard --location <region>
    
  4. Authenticate with Azure Container Registry from Azure Kubernetes Service: When you’re using Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), an authentication mechanism needs to be established. You can set up the AKS to ACR integration with a few simple Azure CLI commands. This integration assigns the AcrPull role to the managed identity associated with the AKS cluster. Replace the variables $AKS_RESOURCE_GROUP, $AKS_CLUSTER_NAME and $ACR_NAME with appropriate values and run the command below.

     az aks update -n $AKS_CLUSTER_NAME -g $AKS_RESOURCE_GROUP --attach-acr $ACR_NAME
    

    For more information, see the document on how to Authenticate with Azure Container Registry from Azure Kubernetes Service.

  5. Create Azure SQL server and Database: Create an Azure SQL server.

     az sql server create -l <region> -g akshandsonlab -n <unique-sqlserver-name> -u sqladmin -p P2ssw0rd1234
    

    Create a database

     az sql db create -g akshandsonlab -s <unique-sqlserver-name> -n mhcdb --service-objective S0
    
  6. The following components - Container Registry, Kubernetes Service, and SQL Server along with the SQL Database - are now deployed. Access each of these components individually and make a note of the details, which will be used in Exercise 1.


  7. Select the mhcdb SQL database and make a note of the Server name.


  8. Click on “Set server Firewall” and enable “Allow Azure services …” option.


  9. Navigate to the resource group, select the created container registry and make a note of the Login server name.


Now you have all the required Azure components to follow this lab.

Exercise 1: Configure Build and Release pipeline

Make sure that you have created the AKS project in your Azure DevOps organization through Azure DevOps Demo Generator (as mentioned in pre-requisites). We will manually map Azure resources such as AKS and Azure Container Registry to the build and release definitions.

  1. Navigate to Pipelines -> Pipelines.


  2. Select MyHealth.AKS.Build pipeline and click Edit.


  3. In Run services task, select your Azure subscription from Azure subscription dropdown. Click Authorize.


    You will be prompted to authorize this connection with Azure credentials. Disable pop-up blocker in your browser if you see a blank screen after clicking the OK button, and please retry the step.

    This creates an Azure Resource Manager Service Endpoint, which defines and secures a connection to a Microsoft Azure subscription, using Service Principal Authentication (SPA). This endpoint will be used to connect Azure DevOps and Azure.

  4. Following the successful authentication, select appropriate values from the dropdown - Azure subscription and Azure Container Registry as shown.

    Repeat this for the Build services, Push services and Lock services tasks in the pipeline.


    The tasks in the pipeline and their usage:

    • Replace tokens: replaces the ACR value in mhc-aks.yaml and the database connection string in appsettings.json
    • Run services: prepares a suitable environment by pulling required images such as aspnetcore-build:1.0-2.0 and restoring the packages mentioned in the .csproj
    • Build services: builds the docker images specified in the docker-compose.yml file and tags the images with $(Build.BuildId) and latest
    • Push services: pushes the docker image myhealth.web to the Azure Container Registry
    • Publish Build Artifacts: publishes the mhc-aks.yaml and myhealth.dacpac files to the artifact drop location in Azure DevOps so that they can be utilized in the Release Definition

    The appsettings.json file contains the details of the database connection string used to connect to the Azure database created at the beginning of this lab.

    The mhc-aks.yaml manifest file contains the configuration details of the deployments, services and pods which will be deployed to Azure Kubernetes Service; it is similar in shape to the sketch shown in the Lab Scenario section above.

    For more information on the deployment manifest, see AKS Deployments and YAML manifests

  5. Click on the Variables tab.

    Update the ACR and SQLserver values for the Pipeline Variables with the details noted earlier while configuring the environment.

  6. Save the changes.


  7. Navigate to Pipelines | Releases. Select MyHealth.AKS.Release pipeline and click Edit.


  8. Select Dev stage and click View stage tasks to view the pipeline tasks.


  9. In the Dev environment, under the DB deployment phase, select Azure Resource Manager from the dropdown for Azure Service Connection Type, and update the Azure Subscription value from the dropdown for the Execute Azure SQL: DacpacTask task.


  10. In the AKS deployment phase, select Create Deployments & Services in AKS task.

    Update the Azure Subscription, Resource Group and Kubernetes cluster from the dropdowns. Expand the Secrets section and update the parameters for Azure subscription and Azure container registry from the dropdowns.

    Repeat similar steps for Update image in AKS task.

    • Create Deployments & Services in AKS will create the deployments and services in AKS as per the configuration specified in the mhc-aks.yaml file. The first time, the pod will pull the latest docker image.

    • Update image in AKS will pull the image corresponding to the BuildID from the repository specified, and deploy the docker image to the mhc-front pod running in AKS.

    • A secret called mysecretkey is created in the AKS cluster through Azure DevOps by running the kubectl create secret command in the background. This secret will be used for authorization while pulling the myhealth.web image from the Azure Container Registry; a sketch of the equivalent command follows below.
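
      A hedged sketch of the equivalent command (the registry server and service principal credentials are placeholders):

      kubectl create secret docker-registry mysecretkey --docker-server=<unique-acr-name>.azurecr.io --docker-username=<service-principal-id> --docker-password=<service-principal-password>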

  11. Select the Variables section under the release definition and update the ACR and SQLserver values for the Pipeline Variables with the details noted earlier while configuring the environment. Select the Save button.


Exercise 2: Trigger a Build and deploy application

In this exercise, we will trigger a build manually; upon completion, an automatic deployment of the application will be triggered. Our application is designed to be deployed in the pod with the load balancer in the front end and the Redis cache in the back end.

  1. Select the MyHealth.AKS.Build pipeline and click Run pipeline.


  2. Once the build process starts, select the build job to see the build in progress.


  3. The build will generate and push the docker image to ACR. After the build is completed, you will see the build summary. To view the generated images navigate to the Azure Portal, select the Azure Container Registry and navigate to the Repositories.


  4. Switch back to the Azure DevOps portal. Select the Releases tab in the Pipelines section and double-click on the latest release. Select In progress link to see the live logs and release summary.



  5. Once the release is complete, launch the Azure Cloud Shell and run the below commands to see the pods running in AKS:

    1. Type az aks get-credentials --resource-group yourResourceGroup --name yourAKSname in the command prompt to get the access credentials for the Kubernetes cluster. Replace the variables yourResourceGroup and yourAKSname with the actual values.


    2. kubectl get pods


      The deployed web application is running in the displayed pods.

  6. To access the application, run the below command. If you see that the External-IP is pending, wait for some time until an IP is assigned.

    kubectl get service mhc-front --watch


  7. Copy the External-IP, paste it in the browser and press Enter to launch the application.


Kubernetes resource view in the Azure portal (preview)

The Azure portal includes a Kubernetes resource viewer (preview) for easy access to the Kubernetes resources in your Azure Kubernetes Service (AKS) cluster. Viewing Kubernetes resources from the Azure portal reduces context switching between the Azure portal and the kubectl command-line tool, streamlining the experience for viewing and editing your Kubernetes resources. The resource viewer currently includes multiple resource types, such as deployments, pods, and replica sets.

The Kubernetes resource view from the Azure portal replaces the AKS dashboard add-on, which is set for deprecation.


More information found at: https://docs.microsoft.com/en-us/azure/aks/kubernetes-portal

Summary

Azure Kubernetes Service (AKS) reduces the complexity and operational overhead of managing a Kubernetes cluster by offloading much of that responsibility to Azure. With Azure DevOps and Azure Kubernetes Service (AKS), we can build a DevOps pipeline for dockerized applications by leveraging the docker capabilities enabled on Azure DevOps hosted agents.

Reference

Thanks to Mohamed Radwan for making a video on this lab. You can watch the following video that walks you through all the steps explained in this lab