I've been fighting a bit with some of the Azure DevOps pipeline tasks while trying to configure an end-to-end solution for one of my side projects. It is based on good old Docker Compose, and I am pretty happy with how it works in production. What I wanted to do is schematically described below.
What is Azure DevOps
Azure DevOps helps you plan smarter, collaborate better, and ship faster with a set of modern dev services. It's an end-to-end solution for any software development cycle. Anyone can use it, even for free with some limitations/conditions of usage (public projects, a limited number of pipeline minutes per month, etc.). And it includes free unlimited Git repositories!
Multistage pipeline
If you are not familiar with the multistage pipeline concept, have a look. In short, it brings the entire CI/CD experience into YAML, where you define all your stages (no UI). It had been a big missing piece for a while, since previously only the build pipeline could be described this way.
stages:
- stage: Build_docker_containers
  jobs:
  - job: Build
    pool:
      vmImage: 'Ubuntu-16.04'
    continueOnError: true
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'AZURE-CONTAINER-REGISTRY-NAME'
        repository: 'AZURE-CONTAINER-REGISTRY-REPOSITORY-NAME'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: '$(Pipeline.Workspace)'
        artifact: 'docker-compose'
        publishLocation: 'pipeline'
- stage: Deploy_to_production
  jobs:
  - deployment: Production
    pool:
      vmImage: 'Ubuntu-16.04'
    environment: 'Production'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: CopyFilesOverSSH@0
            inputs:
              sshEndpoint: 'SSH-END-POINT-NAME-FROM-SERVICE-CONNECTIONS'
              sourceFolder: '$(Pipeline.Workspace)/docker-compose/s/'
              contents: |
                docker-compose.yaml
                .env
              targetFolder: 'TARGET-PATH'
          - task: SSH@0
            inputs:
              sshEndpoint: 'SSH-END-POINT-NAME-FROM-SERVICE-CONNECTIONS'
              runOptions: 'inline'
              inline: |
                sed -i 's/##BUILD##/$(Build.BuildId)/g' docker-compose.yaml
          - task: SSH@0
            inputs:
              sshEndpoint: 'SSH-END-POINT-NAME-FROM-SERVICE-CONNECTIONS'
              runOptions: 'inline'
              inline: |
                docker-compose up -d 2> docker-compose.log
                cat docker-compose.log
Let's break down the above into small parts and explain what is going on. In the first stage, Build_docker_containers, there are two tasks: build image (actually three actions in one: build, tag, and push) and publish pipeline artifact. Use this task in a pipeline to publish artifacts for Azure Pipelines (note that publishing is NOT supported in release pipelines; it is supported in multistage, build, and YAML pipelines). AZURE-CONTAINER-REGISTRY-NAME and AZURE-CONTAINER-REGISTRY-REPOSITORY-NAME both have to be changed accordingly. Note that a built image gets a tag, which by default is the built-in Build.BuildId predefined variable; this helps me properly roll out my app update in the second stage.
For the sake of clarity, this pipeline has been simplified.
The second stage is Deploy_to_production, and it rolls out the built image to my production server. All its tasks are based on SSH deployment tasks. The SSH endpoint has to be configured first in service endpoints (there used to be issues in the new service endpoint experience with >2048-bit public keys in 2019, but the Microsoft team has fixed this).
sed -i 's/##BUILD##/$(Build.BuildId)/g' docker-compose.yaml replaces the build number placeholder, so that docker-compose up -d 2> docker-compose.log actually has something to update on the Docker machine.
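You can try the tag substitution locally before wiring it into the pipeline; a minimal sketch, where BUILD_ID stands in for the $(Build.BuildId) predefined variable and the image name is a placeholder:

```shell
# Simulate the pipeline's tag substitution on a scratch compose file.
BUILD_ID=1042
printf 'image: myregistry.azurecr.io/myapp:##BUILD##\n' > /tmp/docker-compose.yaml

# Same sed command the SSH@0 task runs on the server.
sed -i 's/##BUILD##/'"$BUILD_ID"'/g' /tmp/docker-compose.yaml

cat /tmp/docker-compose.yaml
# → image: myregistry.azurecr.io/myapp:1042
```

Because the tag changes on every build, docker-compose sees a new image reference each time and pulls/recreates the container.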
Docker and Docker Compose
I use Docker for packaging the app and a Docker registry (Azure Container Registry) for storing images. It's also fine to use Docker Hub; just make sure a corresponding endpoint is present in service connections. Compose helps me combine multiple containers, define the logic between them and some other objects, and describe their behavior.
Docker compose for Azure DevOps
The following docker-compose.yaml is in my project.
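The compose file itself is not reproduced here, so the sketch below shows the shape such a file typically takes for this setup; the service names, port mapping, and database image are assumptions for illustration, not taken from the original project:

```yaml
version: '3'
services:
  app:
    # ##BUILD## is replaced with $(Build.BuildId) by the pipeline's sed step.
    image: AZURE-CONTAINER-REGISTRY-NAME.azurecr.io/AZURE-CONTAINER-REGISTRY-REPOSITORY-NAME:##BUILD##
    ports:
      - "80:80"
    env_file: .env
    depends_on:
      - db
  db:
    image: postgres:11   # hypothetical database container
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```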
Let's break down the above into small parts and explain what is going on. There are two services (of course there are more, but for the sake of simplicity of this exercise there are only two). AZURE-CONTAINER-REGISTRY-NAME.azurecr.io/AZURE-CONTAINER-REGISTRY-REPOSITORY-NAME:##BUILD## has to be changed to match your registry and repository, except for ##BUILD##, which is replaced on every pipeline execution.
Summary
Any feature or bug fix can be delivered to production in less than 6 minutes.
With 1800 free pipeline minutes per month and about 6 minutes total for all my stages, I can do 300 build-and-release cycles of my software.
Building a user interface is one of the most important aspects of any product development. It can make or break the customer base, irrespective of how strong the application functionality is. React is a JavaScript library maintained by Facebook along with individual developers and companies. Even though it is a library rather than a complete framework like Angular, it is popular because of its declarative, efficient, and flexible approach to building user interfaces.
Building an application in React is fairly simple. While developing a web application, most developers use the Create React App CLI. A Create React App build is a single-page app tied to one environment: once you create a build, it is created for a specific environment and continues to exist in that environment. For a React web application, you get two default environments, viz. development and production.
Given below is the script from package.json that you can use for a web application in React App:
"scripts": {
  "build-css": "node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/",
  "watch-css": "npm run build-css && node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/ --watch --recursive",
  "start-js": "react-scripts start",
  "start": "npm-run-all -p watch-css start-js", // For development environment.
  "build": "npm run build-css && react-scripts build", // For production environment.
  "test": "react-scripts test --env=jsdom",
  "eject": "react-scripts eject"
}
For a web application, you can easily access the available scripts for localhost and also create the production build. In the above case, when you run a script command like “npm start” it uses .env or .env.development, and “npm run build” uses .env.production, at the root level. If your application has only the two environments, development and production, I would suggest using the default scripts. But imagine you have additional environments like staging and QA (quality assurance): what would you do then? As react-scripts only supports development and production, the solution is to create files such as .env.staging, .env.poc, or .env.qa; however, they won’t work the same way as .env.development or .env.production. A few changes are needed in the configuration, in package.json, and in the commands for running the application.
Now, let’s see how to manage the multiple environments in the application with the following prerequisites:
The web application should be created using the Create React App CLI.
Install env-cmd npm: Either use the command npm install env-cmd or npm install -g env-cmd.
Different scenarios in React App are listed below; let’s execute them one by one:
To create environment files with the required configuration: First, create the required environment files for the application. Suppose your project has four environments: development, production, staging, and QA. Then you need to create four environment files as mentioned below:
.env.development
.env.production
.env.staging
.env.qa
To add the configuration (contents) to the environment files: Suppose you have a different API URL for each environment. You can add the configuration to the particular .env file. Configuration here means any environment variable that can be used globally within the application. Below are some examples for various environments with the variables:
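The original listing is not reproduced here, so the values below are placeholder examples; only the REACT_APP_ prefix is required by Create React App, and the URLs are made up:

```
# .env.development
REACT_APP_API_URL=https://dev.example.com/api

# .env.staging
REACT_APP_API_URL=https://staging.example.com/api

# .env.qa
REACT_APP_API_URL=https://qa.example.com/api

# .env.production
REACT_APP_API_URL=https://www.example.com/api
```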
To produce an environment-specific build, use env-cmd together with the corresponding environment file and run-script; the environment file name is what selects the configuration that gets baked into the build.
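For reference, the corresponding package.json scripts might look like the sketch below, assuming env-cmd (v9 or later, which takes the file via -f) and the environment file names from above:

```json
"scripts": {
  "start:development": "env-cmd -f .env.development react-scripts start",
  "build:staging": "env-cmd -f .env.staging react-scripts build",
  "build:qa": "env-cmd -f .env.qa react-scripts build",
  "build:production": "env-cmd -f .env.production react-scripts build"
}
```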
Running the application in different environments: To run the application in multiple environments choose one of the environment-specific commands from the following:
1] npm run start:development
2] npm run build:staging
3] npm run build:qa
4] npm run build:production
Access the variables in-app: To access the variables from the .env file, use process.env, a global object injected by Node at build/runtime for the application’s use. It represents the state of the environment variables at the time the application starts.
process.env.REACT_APP_API_URL
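As a quick sketch of how this is typically consumed in app code (the fallback URL and the helper function are assumptions for illustration; Create React App only inlines variables prefixed with REACT_APP_):

```javascript
// Read the build-time variable; fall back to a local default when it is
// not defined (e.g. when the code runs outside a React build).
const apiUrl = process.env.REACT_APP_API_URL || 'http://localhost:3000/api';

// Build a full endpoint URL from the configured base, normalizing slashes.
function buildEndpoint(path) {
  return `${apiUrl.replace(/\/+$/, '')}/${path.replace(/^\/+/, '')}`;
}

console.log(buildEndpoint('/users'));
// → http://localhost:3000/api/users (when REACT_APP_API_URL is unset)
```

Because the value is inlined at build time, each environment-specific build carries its own URL with no runtime switching needed.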
Thus, the entire process is not as complicated as it looks and can be expanded to multiple environments. In this blog, I focused on setting up your React app for environments beyond the default development and production. Comment below on how useful it turned out for your React projects; till then, Happy Coding!
Before we start, let’s imagine the situation: you are talking with your colleagues after a good weekend in your office kitchen, and the manager runs into the room shouting “BUG, WE HAVE A BUG!”.
“Bugs are everywhere,” you think, trying to fill the water tank of your office’s coffee machine, but the situation changes as the manager continues with “WE HAVE A BUG IN PRODUCTION!”.
So, what will you do if your team misses some bug in production? What will you do if your manager declares the Bug Hunting as a primary goal for your team in the name of the Client? What will you do, if once you come into the office and see the new poster ‘WANTED Bug Bounty Program’ on your team dashboard?
Well, the last question seems to be off the topic, but we’ll try to answer at least the other ones about quality processes and how we can save a lot of money without hiring dozens of engineers for manual support.
The answer we suggest is automation. Providing coverage for your project with unit, API, and UI tests ensures that important functionality doesn’t break after regular code changes. In this way you save time and/or money, and that’s why we discuss how to use pre-scripted tests on an application.
Furthermore, in this post you will find step-by-step instructions for implementing automation processes in your CI/CD and running the tests in your Azure DevOps pipeline.
Defining the chapters, you will find how to:
Set Up a Demo Project for our Tests;
Run the Tests in the CI Pipeline [the first way to run automated tests in a project, and the first barrier of your QA defense, containing unit test coverage];
Run the Tests in the CD Pipeline [the second way, containing UI and/or API test coverage];
Run your Tests from Test Plans on Demand [the third and final way, which makes it possible to run any test you want, whenever you want].
A little remark: the Setup chapter for the Demo Project is intended to create a basis on which we will add our automated tests, so you can use your own project instead, or download the already created project via the following GitHub link:
Now we are ready to start, so let the battle begin!
Chapter I – Setup a Demo Project for our Tests
The idea of this chapter is to create a Demo Project for further needs. One last warning: if you have your own project, you can continue with Chapter II and implement the testing coverage. Otherwise, welcome to the chapter instructions:
Open your console or terminal;
2. Create a folder for your solution:
mkdir TechFabricSln
3. Create folders for Main and Test Projects
mkdir src
mkdir test
4. Create a Project in the ‘src’ folder & build the Project
cd src
dotnet new webapp -n TechFabricSln
cd TechFabricSln
dotnet build
5. Create a Test Project in the ‘test’ folder & build the Project
cd ..
cd ..
cd test
dotnet new nunit -n TechFabricSln.Test
cd TechFabr*
dotnet build
At the end, you will have some structure like this:
The only difference is the name of the project; the diagram shows an example of how it can be used. To adapt it to your project, replace ‘TechFabric’ as proposed in the graph, or ‘TechFabricSln’ as in the example, with your project’s name.
So now, let’s add a simple function to our project, which we can cover and test with our unit tests:
Add an additional class to the main ‘TechFabricSln’ project; in our example we will add a ‘Bought’ class:
2. Write some functions which you want to test. The following code defines a new bool variable which verifies who bought something in our shop.
3. Now, let's create a quick check for the declared behavior. To do this, add an additional [Test] to our Test Project, and write code that verifies what you would like to test. In the example we verify the ‘Bought’ class via its ‘isBoughtBy’ method:
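The original code listings are not reproduced here, so the sketch below shows one way the ‘Bought’ class and its unit test could look; the property name, buyer values, and assertions are assumptions for illustration:

```csharp
using NUnit.Framework;

// Hypothetical 'Bought' class for the main TechFabricSln project.
public class Bought
{
    public string Buyer { get; set; }

    // Returns true when the purchase was made by the given buyer.
    public bool isBoughtBy(string buyer) => Buyer == buyer;
}

// Unit test for the TechFabricSln.Test project.
[TestFixture]
public class BoughtTests
{
    [Test]
    public void IsBoughtBy_ReturnsTrue_ForTheActualBuyer()
    {
        var bought = new Bought { Buyer = "Alice" };
        Assert.IsTrue(bought.isBoughtBy("Alice"));
        Assert.IsFalse(bought.isBoughtBy("Bob"));
    }
}
```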
As a final step, all you need to do is create a Git repository and push your code. Be a team member and track your changes in a source control tool, not just on your local machine!
Well, the instructions are done, so now is the time to run your unit tests and verify whether everything is good.
If you see the green lines, you can be pretty sure, that you made the first step in protecting your project from uninvited guests.
The next step is to move on to the main part of our topic and create your first Azure DevOps pipeline.
Chapter II – Create a build and run the UNIT Tests on Continuous Integration
Performing automated testing as part of the build pipeline is a good way of catching unexpected problems before pushing the build to a test or customer environment.
In this chapter you will find how to create a build procedure which includes your test runs and lays a good foundation for further quality assurance processes.
No more talk, let's start with creating your first build pipeline:
Login in Azure DevOps;
Go to ‘Pipelines’ -> ‘Builds’;
Press on ‘New pipeline’:
4. Connect to your source control tool
5. Select your Project’s repository
6. Select ASP.NET Core Template:
7. Add the following command to run the tests from the Test Project into the yml file:
- task: DotNetCoreCLI@2
  inputs:
    command: test
    projects: '**/*Test/*.csproj'
    arguments: '--configuration $(buildConfiguration)'
In fact, Azure DevOps will read the configuration file and execute the steps as described.
8. Press ‘Save and Run’ button, with ‘commit directly to the master branch’.
After the build procedure is done, we can see the build results in Logs. In particular, we are interested in our ‘DotNetCoreCLI’ command, so let’s check it.
You can see that, in fact, nothing more than the ‘dotnet test’ command is used in our DotNetCoreCLI task.
Detailed information about our Test Run for the build can be found in ‘Tests’ block of the build:
Also, you can check detailed report about all your test runs in Test Plan -> Runs
Now your build pipeline contains at least one unit test, which runs every time you run the build and verifies whether there are any unexpected changes.
And from this point we can move to the next chapter and create a Continuous Delivery pipeline, which includes some UI automated tests on Selenium.
Chapter III – Run UI Selenium Tests in Continuous Delivery Pipeline
As soon as the build is done and the unit tests have passed in the build workflow, it is common for quality control or development teams to create and maintain some functional (UI) tests in the release workflow, after the app is deployed to a test environment.
That’s why in this chapter we will implement some UI tests based on the Selenium framework [Selenium is an open-source test framework for web applications which supports every popular browser and can be run on almost every operating system]. Using a UI test framework in addition to your unit or API tests, you will be able to detect changes on the front end of your application as well.
Deploy your WEB App
To run Selenium tests against your own project, you need to deploy it in a continuous deployment (CD) release pipeline and publish it on Azure. We won’t spend time on this in the post; if you’re interested, you can find the details here: Publish Web App to Azure, or write us a message and we’ll create a separate article with a detailed description.
In this chapter, we will run the tests against Microsoft.com, and you will find how to:
Create UI Test using Selenium Test Framework;
Create a Release Pipeline using your CI from the Chapter II;
Run created UI Tests on Azure DevOps CD;
Publish and monitor Test Results.
ADD SELENIUM TO THE TEST PROJECT
By this point, we already have a continuous integration (CI) build pipeline running unit tests, so all we need is to add Selenium references, a driver, and a test to the existing project. To do this:
Open the Solution in IDE;
Go to the .Test Project -> Manage NuGet Packages;
Add additional Packages to your .Test Project:
Selenium.WebDriver.ChromeDriver;
Selenium.Support;
Selenium.WebDriver;
Microsoft.TestPlatform.TestHost;
The list of installed packages at this moment is:
4. Add new Class for Selenium Tests ‘SeleniumTest.cs’ to the Project
5. Add a Selenium test to the ‘SeleniumTest.cs’ class.
As an example, we will add a test which opens the Microsoft page and verifies that the page contains the ‘Windows’ menu. The code is:
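The original listing is not reproduced here; the sketch below shows one way such a test could look with Selenium and NUnit. The headless option, the check against the page source, and the UITests category name are assumptions for illustration:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
[Category("UITests")]
public class SeleniumTest
{
    private IWebDriver _driver;

    [SetUp]
    public void SetUp()
    {
        // Run headless so the test also works on build agents without a display.
        var options = new ChromeOptions();
        options.AddArgument("--headless");
        _driver = new ChromeDriver(options);
    }

    [Test]
    public void MicrosoftHomePage_Contains_Windows_Menu()
    {
        _driver.Navigate().GoToUrl("https://www.microsoft.com");
        Assert.IsTrue(_driver.PageSource.Contains("Windows"));
    }

    [TearDown]
    public void TearDown() => _driver.Quit();
}
```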
6. Make sure the chromedriver file is set to be copied to the output directory (for example, via ‘Copy to Output Directory’ in its file properties). We need this option to get chromedriver into the artifacts after the project is published.
7. Run Selenium Test locally to check if it works.
MAKE CHANGES INTO BUILD PIPELINE FOR UI TESTS
In some cases on the road to automation you can face obstacles in lining up the correct versions of the driver and packages, NuGet, the builder, and/or .NET Core.
That’s why we provide some hints in the form of code pieces which you can use in your build procedure to avoid the obstacles you may face:
As soon as the preparation is done and the build has finished successfully, check for ‘chromedriver.exe’ in the artifacts of your build. This is important because without the driver you won’t be able to run the written UI tests successfully.
Then make changes into your Release Pipeline or create a new one:
Open the Releases page in the Azure Pipelines section
Click on ‘New pipeline’ button in Releases block
3. In opened Templates block click to start with ‘Empty job’
4. Name the stage and click on the Job/Task link
5. Add the Dotnet Core Task:
Complete the added task with the following parameters:
Task Version: 2;
Display Name: Choose any name you want, the field is just responsible for the name of procedure;
Command: There are several commands available, but the one we need is ‘custom’; we’ll specify the command further in the ‘Custom command’ section;
Path to project(s): Specify the path to your Test Project’s dll, in our case it’s:
Add the arguments you want to use. To set logs you can use:
--logger:trx;logfilename=TEST.xml
To run only UI tests on this Release, add:
/TestCaseFilter:"TestCategory=UITests"
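Put together, the command the task assembles from these settings might look like the following; the ‘vstest’ custom command and the dll path are assumptions to illustrate how the pieces combine, not taken from the original post:

```
dotnet vstest '$(System.DefaultWorkingDirectory)/drop/TechFabricSln.Test.dll' '--logger:trx;logfilename=TEST.xml' '/TestCaseFilter:"TestCategory=UITests"'
```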
Also, it’s important to say, that in case one or more tests fail, the procedure will be stopped, to prevent this, we recommend to use ‘continue on error’ option.
6. Add the ‘Publish Test Results’ Task
Complete the added task with the following parameters:
Task Version: 2;
Display Name: Choose any name you want, the field is just responsible for the name of the procedure;
Test result format: Format of test result files generated by your choice of test runner. In our case we need VSTest option;
Search Folder: Specify the folder path where to search for the test result files.
Now we are ready for our first release. Save the release pipeline and start a new release; you can do this by queuing a new CI build, or by choosing Create release from the Release drop-down list in the release pipeline. You can check the results in two ways, the same as in the build pipeline:
Visit the Logs -> Tests block;
Visit Test Plans -> Runs.
CHAPTER IV – Run your Tests from Test Plans on Demand
The idea of this chapter is to show how to use Azure Test Plan to create linkages between manual and automated tests. Rather than using scheduled tests, running the tests on demand can be useful if you:
Don’t want to run all tests on build or release stages (e.g. if you want to save the time and run only some important tests on Release, and afterwards run whatever you want on demand);
Changes in your environment are made outside a release (e.g. changes in the DB);
You want to rerun some individual tests (e.g. tests can fail at the build/release stage because of infrastructure issues, and you just want to rerun them);
You want to run the tests on a new build without releasing it.
You already have almost everything we need, so in this chapter we proceed with:
Create a Test Plan with a manual TC to link with our automated TC;
Create a separate release pipeline for the Test Project for running the tests on demand;
Create a link between the manual and automated TCs.
Go to the Test Plans and push on ‘New Test Plan’ button;
2. Create a Test Plan with any Name and Area;
3. Go to Test Plan – ‘Define’ block and push on ‘New Test Case’ button:
4. Write some TC check; it doesn’t matter what the steps are.
5. Open your Project in IDE;
6. Connect into the Team Services/Azure;
7. Add the link with a right click on the test -> ‘Associate to Test Case’, and add a TC by ID
Create separate Release Pipeline for Test Project
For management and moderation, it’s much easier to create a separate stage with its own procedure than to add commands to the existing one. So, from this moment, we add an additional stage named DEV Test, which we will use for running the tests on demand, rather than growing the number of commands in our usual web project pipeline, which is usually not the simplest one.
To add a new stage, go to All Pipelines, select the one you use for CD of the project, and start with an Empty template as we did in the previous chapter:
Now we need to define 3 tasks:
VSTest Platform Installer: You need the Visual Studio Test Platform installed on the agent machine; if it’s not, you must add the Visual Studio Test Platform Installer task to the pipeline definition;
VSTest Task: This command allows us to use vstest run command with specifying settings and parameters for our Test Run;
Publish Test Results: This task allows us to collect the data from test runs to make some statistical analysis and verify the ‘weakest’ areas of the App.
Add all 3 described tasks with the following settings:
VSTest Platform Installer task settings:
2. VSTest Task settings:
3. Use the analogue for settings of Publish Test Results command from your Release Pipeline, which has been described in Chapter III.
The common view of your CD after all the work is done will look like this:
Now we can run the tests from Test Plans, so let’s try it:
Go to Test Plans;
Select ‘Execute’ block;
Select a TC you want to run;
Choose ‘Run with options’ in the menu ‘Run for web applications’:
5. Choose the build and stage where you want to run your UI Selenium tests. Use the latest build, and choose the ‘DEV Test’ stage which we just created.
6. Push the ‘Run’ button and wait for the result. The system will check that only automated tests are selected, and validate the stage to ensure the VS Test task is present and has valid settings. After the validation it creates a test run, and then triggers the creation of a release to the selected stage. You can watch all of these processes as soon as you initialize the run:
7. After the test execution is complete, visit the Runs page and check the test results. The Test results page lists the results for each test in the test run, and if everything goes well you will find a prize from Azure saying that there are no test failures.
Summary
Setup a WEB Project with Unit Tests;
Create a CI in Azure DevOps;
Add Selenium Tests to existing Projects;
- Run all the tests locally, in your own IDE.
Run the Tests in 3 ways on Azure DevOps:
- On each Build (for Unit Tests);
- On each Release (for UI or API tests);
- Create linkages between your manual and automated tests and run them whenever you want.
Check for Test run results.
QA or developer teams usually cover testing and staging servers with automated tests to check new features, complete regression testing, and test builds and updates to ensure quality in a production-like environment.
The advantage is that once the automated tests are created, they can easily be run across several of your test environments.
It’s a good practice for bigger projects to save money and time by verifying software this way before the users find something.
And returning to the start: by using automation properly and covering your code with checks, you will dramatically decrease the number of heart attacks you might have because a customer found a bug in production.