Sunday, August 9, 2020

Create Your Application with Docker

Develop your ASP.NET Core application using Docker

Now that we've started an existing .NET Core application with Docker, it's time to create an image based on our own code. Assuming that you're already familiar with .NET Core basics, we will now concentrate on how to create a development image that executes our application and syncs the source code with the container file system.

If you are new to the .NET Core CLI, basically you need to know these commands:

  • dotnet new: Creates a new project from the specified template. If you want to create an MVC application, you can use dotnet new mvc.
  • dotnet restore: Restores all the NuGet dependencies of our application.
  • dotnet build: Builds our project.
  • dotnet run: Executes our project.
  • dotnet watch run: Runs our project and watches for file changes to rebuild and re-run it.
  • dotnet publish: Creates a deployable build (the .dll) of our project.

You can work on Windows or Mac with your favorite editor/IDE. For our examples, we will use a Mac with Visual Studio Code, but all the instructions are also available for Windows (you will find the corresponding Windows command in parentheses when it differs from the Mac command).

You can download and install both Visual Studio Code and .NET Core for free from their official websites.

The first step is the creation of a new web application using .NET Core. With the .NET CLI (Command Line Interface), we can add the project files of a new web project to a folder using the following commands in the terminal window.

Code Listing 16: Create a new ASP.NET Core application

mkdir -p myapp/frontend (for Windows: mkdir myapp\frontend)
cd myapp/frontend
dotnet new mvc

To ensure that everything is working fine, you can execute the run command.

Code Listing 17: Command to run a .NET Core application

dotnet run

You will see the following result.


Figure 7: ASP.NET Core application running output

Perfect! Press Ctrl+C to interrupt the execution and open the project with Visual Studio Code (you can type code . in the terminal window to open a new instance of the editor from the current folder). To create a Docker image, we need to describe, in a file named Dockerfile, all the operations that we want to execute to obtain the final result. Create a new file in the current folder, name it Dockerfile, and let's script the needed operations.

We can now install the Visual Studio Code extension named Docker, by Microsoft, which helps us edit Docker files. This plug-in adds syntax highlighting and code completion to all Docker configuration files, including the Dockerfile.

The creation of a new image starts from an existing base image, chosen depending on what you want to do. In our example, we need to compile a .NET Core application, so we need an image that contains the .NET Core SDK. Microsoft provides the image microsoft/dotnet in multiple versions that we can select by tag. For our purposes, the best image is microsoft/dotnet:sdk, which contains all the libraries and tools needed to build a .NET Core application. The first command of our script is the following.

Code Listing 18: Dockerfile FROM instruction

FROM microsoft/dotnet:sdk

The keyword FROM instructs the Docker daemon to use the specified image to create a new image. If the image is not present locally, it is downloaded. Now we can execute all the operations we need to copy and build our application. As a preliminary step, we can create a new directory in the new image with the command RUN.

Code Listing 19: Dockerfile RUN instruction

RUN mkdir app

You can use RUN for all the operations that you want to execute on the file system during the creation of the new image. You can also set a working directory on the container, so that all subsequent commands are executed relative to that directory. In our case, we can use the directory we just created.

Code Listing 20: Dockerfile WORKDIR instruction

WORKDIR /app

From this moment on, all commands are executed from the app folder. Now we can copy our source code into the image using the COPY command.

Code Listing 21: Dockerfile COPY instruction

COPY . .

The COPY command requires a source and a target, so with this command we are copying all the contents of the current directory into the app directory of the new image. Now we need to restore all the project dependencies in the image.

Code Listing 22: Execute .NET Core dependencies restore from a Dockerfile

RUN dotnet restore

Finally, when a container is created and running, we want to execute our code. Instead of RUN, which executes a command while the image is being built, we need the CMD command, which is executed when the container starts.

Code Listing 23: Dockerfile CMD instruction

CMD dotnet watch run
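Putting the pieces together, our development Dockerfile now contains the following instructions (this is simply the sum of Code Listings 18 through 23).

FROM microsoft/dotnet:sdk
RUN mkdir app
WORKDIR /app
COPY . .
RUN dotnet restore
CMD dotnet watch run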

We are ready to create our image, but we need an optimization first. The COPY command copies all the contents of the current folder, including the bin and obj folders. We can exclude these folders using a .dockerignore file. If you are familiar with Git, the concept is the same: exclude some files and folders from processing. Create a file named .dockerignore and add the following rows.

Code Listing 24: Dockerignore file content for .NET Core application

bin/
obj/

Now when we create the image, the bin and obj folders will be excluded from the copy. It's time to create our image. The command to execute is in Code Listing 25.

Code Listing 25: Command to build a Docker image

docker build -t frontend .

The result is similar to the following.


Figure 8: Docker build command output

If you examine the terminal output, you can see that the command creates our image in six steps, one for each Dockerfile instruction. After each step, you can see the log Removing intermediate container, which reminds us that an image is built as a series of layers over a base image: for each Dockerfile instruction, an intermediate container is created, the instruction is executed in it to produce a new layer, and the container is immediately removed. You can see the new image by executing the command docker images (Figure 9).


Figure 9: The new Docker image created

Since we have not specified a tag, latest is assigned to our image. If you now execute the docker run -p 5000:5000 -p 5001:5001 -t frontend command, a new container is created and run. The option -t allocates a pseudo-TTY; in our case, it is useful to enable Ctrl+C to stop Kestrel.


Figure 10: Docker container creation from the new image

However, if you open the browser and navigate to localhost:5000, the application is not reachable. To understand why this happens, you have to know how Docker networking works. Without going deep into this complex topic: you can host the ASP.NET Core application on the 0.0.0.0 IP address, which makes it listen for requests from any IPv4 host. There are several ways to do this. In launchSettings.json, located in the Properties folder, you can change the applicationUrl property from localhost to 0.0.0.0.

Code Listing 26: ApplicationUrl configuration from launchSettings.json

"frontend": {

      "commandName": "Project",

      "launchBrowser"true,

      "applicationUrl": "https://0.0.0.0:5001;http://0.0.0.0:5000",

      "environmentVariables": {

        "ASPNETCORE_ENVIRONMENT""Development"

      }

 }

As an alternative, you can add the .UseUrls() configuration method before the .UseStartup() in the CreateWebHostBuilder method, located in the Program.cs file.

Code Listing 27: UseUrls methods in web host creation

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseUrls("http://0.0.0.0:5000;https://0.0.0.0:5001")
        .UseStartup<Startup>();

My preferred option is the --urls option of the run command, because it keeps the fix tied to the Docker container execution, which is what causes the problem in the first place. Let's modify the last instruction of our Dockerfile in this way.

Code Listing 28: The option --urls of the command dotnet run

CMD dotnet watch run --urls "http://0.0.0.0:5000;https://0.0.0.0:5001"

Now, rebuild the image with the same command used previously. As you can see, the build process is faster than the first time: during the build, Docker caches the result of each operation and reuses it to improve performance. If you don't want to use the cache, you can specify the --no-cache option.
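In our case, that would be:

docker build --no-cache -t frontend .

When running the container from this new image, everything works fine.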


Figure 11: Our web application executed from Docker

We want to use this image to develop our application, so when we edit the source files, we would like the container to rebuild and re-execute the modified code. The watch parameter of the dotnet watch run command does precisely this, but when the code runs in our container, it watches only the files that were copied into the container.

To solve this problem, we have to create a Docker volume, which permits us to share a folder between the host machine and the container. Not all the files of our project should be shared with the container, such as the bin and obj folders. You can instruct the dotnet command to generate these folders in a specific location, but I prefer to move all the files that I want to share with the container into a dedicated folder. So, in my frontend project, I create an src folder and move into it the Controllers, Models, Views, and wwwroot folders, together with the Program.cs and Startup.cs files.


Figure 12: Application reorganization structure

If your ASP.NET Core application serves views and static files, you need to specify that the Views and wwwroot folders are now in the src folder. The fastest way is to change the content root of the application in the Program.cs.

Code Listing 29: Set a custom content root when creating web host

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseContentRoot(
            Path.Combine(Directory.GetCurrentDirectory(), "src"))
        .UseStartup<Startup>();

This change is not necessary if your application only serves APIs. Now we are ready to rebuild our development image (execute the docker build command again) and run the container as follows.

Code Listing 30: Docker container run with volume

docker run -p 5000:5000 -p 5001:5001 -v $(pwd)/src:/app/src -t frontend

The option -v creates a shared volume between the local src folder and the src folder in the container. The $(pwd) instruction returns the current path, which we use to build the required absolute source path. The destination path must be absolute as well (/app/src, not just /src, even though app is the current working directory). Now if you change something in the local src folder, the same changes are available in the container, which is very useful during development!
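If you are following along on Windows, note that $(pwd) is Unix shell syntax, not a Docker option. In PowerShell you can use ${PWD} to the same effect, while in the classic command prompt %cd% expands to the current directory. For example, in PowerShell:

docker run -p 5000:5000 -p 5001:5001 -v ${PWD}/src:/app/src -t frontend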

Add containers to your project

Now we are ready to add other containers to our project. For example, we can add a database like SQL Server or MongoDB to save our data, or add a cache service like Redis to improve the performance of the read data operations. We can add advanced search functionality with an Elasticsearch container and decouple communication using a message broker like RabbitMQ. We have endless possibilities, thanks to Docker.

Is this too complicated? Let's take it one step at a time. Suppose we want to add SQL Server to our project. The good news is that SQL Server is also available for Linux, starting from the 2017 edition, so we can use it with both Windows and Linux containers. We can download the correct image using the following command.

Code Listing 31: SQL Server Docker image pull

docker pull mcr.microsoft.com/mssql/server

To run this image, we need to specify some parameters to configure the SQL Server instance correctly:

  • ACCEPT_EULA=Y: Accept the end-user license agreement.
  • SA_PASSWORD=<your_strong_password>: Specify the password for the SA user; it must be at least eight characters long and include uppercase letters, lowercase letters, base-10 digits, and/or non-alphanumeric symbols.
  • MSSQL_PID=<your_product_id | edition_name>: Specify which edition we want to use; the default value is the Developer edition.

To pass these settings with the docker run command, you can use the -e option. We can also give the container a custom name to simplify the connection of the application with the database; to do this, we use the option --name, like in the following command.

Code Listing 32: SQL Server Docker container run

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Password_123' -e 'MSSQL_PID=Express' -p 1433:1433 --name sqlserver mcr.microsoft.com/mssql/server

Now we have a SQL Server container named sqlserver, which exposes an instance of SQL Server Express Edition on port 1433. We can connect to this container with the user SA using the password Password_123.

To use SQL Server with the ASP.NET Core application, we need to connect our development container to the sqlserver container. To do this, we can use the option --link.

Code Listing 33: Front-end Docker container run with the link to the database container

docker run -p 5000:5000 -p 5001:5001 -v $(pwd)/src:/app/src -t --link sqlserver --name frontend frontend

Now you can use sqlserver as a server in the connection string block of the appsettings configuration file.

Code Listing 34: SQL Server connection string

"ConnectionStrings": {

    "MyAppDbContext""Server=sqlserver; Database=myapp; User=sa;   Password=Password_123; MultipleActiveResultSets=true"

}

We can use this connection string with Entity Framework Core and add the following rows to the ConfigureServices method of the Startup class.

Code Listing 35: Entity Framework Core configuration

services.AddDbContext<MyAppDbContext>(o =>
    o.UseSqlServer(Configuration.GetConnectionString("MyAppDbContext")));
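Note that UseSqlServer and the dotnet ef commands used later require the Entity Framework Core packages. If your project doesn't reference them yet, you can add them with the .NET CLI (on .NET Core 3.x, the dotnet ef tool must additionally be installed with dotnet tool install --global dotnet-ef):

dotnet add package Microsoft.EntityFrameworkCore.SqlServer
dotnet add package Microsoft.EntityFrameworkCore.Design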

For our example, we can add a basic Customer class to the model, with some properties and their constraints, using data annotations.

Code Listing 36: The Customer class of the application model

public class Customer
{
    public int Id { get; set; }

    [Required]
    [StringLength(200)]
    public string Name { get; set; }

    [Required]
    [StringLength(16)]
    public string VAT { get; set; }

    public bool Enabled { get; set; }
}

The Entity Framework DbContext, in our case MyAppDbContext, is very simple. We override OnModelCreating to seed the database with some sample data.

Code Listing 37: The application DbContext

public class MyAppDbContext : DbContext
{
    public MyAppDbContext(DbContextOptions<MyAppDbContext> options)
        : base(options) { }

    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Customer>().HasData(
            new Customer()
            {
                Id = 1,
                Name = "Microsoft",
                VAT = "IE8256796U",
                Enabled = true
            },
            new Customer()
            {
                Id = 2,
                Name = "Google",
                VAT = "IE6388047V",
                Enabled = false
            });
    }
}

Yes, it sounds good to have these customers. As you probably already know, initializing the database with Entity Framework requires the generation of a migration model, which you can scaffold from the code by executing the following command from the terminal, in the project root.

Code Listing 38: Command to generate Entity Framework initial migration

dotnet ef migrations add InitialCreate

This command creates a Migrations folder, which you have to move into the src folder: in this way, it will be copied into the container. Now you need to connect to the bash shell of the frontend container using the docker exec command.

Code Listing 39: Docker command to connect with the container bash shell

docker exec -it frontend bash

Now you are in the frontend container linked to the sqlserver container, so you can generate the database from the migration by using the following command from the app folder.

Code Listing 40: Entity Framework command to update the target database

dotnet ef database update

If everything works fine, you’ll see the following result.


Figure 13: Database update command output

Now we only need to add a CustomersController to the project to test the communication with the database.

Code Listing 41: Application MVC customer’s controller

public class CustomersController : Controller
{
    private readonly MyAppDbContext context = null;

    public CustomersController(MyAppDbContext context)
    {
        this.context = context;
    }

    public IActionResult Index()
    {
        var customers = this.context.Customers.ToList();
        return View(customers);
    }
}

Add an Index.cshtml to the Views/Customers folder to show the customer list.

Code Listing 42: Application customer’s list view

@model IEnumerable<frontend.src.Models.Data.Customer>

<table class="table">
    <tr>
        <th>Id</th>
        <th>Name</th>
        <th>VAT</th>
        <th>Enabled</th>
    </tr>
    @foreach (var item in Model)
    {
        <tr>
            <td>@item.Id</td>
            <td>@item.Name</td>
            <td>@item.VAT</td>
            <td>@item.Enabled</td>
        </tr>
    }
</table>

Finally, this is the result of all our efforts.


Figure 14: Application customer’s list output

It's time to add some optimizations and considerations to this configuration. The first observation is about the option --link. It is straightforward and works fine in our scenario, but its use is deprecated because of its limitations. With the --link option, Docker sets up environment variables in the target container with the IP address and port numbers of the linked container, so if I change something in the linked container (for example, updating it to a new image version), all my links break. Another big problem with linking is that links are deleted when you delete the container.

The right solution to connect containers is the creation of a network with the command docker network create. In our case, we can execute the following command.

Code Listing 43: Docker command to create a network

docker network create myapp-network

Now we can add all the containers we want to this network simply using the option --net. In our case, we can run the containers as follows.

Code Listing 44: Docker commands to run containers with an existing network

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Password_123' -e 'MSSQL_PID=Express' -p 1433:1433 --name sqlserver --net myapp-network mcr.microsoft.com/mssql/server

docker run -p 5000:5000 -p 5001:5001 -v $(pwd)/src:/app/src -t --name frontend --net myapp-network frontend

If you are lazy like me, you probably won't want to execute the database update command from the container shell every time you run the frontend container. The best way to automate this task is to create a startup script and execute it when the container starts, instead of the CMD we used.

First, create a file in the frontend folder named startup.sh and insert in it the two commands to execute as the container starts.

Code Listing 45: Startup script for the front-end entry point

dotnet ef database update
dotnet watch run --urls "http://0.0.0.0:5000;https://0.0.0.0:5001"

We need to add the executing permission to the script; otherwise, we will receive a permission-denied error.

Code Listing 46: Command to add execution permission to the startup script

chmod +x startup.sh

You can also add this command in the Dockerfile with a RUN command if you want, but it is not necessary. Now we can change the Dockerfile as follows.

Code Listing 47: Dockerfile with ENTRYPOINT instruction

FROM microsoft/dotnet:sdk
RUN mkdir app
WORKDIR /app
COPY . .
RUN dotnet restore
# CMD dotnet watch run --urls "http://0.0.0.0:5000;https://0.0.0.0:5001"
ENTRYPOINT ./startup.sh

Note that we use the instruction ENTRYPOINT instead of CMD; the previous CMD is left in the file for comparison (the symbol # indicates a comment, so the CMD row will not be executed). ENTRYPOINT is the best practice when you want to start a container with a script, and when you want to use a container as an executable. If you rebuild the image (docker build -t frontend .) and recreate the container, you can watch the result in your terminal.
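As a side note, Docker also supports the exec form of this instruction, ENTRYPOINT ["./startup.sh"], which runs the script directly instead of wrapping it in /bin/sh -c. If you use the exec form, make sure the script starts with a shebang line such as #!/bin/bash.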

There is another important consideration for the SQL Server container. If you connect to its shell (docker exec -it sqlserver bash) and navigate to the /var/opt/mssql/data folder, you can see the database files attached to the SQL Server instance.


Figure 15: SQL Server database files

Obviously, if you delete the container, your data will be lost. For development purposes this isn't a problem, and you can work around it by creating a volume for this folder or by attaching external files to the instance. In production, instead, you must save these files in a separate volume, but we will address this topic when we talk about Kubernetes.
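A minimal sketch of the volume approach: sqlserver-data is an arbitrary named volume that Docker creates on first use and mounts over the SQL Server data folder, so the database files survive the container.

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Password_123' -e 'MSSQL_PID=Express' -p 1433:1433 -v sqlserver-data:/var/opt/mssql --name sqlserver mcr.microsoft.com/mssql/server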

Try to imagine a scenario with more than two containers: executing the commands to run all containers in the right order, with all the necessary parameters, can be very tedious. You also need to remember all this information or read it in the project documentation. The good news is that Docker provides a fantastic tool for running our container ecosystem. The tool is Docker Compose—let’s look at how to use it to improve our daily work.

Run your container with Docker Compose

Docker Compose is a tool that can execute containers from a YAML configuration file. YAML is a format designed, like XML, to be easy for humans to read, but with a minimal syntax to learn.

To understand the simplicity of the YAML format, let's create a file named docker-compose.yml in the myapp folder. The first row of the script declares the version of the Compose configuration file that we want to use; version 3 is the most recent.

Code Listing 48: Docker Compose version

version: '3'

Now we need to declare the services that compose the ecosystem, in our case only two: the front end and the database. In YAML, each configuration block is delimited by indentation; to define the two services, frontend and sqlserver, you need the following block.

Code Listing 49: Docker Compose services

services:
    frontend:
    sqlserver:

For each service, we can specify the image (or the Dockerfile path used to build the image), the ports to map to the container, and, optionally, the volumes and environment variables. You can also specify that a service depends on another service, so that Docker Compose can run the containers in the right order. For example, the configuration for our frontend is the following.

Code Listing 50: Docker Compose front end configuration

frontend:
    build: ./frontend
    ports:
      - 5000:5000
      - 5001:5001
    volumes:
      - ./frontend/src:/app/src
    depends_on:
      - sqlserver

We are telling the composer how to create the service from an image that it has to build from the Dockerfile (omitted because it has the standard name) in the folder ./frontend. When the image is built, Compose can run the container, exposing ports 5000 and 5001 and creating a volume between the ./frontend/src host folder and the /app/src container folder. We specify that this container depends on the service sqlserver, so the system has to start the sqlserver service first, and the frontend afterward.

The sqlserver service uses the image mcr.microsoft.com/mssql/server, exposes the port 1433, and sets the environment variables to configure SQL Server.

Code Listing 51: Docker Compose database configuration

sqlserver:
    image: mcr.microsoft.com/mssql/server
    ports:
      - 1433:1433
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Password_123"
      MSSQL_PID: "Express"
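Assembled from the fragments above, the complete docker-compose.yml is the following.

version: '3'

services:
  frontend:
    build: ./frontend
    ports:
      - 5000:5000
      - 5001:5001
    volumes:
      - ./frontend/src:/app/src
    depends_on:
      - sqlserver
  sqlserver:
    image: mcr.microsoft.com/mssql/server
    ports:
      - 1433:1433
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Password_123"
      MSSQL_PID: "Express"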

Now we are ready to compose our application using Docker Compose. From the terminal, in the myapp folder, execute the following command.

Code Listing 52: Docker Compose execution command

docker-compose up

The first time you execute this command, you can observe some preliminary operations whose results will be cached to speed up future executions. The first one is the creation of the front-end image.


Figure 16: Docker Compose execution output

In our case, the image is tagged as myapp_frontend:latest. Just after this operation, the composer creates the containers myapp_sqlserver_1 and myapp_frontend_1, and attaches the database to the frontend through a default network named myapp_default, created for the occasion.


Figure 17: Docker Compose containers creation

When the myapp_sqlserver_1 container is up, the entry point of the frontend is executed, so Entity Framework creates the database and seeds it with the sample data. Once this task is complete, the frontend starts.


Figure 18: Docker Compose front-end container execution output

If you navigate to the address https://localhost:5001/Customers, you can see the same result as with the manual execution of our application. Very convenient, don't you think? Now, if you want to interrupt the application execution, you only have to press Ctrl+C. The interruption doesn't delete the containers, so if you want to restart the application, you can re-execute the docker-compose up command.

If you are not interested in the interactive mode, you can also execute the command docker-compose start.


Figure 19: docker-compose start command output

In this case, you can stop the execution with the command docker-compose stop.


Figure 20: docker-compose stop command output

If you want to delete the created containers (not the images), you can use the command docker-compose down. In this case, the default network created for the communication between the services will also be deleted.


Figure 21: docker-compose down command output

Now that our development environment is configured and convenient to use, let's get ready for the next stage: preparing the images for deployment.

Create the final image for publication

When your code is ready for deployment, you can prepare the image for publication on the target registry. Docker Hub is one example, but you can also create your own registry or use a private registry. Azure, for example, provides the Azure Container Registry service to publish your images.

The final image doesn't run the application the way the development image does (dotnet watch run); instead, we need to publish the application with the command dotnet publish, probably without debugging support files, and into a specific directory. In our case, the right command is the following.

Code Listing 53: Command to publish a .NET Core application

dotnet publish -c Release -o out

Therefore, we need to proceed in two steps: publish our application, and then execute the publication result. The publish command creates a DLL with the name of the project (frontend.dll), so the right command to execute the frontend is the following.

Code Listing 54: Command to execute a published .NET Core application

dotnet frontend.dll

As you might recall, to create the image for development, we started from the image microsoft/dotnet:sdk, which contains the whole .NET software development kit needed for compilation. Executing a published application doesn't require the whole SDK; you only need the ASP.NET Core runtime, available in the image microsoft/dotnet:aspnetcore-runtime.

Remember, we need to write a specific Dockerfile for the final image. In the frontend folder, create a file named Prod.Dockerfile and add the following rows.

Code Listing 55: Final image Dockerfile build step

#step 1: build
FROM microsoft/dotnet:sdk AS build-stage
RUN mkdir app
WORKDIR /app

The instruction FROM is the same as in the development image, but this time we also use the instruction AS, which permits us to give this first stage a name, build-stage, to remind us that this is the step where we build the application. After creating the app folder and setting the working directory, in the development Dockerfile we copied our application files onto the image; now we want to do something different.

Code Listing 56: Files copy and dependencies restore of the final image Dockerfile build step

COPY frontend.csproj .
RUN dotnet restore
COPY ./src .

First, we want to copy only the project file (frontend.csproj), which contains the project configuration needed to run the command dotnet restore, which downloads all the project dependencies from NuGet. After the restore, we copy the ./src folder into the app folder. This change permits us to effortlessly resolve the problem of the default Views and wwwroot paths in ASP.NET Core: these folders are expected in the project root folder by default, while in our project they live in the src folder. If you remember, we solved this problem in the development image by changing the content root when creating the web host builder (Code Listing 29).

When we publish the application, the Views folder is compiled into a DLL, in our case named frontend.Views.dll, while the wwwroot folder is copied without changes. At runtime, when we execute the compiled application, ASP.NET Core doesn't find our views if they are in a different folder. Moreover, the wwwroot folder will not be copied, for the same reason.

Copying the src folder into the destination app folder solves the problem, but now our content root configuration no longer works, because in the published layout the src folder doesn't exist. We can solve this with a simple #if directive: the DEBUG symbol is defined only in the Debug configuration, while we publish with -c Release, so the content root change disappears from the published build.

Code Listing 57: Content root conditional configuration

WebHost.CreateDefaultBuilder(args)
#if DEBUG
    .UseContentRoot(
        Path.Combine(Directory.GetCurrentDirectory(), "src"))
#endif
    .UseStartup<Startup>();

Returning to our Dockerfile, we publish the application with the Release configuration in a specific folder named out.

Code Listing 58: .NET Core publish command in Dockerfile

RUN dotnet publish -c Release -o out

The Dockerfile syntax permits us to have more than one stage in a single file so that we can add the following statements to the Prod.Dockerfile.

Code Listing 59: Final image Dockerfile run step

#step 2: run
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-stage /app/out .
ENTRYPOINT dotnet frontend.dll

Starting from the aspnetcore-runtime image, we set the existing folder /app as the working directory and copy the content of /app/out (the publish artifacts) from the previous stage. Now we are ready to set the entry point of the new image.
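For reference, the complete Prod.Dockerfile, combining the two stages described above, is the following.

#step 1: build
FROM microsoft/dotnet:sdk AS build-stage
RUN mkdir app
WORKDIR /app
COPY frontend.csproj .
RUN dotnet restore
COPY ./src .
RUN dotnet publish -c Release -o out

#step 2: run
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-stage /app/out .
ENTRYPOINT dotnet frontend.dll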

Perfect! We are now ready to create our final image. From the terminal window, go to the frontend folder and execute the following command.

Code Listing 60: Docker final image build command

docker build -t frontend:v1 -f Prod.Dockerfile .

We have tagged our image with the name frontend:v1 so that the latest tag is not applied. Moreover, our Dockerfile has a nonstandard name, so we need to specify the -f parameter with the correct Dockerfile name. The result is the following.


Figure 22: Docker final image build command output

You can run the image with the following command.

Code Listing 61: Command to run a container from the final image

docker run -it -p 80:80 --name frontend frontend:v1

However, using docker-compose is more convenient. So let's duplicate the docker-compose.yml created previously, rename it to docker-compose.prod.yml, and change the frontend service as follows:

Code Listing 62: Docker Compose frontend service for the final image

frontend:
  build:
    context: ./frontend
    dockerfile: Prod.Dockerfile
  ports:
    - 80:80
  depends_on:
    - sqlserver

This example shows us that the build setting is not just the path of the Dockerfile, but the execution context of the build. So, if the Dockerfile name is not the standard one, you need to make both the execution context (./frontend) and the Dockerfile name (Prod.Dockerfile) explicit. In this case, the exposed port is 80, and we do not need a volume. We can start our containers with the following command.

Code Listing 63: Production Docker Compose execution

docker-compose -f docker-compose.prod.yml up

Now, if you open the browser and navigate to the http://localhost address, you can see that everything works fine. But if you navigate to the customers controller, you'll see the following error.

Database access error with the production configuration

Figure 23: Database access error with the production configuration

This error occurs because we never executed the database update from our migration scripts. But the question is: should we allow, in the production environment, the database to be updated by a migration launched from the frontend? The answer depends on your update strategy. In a real deployment pipeline, manual or automatic, you probably have a backup/snapshot task so you can restore the last version in case of problems with the new release. The best approach is to have a separate task in the deployment pipeline that launches a script, manually or automatically (it also depends on the target stage of your deployment), to update the database to the new version.

There are also other problems with executing the database update from an application container. For example, if your frontend is deployed in an environment that can automatically scale depending on the traffic (the cloud), each instance created will try to update the database. The official Microsoft documentation states the following:

Database migration should be done as part of deployment, and in a controlled way. Production database migration approaches include:

  • Using migrations to create SQL scripts and using the SQL scripts in deployment.
  • Running the dotnet ef database update from a controlled environment.

EF Core uses the __EFMigrationsHistory table to see if any migrations need to run. If the DB is up to date, no migration is run.

You can find the documentation here.

If you want to create a SQL script starting from the migration, you can execute the following command from the terminal (in the frontend folder).

Code Listing 64: SQL script generation from the Entity Framework migration

dotnet ef migrations script --idempotent --output "script.sql"

The --idempotent parameter generates a script that can be applied to a database at any migration state. The SQL Server image contains a tool named sqlcmd that allows us to execute SQL commands from the command line. If the database does not exist, you can create it with the following command.

Code Listing 65: Database creation with the docker exec command

docker exec -it myapp_sqlserver_1 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Password_123' -Q 'CREATE DATABASE [myapp]'

The parameter -S specifies the server, which in this case is localhost because we execute the command from the SQL Server container. The parameters -U and -P set the username and password to connect with the database. The parameter -Q sets the query that you want to execute.

To run the created script, we need to copy it into the container with the docker cp command.

Code Listing 66: SQL script copy to the container with the docker cp command

docker cp script.sql myapp_sqlserver_1:/script.sql

To execute the script, you can use the sqlcmd tool with the option -d to indicate the database (myapp in our case), and the script name with the parameter -i, as follows.

Code Listing 67: SQL script execution in the container with the docker exec command

docker exec -it myapp_sqlserver_1 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Password_123' -d myapp -i script.sql

Another way to do the same thing is to create a custom image from the SQL Server image with the script and the commands to execute it, but again, this is not the best practice for the production environment.

Speaking of the production environment, you need to make the main settings of your container configurable, like the database connection string or the execution environment.

The base template of ASP.NET Core configures only the development, staging, and production environments, but you can add all the stages you want: the value is stored in the environment variable named ASPNETCORE_ENVIRONMENT, so you can set any value you like. The good news is that when you publish an ASP.NET Core application with the Release configuration, ASPNETCORE_ENVIRONMENT is set to Production. With Docker, you can change this value by using the -e parameter with docker run, or with the environment block of the docker-compose syntax. Either way, this configuration doesn't impact the final image build.
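For example, in a compose file you could override it per service (Staging is just an illustrative value here):

frontend:
  environment:
    ASPNETCORE_ENVIRONMENT: "Staging"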

The database connection string is particularly interesting because it contains information that you probably would like to be able to change. You can make it configurable with the creation of an environment variable that contains the connection string value. We need to add the following row in Dockerfile and Prod.Dockerfile.

Code Listing 68: Docker environment variable creation for the SQL Server connection string

ENV SQLSERVER_CONNECTIONSTRING="Server=sqlserver; Database=myapp; User=sa; Password=Password_123; MultipleActiveResultSets=true;"

The instruction ENV declares an environment variable (in our case, SQLSERVER_CONNECTIONSTRING) and its default value. To use it, we only need to read the connection string from the environment variables in the ConfigureServices method of the Startup.cs file, and pass it to the MyAppDbContext configuration.

Code Listing 69: Retrieve connection string from the environment variable

var connectionString = Environment
    .GetEnvironmentVariable("SQLSERVER_CONNECTIONSTRING");

services.AddDbContext<MyAppDbContext>(
    o => o.UseSqlServer(connectionString));

From this moment on, we are able to change the connection string when executing a container from the frontend image.
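For example, to point a container at a different database server (myotherserver is a hypothetical host name):

docker run -p 5000:5000 -p 5001:5001 -e 'SQLSERVER_CONNECTIONSTRING=Server=myotherserver; Database=myapp; User=sa; Password=Password_123; MultipleActiveResultSets=true;' -t frontend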

OK, it's time to publish our image. First of all, we have to decide where to publish it. For our example, we can use Docker Hub, which requires an account (it's free); if you have downloaded the Docker tools from the official site, you already have one.


Figure 24: Docker Hub Repository creation

So, go to the Docker Hub and log in with your credentials. On the Repositories page, click Create a Repository, choose a name for the repository (myapp, in our case), and optionally add a description. Click Save to continue (Figure 24).

Your repository is now ready. From the terminal, log in to the Docker Hub with your credentials, using the docker login command.


Figure 25: Docker Hub login command

Before publishing, we need to rename our image to meet the registry requirements. Execute the following command from the terminal.

Code Listing 70: Docker command to change image tag

docker tag frontend:v1 apomic80/myapp:frontend-v1

The docker tag command permits us to rename the image from frontend:v1, created in the last build, to <account_username>/<repository_name>:<tag> (in our case, apomic80/myapp:frontend-v1). Now we can publish the image with the docker push command.
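In our case, the command is the following.

docker push apomic80/myapp:frontend-v1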


Figure 26: Docker push image execution output

From now on, we can deploy a frontend container everywhere!
