Thursday, June 25, 2020

Quick Reference – Azure Design Patterns – Messaging patterns

The distributed nature of cloud applications requires a messaging infrastructure that connects the components and services, ideally in a loosely coupled manner in order to maximize scalability. Asynchronous messaging is widely used and provides many benefits, but also brings challenges such as the ordering of messages, poison message management, idempotency, and more.

Competing Consumers pattern

Enable multiple concurrent consumers to process messages received on the same messaging channel. This allows a system to process multiple messages concurrently, optimizing throughput, improving scalability and availability, and balancing the workload.

Using a message queue to distribute work to instances of a service
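
As a minimal sketch (using Python's standard queue and threading modules rather than an Azure queue service), several consumers can compete for messages on a single in-process queue; each message is delivered to exactly one consumer:

```python
import queue
import threading

def start_competing_consumers(work_queue, results, n_consumers=3):
    """Spawn several consumers that compete for messages on one shared queue."""
    def consumer():
        while True:
            message = work_queue.get()   # each message goes to exactly one consumer
            if message is None:          # sentinel: shut this consumer down
                work_queue.task_done()
                return
            results.append(message * 2)  # stand-in for real message processing
            work_queue.task_done()

    threads = [threading.Thread(target=consumer) for _ in range(n_consumers)]
    for t in threads:
        t.start()
    return threads

work_queue = queue.Queue()
results = []
threads = start_competing_consumers(work_queue, results)
for msg in range(10):
    work_queue.put(msg)
for _ in threads:            # one shutdown sentinel per consumer
    work_queue.put(None)
for t in threads:
    t.join()
print(sorted(results))       # every message processed exactly once
```

The consumers share nothing but the channel, which is what lets instances be added or removed to match the load.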

Pipes and Filters pattern

Decompose a task that performs complex processing into a series of separate elements that can be reused. This can improve performance, scalability, and reusability by allowing task elements that perform the processing to be deployed and scaled independently.

Figure 2 - A solution implemented using pipes and filters
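
The idea can be sketched in a few lines of Python, with each filter implemented as a reusable generator stage (the stage names here are illustrative):

```python
def make_pipeline(*filters):
    """Compose independent filter stages into one pipeline over an iterable."""
    def pipeline(items):
        for f in filters:
            items = f(items)   # each filter consumes the previous stage's output
        return items
    return pipeline

# Reusable filter stages, each a generator transforming a stream of items.
def strip_blank(lines):
    return (line.strip() for line in lines if line.strip())

def to_upper(lines):
    return (line.upper() for line in lines)

process = make_pipeline(strip_blank, to_upper)
result = list(process(["  hello ", "", "world"]))
print(result)  # ['HELLO', 'WORLD']
```

Because each stage only depends on the shape of its input stream, stages can be reordered, reused in other pipelines, or (in a distributed setting) deployed and scaled independently.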

Priority Queue pattern

Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. This pattern is useful in applications that offer different service level guarantees to individual clients.

Figure 1 - Using a queuing mechanism that supports message prioritization
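
Python's standard queue.PriorityQueue gives a minimal in-process illustration of the idea (a real implementation would use a message broker that supports prioritization, or one queue per priority level):

```python
import queue

pq = queue.PriorityQueue()
# Lower number = higher priority; tuples sort by their first element.
pq.put((2, "standard-tier request"))
pq.put((1, "premium-tier request"))
pq.put((3, "free-tier request"))

processed = []
while not pq.empty():
    priority, request = pq.get()   # always returns the highest-priority item
    processed.append(request)
print(processed)  # premium first, free last, regardless of arrival order
```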

Queue-Based Load Leveling pattern

Use a queue that acts as a buffer between a task and a service it invokes in order to smooth intermittent heavy loads that can cause the service to fail or the task to time out. This can help to minimize the impact of peaks in demand on availability and responsiveness for both the task and the service.

Figure 1 - Using a queue to level the load on a service
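
A toy sketch of the leveling effect: a bursty producer fills the queue instantly, while the consuming service drains it at its own fixed rate (the rate of 5 per tick is an arbitrary assumption):

```python
import queue

buffer = queue.Queue()
# A bursty producer enqueues a spike of 20 requests at once...
for i in range(20):
    buffer.put(f"request-{i}")

# ...while the service drains them at its own steady pace.
SERVICE_RATE = 5  # requests the service can safely handle per tick
processed_per_tick = []
while not buffer.empty():
    batch = []
    for _ in range(min(SERVICE_RATE, buffer.qsize())):
        batch.append(buffer.get())
    processed_per_tick.append(len(batch))
print(processed_per_tick)  # the spike is spread over several steady ticks
```

The service never sees more than its sustainable rate; the queue absorbs the burst.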

Scheduler Agent Supervisor pattern

Coordinate a set of distributed actions as a single operation. If any of the actions fail, try to handle the failures transparently, or else undo the work that was performed, so the entire operation succeeds or fails as a whole. 

Figure 1 - The actors in the Scheduler Agent Supervisor pattern
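
A much-simplified sketch of the supervisor role: each step is retried a bounded number of times, and if a step still fails, previously completed steps are compensated in reverse order so the operation fails as a whole (the step and state names are invented for illustration):

```python
def run_with_supervisor(steps, state, max_retries=2):
    """Run each (name, action, compensate) step; retry failures, else undo all completed work."""
    completed = []
    for name, action, compensate in steps:
        for attempt in range(max_retries + 1):
            try:
                action(state)
                completed.append((name, compensate))
                break
            except Exception:
                if attempt == max_retries:
                    # Step failed permanently: undo completed work in reverse order.
                    for done_name, undo in reversed(completed):
                        undo(state)
                    return False
    return True

state = {"reserved": False, "charged": False}

def reserve(s): s["reserved"] = True
def unreserve(s): s["reserved"] = False
def charge(s): raise RuntimeError("payment service down")
def refund(s): s["charged"] = False

ok = run_with_supervisor(
    [("reserve", reserve, unreserve), ("charge", charge, refund)], state)
print(ok, state)  # operation failed as a whole; reservation was rolled back
```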

References

  • https://docs.microsoft.com/en-us/azure/architecture/patterns/competing-consumers
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/pipes-and-filters
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/priority-queue
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/queue-based-load-leveling
  • https://docs.microsoft.com/en-us/azure/architecture

Quick Reference – Azure Design Patterns – Data Management Patterns

Data management is the key element of cloud applications and influences most of the quality attributes. Data is typically hosted in different locations and across multiple servers for reasons such as performance, scalability, or availability, and this can present a range of challenges. For example, data consistency must be maintained, and data will typically need to be synchronized across different locations.

Cache-Aside pattern

Load data on demand into a cache from a data store. This can improve performance and also helps to maintain consistency between data held in the cache and data in the underlying data store.

Using the Cache-Aside pattern to store data in the cache
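
The read path of the pattern fits in a few lines; in this sketch a plain dict stands in for both the cache (e.g. Redis) and the backing data store:

```python
cache = {}
data_store = {"user:1": {"name": "Ada"}, "user:2": {"name": "Grace"}}
store_reads = 0  # counts how often we hit the (slow) data store

def get_with_cache_aside(key):
    """Return cached data, loading it from the store only on a miss."""
    global store_reads
    value = cache.get(key)
    if value is None:            # cache miss
        store_reads += 1
        value = data_store[key]  # read from the system of record
        cache[key] = value       # populate the cache for next time
    return value

get_with_cache_aside("user:1")
get_with_cache_aside("user:1")   # served from cache; no second store read
print(store_reads)  # 1
```

The write path (not shown) invalidates or updates the cached entry when the store changes, which is what keeps cache and store consistent.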

Command and Query Responsibility Segregation (CQRS) pattern

Segregate operations that read data from operations that update data by using separate interfaces. This pattern can maximize performance, scalability, and security; support evolution of the system over time through higher flexibility; and prevent update commands from causing merge conflicts at the domain level.

A basic CQRS architecture
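
A minimal in-process sketch of the segregation, with invented CommandSide and QuerySide classes standing in for the separate write and read interfaces:

```python
class CommandSide:
    """Write model: validates and applies updates only."""
    def __init__(self, store):
        self._store = store
    def rename_product(self, product_id, new_name):
        if not new_name:
            raise ValueError("name must not be empty")
        self._store[product_id]["name"] = new_name

class QuerySide:
    """Read model: exposes read-only, query-shaped views of the data."""
    def __init__(self, store):
        self._store = store
    def product_names(self):
        return sorted(p["name"] for p in self._store.values())

store = {1: {"name": "Widget"}, 2: {"name": "Gadget"}}
commands, queries = CommandSide(store), QuerySide(store)
commands.rename_product(1, "Deluxe Widget")
print(queries.product_names())  # ['Deluxe Widget', 'Gadget']
```

In a full implementation the two sides often use separate data stores shaped for their workload, synchronized by events; here they share one store to keep the sketch small.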

Event Sourcing pattern

Use an append-only store to record actions taken on data, rather than the current state, and use the store to materialize the domain objects. In complex domains this can avoid synchronizing the data model and the business domain; improve performance, scalability, and responsiveness; provide consistency; and provide audit history to enable compensating actions.

An overview and example of the Event Sourcing pattern
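
A toy sketch using a plain list as the append-only store: the current balance is never stored, only materialized by replaying events (the account-domain events are illustrative):

```python
events = []  # append-only store: the system of record

def append_event(event_type, amount):
    events.append({"type": event_type, "amount": amount})

def materialize_balance():
    """Replay the full event stream to rebuild the current state."""
    balance = 0
    for e in events:
        if e["type"] == "deposited":
            balance += e["amount"]
        elif e["type"] == "withdrawn":
            balance -= e["amount"]
    return balance

append_event("deposited", 100)
append_event("withdrawn", 30)
append_event("deposited", 5)
print(materialize_balance())  # 75 -- derived from history, never stored directly
```

Because nothing is ever overwritten, the store doubles as a complete audit history, and compensating events can correct mistakes without losing what happened.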

Index Table pattern

Create indexes over the fields in data stores that are frequently referenced by queries. This pattern can improve query performance by allowing applications to more quickly locate the data to retrieve from a data store.

Figure 3 - Data is referenced by each index table
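
In a store that only indexes the primary key, the application can maintain its own index table over a frequently queried field; a sketch with in-memory dicts (the customer data is invented):

```python
# Fact table, keyed by primary key only.
customers = {
    1: {"name": "Ada", "town": "London"},
    2: {"name": "Grace", "town": "Arlington"},
    3: {"name": "Edsger", "town": "London"},
}

# Secondary index table over a frequently queried field.
town_index = {}
for key, row in customers.items():
    town_index.setdefault(row["town"], []).append(key)

def customers_in_town(town):
    """Look up by town via the index instead of scanning every row."""
    return [customers[k]["name"] for k in town_index.get(town, [])]

print(customers_in_town("London"))  # ['Ada', 'Edsger']
```

The trade-off is that every write to the fact table must also update the index table, so reads get faster at the cost of slightly more expensive writes.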

Materialized View pattern

Generate prepopulated views over the data in one or more data stores when the data isn’t ideally formatted for required query operations. This can help support efficient querying and data extraction, and improve application performance.

Figure 1 shows an example of how the Materialized View pattern might be used
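
A small sketch: the source data is order-shaped, while the query needs per-customer totals, so a view is prepopulated once and queried cheaply (how and when the view is refreshed is out of scope here):

```python
# Source data, normalized and awkward for the query we need.
orders = [
    {"customer": "Ada", "item": "widget", "qty": 2, "unit_price": 10},
    {"customer": "Ada", "item": "gadget", "qty": 1, "unit_price": 25},
    {"customer": "Grace", "item": "widget", "qty": 3, "unit_price": 10},
]

def build_spend_view(orders):
    """Precompute total spend per customer; rebuild when source data changes."""
    view = {}
    for o in orders:
        view[o["customer"]] = view.get(o["customer"], 0) + o["qty"] * o["unit_price"]
    return view

spend_view = build_spend_view(orders)
print(spend_view["Ada"])  # answered from the view, no per-query aggregation
```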

Sharding pattern

Divide a data store into a set of horizontal partitions or shards. This can improve scalability when storing and accessing large volumes of data.

Figure 2 - Storing sequential sets (ranges) of data in shards
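
A sketch of a hash-based shard map routing keys to partitions (note the caveat in the comment: Python's built-in hash is randomized per process, so a production shard map would use a stable hash such as one from hashlib):

```python
NUM_SHARDS = 4
shards = [{} for _ in range(NUM_SHARDS)]  # each dict stands in for one data store

def shard_for(key):
    """Hash-based shard map: route each key to one horizontal partition."""
    return hash(key) % NUM_SHARDS  # hash() is per-process; use a stable hash in production

def put(key, value):
    shards[shard_for(key)][key] = value

def get(key):
    return shards[shard_for(key)][key]

for i in range(100):
    put(f"tenant-{i}", {"id": i})
print(get("tenant-42")["id"])       # routed back to the same shard it was written to
print(sum(len(s) for s in shards))  # 100 keys spread across 4 shards
```

Range-based and lookup-table shard maps (as in the figure above) trade even distribution for cheaper range queries and easier rebalancing.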

Static Content Hosting pattern

Deploy static content to a cloud-based storage service that can deliver it directly to the client. This can reduce the need for potentially expensive compute instances.

Figure 1 - Delivering static parts of an application directly from a storage service

Valet Key pattern

Use a token that provides clients with restricted direct access to a specific resource, in order to offload data transfer from the application. This is particularly useful in applications that use cloud-hosted storage systems or queues, and can minimize cost and maximize scalability and performance.

Figure 1 - Overview of the pattern
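
In Azure this is typically implemented with shared access signatures (SAS). The sketch below mimics the idea with a hand-rolled HMAC-signed token, purely as an illustration of a restricted, time-limited grant, not a substitute for SAS:

```python
import hmac, hashlib, time

SECRET = b"app-held-signing-key"  # held by the application, never given to clients

def issue_valet_key(resource, expires_at):
    """Sign a token granting time-limited access to exactly one resource."""
    payload = f"{resource}|{int(expires_at)}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def storage_accepts(token, resource, now=None):
    """The storage service validates the token itself; the app is not involved."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    res, _, exp = payload.partition("|")
    now = time.time() if now is None else now
    return hmac.compare_digest(sig, expected) and res == resource and now < int(exp)

token = issue_valet_key("container/report.pdf", time.time() + 300)
print(storage_accepts(token, "container/report.pdf"))   # True: valid, in scope
print(storage_accepts(token, "container/secrets.txt"))  # False: wrong resource
```

The application only does the cheap signing step; the data transfer itself flows directly between the client and storage.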

Quick Reference – Azure Design Patterns – Availability Patterns

Availability defines the proportion of time that the system is functional and working. It will be affected by system errors, infrastructure problems, malicious attacks, and system load. It is usually measured as a percentage of uptime. Cloud applications typically provide users with a service level agreement (SLA), which means that applications must be designed and implemented in a way that maximizes availability.

Health Endpoint Monitoring pattern

Implement health monitoring by sending requests to an endpoint on the application. The application should perform the necessary checks, and return an indication of its status.

Overview of the pattern
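
The check logic behind such an endpoint can be sketched independently of the web framework; a handler would simply return this status code and body (the dependency checks here are stand-ins):

```python
def check_health(checks):
    """Run each dependency check; report overall status plus per-check detail."""
    detail = {}
    for name, check in checks.items():
        try:
            check()
            detail[name] = "ok"
        except Exception as exc:
            detail[name] = f"failed: {exc}"
    status = 200 if all(v == "ok" for v in detail.values()) else 503
    return status, detail

def failing_storage_check():
    raise IOError("timeout")  # simulates an unreachable dependency

checks = {
    "database": lambda: None,  # pretend this pings the database
    "blob_storage": failing_storage_check,
}
status, detail = check_health(checks)
print(status, detail)  # 503 with per-dependency detail for the monitoring agent
```

Returning 503 (rather than 200 with an error body) lets plain HTTP monitors and load balancers react without parsing the response.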

Queue-Based Load Leveling pattern

Use a queue that acts as a buffer between a task and a service it invokes in order to smooth intermittent heavy loads that can cause the service to fail or the task to time out. This can help to minimize the impact of peaks in demand on availability and responsiveness for both the task and the service.

Figure 1 - Using a queue to level the load on a service

Throttling pattern

Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. This can allow the system to continue to function and meet service level agreements, even when an increase in demand places an extreme load on resources.

Figure 1 - Graph showing resource use against time for applications running on behalf of three users
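
One common throttling mechanism is a token bucket, which allows short bursts while capping the sustained rate; a minimal sketch (the rate and capacity values are arbitrary):

```python
import time

class TokenBucket:
    """Allow at most `rate` operations/second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last call, up to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should reject, degrade, or defer the request

bucket = TokenBucket(rate=10, capacity=5)
results = [bucket.allow() for _ in range(8)]  # burst of 8 back-to-back requests
print(results.count(True))  # only the burst capacity is admitted; the rest are throttled
```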

Quick Reference – Azure Design Patterns – Performance and Scalability patterns

Performance is an indication of the responsiveness of a system to execute any action within a given time interval, while scalability is the ability of a system either to handle increases in load without impact on performance or for the available resources to be readily increased. Cloud applications typically encounter variable workloads and peaks in activity.

Cache-Aside pattern

Load data on demand into a cache from a data store. This can improve performance and also helps to maintain consistency between data held in the cache and data in the underlying data store.

Using the Cache-Aside pattern to store data in the cache

Command and Query Responsibility Segregation (CQRS) pattern

Segregate operations that read data from operations that update data by using separate interfaces. This can maximize performance, scalability, and security; it also supports evolution of the system over time through higher flexibility, and prevents update commands from causing merge conflicts at the domain level.

A basic CQRS architecture

Event Sourcing pattern

Instead of storing just the current state of the data in a domain, use an append-only store to record the full series of actions taken on that data. The store acts as the system of record and can be used to materialize the domain objects. 

An overview and example of the Event Sourcing pattern

Index Table pattern

Create indexes over the fields in data stores that are frequently referenced by queries. This pattern can improve query performance by allowing applications to more quickly locate the data to retrieve from a data store.

Figure 3 - Data is referenced by each index table

Materialized View pattern

Generate prepopulated views over the data in one or more data stores when the data isn’t ideally formatted for required query operations. This can help support efficient querying and data extraction, and improve application performance.

Figure 1 shows an example of how the Materialized View pattern might be used

Priority Queue pattern

Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those with a lower priority. This pattern is useful in applications that offer different service level guarantees to individual clients.

Figure 1 - Using a queuing mechanism that supports message prioritization

Queue-Based Load Leveling pattern

Use a queue that acts as a buffer between a task and a service it invokes in order to smooth intermittent heavy loads that can cause the service to fail or the task to time out. This can help to minimize the impact of peaks in demand on availability and responsiveness for both the task and the service.

Figure 1 - Using a queue to level the load on a service

Sharding pattern

Divide a data store into a set of horizontal partitions or shards. This can improve scalability when storing and accessing large volumes of data.

Figure 1 - Sharding tenant data based on tenant IDs

Static Content Hosting pattern

Deploy static content to a cloud-based storage service that can deliver it directly to the client. This can reduce the need for potentially expensive compute instances.

Figure 1 - Delivering static parts of an application directly from a storage service

Throttling pattern

Control the consumption of resources used by an instance of an application, an individual tenant, or an entire service. This can allow the system to continue to function and meet service level agreements, even when an increase in demand places an extreme load on resources.

Figure 1 - Graph showing resource use against time for applications running on behalf of three users

References

  • https://docs.microsoft.com/en-us/azure/architecture/patterns/cache-aside
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/cqrs
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/index-table
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/materialized-view
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/priority-queue
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/queue-based-load-leveling
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/sharding
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/static-content-hosting
  • https://docs.microsoft.com/en-us/azure/architecture/patterns/throttling

Exam AZ-400 DevOps

What is DevOps?

We previously wrote an article about this topic that may help answer that question:

Is the exam difficult?

Yes. It covers a wide range of topics, requires multiple skills, and is considered very difficult.

Which books would you recommend for this exam?

You can read the following books:

Are there some courses for this exam?

Yes, the following courses will be useful:

Could you provide some links to study, for this exam?

Yes. The following links will be useful:

  • Design a DevOps Strategy
  • Implement DevOps Development Processes
  • Implement Continuous Integration
  • Implement Continuous Delivery
  • Implement Dependency Management
  • Implement Application Infrastructure
  • Implement Continuous Feedback