Saturday, July 4, 2020

Microservices Design Guide


Microservices are a trending topic among software engineers today. Let’s look at how we can build truly modular, business-agile IT systems with the microservices architectural style.

Microservices Concept

Microservices architecture consists of collections of light-weight, loosely-coupled services. Each service implements a single business capability. Ideally, these services should be cohesive enough to develop, test, release, deploy, scale, integrate, and maintain independently.

Formal definition: “The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.” — James Lewis and Martin Fowler
  • Each service is a light-weight, independent, and loosely-coupled business unit.
  • Each service has its own codebase, managed and developed by a small team (mostly in an agile environment).
  • Each service is responsible for a single part of the functionality (business capability), and does it well.
  • Each service can pick the best technology stack for its use cases (no need to stick to one framework throughout the entire application).
  • Each service has its own DevOps plan (test, release, deploy, scale, integrate, and maintain independently).
  • Each service is deployed in a self-contained environment.
  • Services communicate with each other by using well-defined APIs (smart endpoints) and simple protocols like REST over HTTP (dumb pipes); a minimal sketch follows this list.
  • Each service is responsible for persisting its own data and keeping external state (only when multiple services consume the same data should such situations be handled in a common data layer).
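
To make “smart endpoints and dumb pipes” concrete, here is a minimal sketch of a single-capability service, written with Vert.x (one of the frameworks discussed later in this guide). The port, path, and payload are invented for illustration:

    import io.vertx.core.Vertx;
    import io.vertx.core.json.JsonObject;

    public class ProductService {
      public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // One well-defined endpoint exposing one business capability
        vertx.createHttpServer()
          .requestHandler(req -> {
            if (req.path().equals("/products/42")) {
              req.response()
                .putHeader("Content-Type", "application/json")
                .end(new JsonObject().put("id", 42).put("name", "Widget").encode());
            } else {
              req.response().setStatusCode(404).end();
            }
          })
          .listen(8080); // the service runs in its own process, on its own port
      }
    }

Consumers (or an API Gateway) call this endpoint over plain HTTP; no shared libraries or schemas are involved.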

Microservices are made to scale large systems. They are great enablers for continuous integration and delivery too.

The Scale Cube: 3-Dimensional Model for Scalability (Image: Nginx Blog)
  • Independent scaling — Microservices architecture supports the Scale Cube concept described in the excellent book The Art of Scalability. When developing microservices to achieve functional decomposition, the application automatically scales via the Y axis. When consumption is high, microservices can scale via the X axis by cloning with more CPU and memory. For distributing data across multiple machines, large databases can be separated (sharded) into smaller, faster, more easily managed parts, enabling Z axis scaling.
  • Independent releases and deployments — Bug fixes and feature releases are more manageable and less risky, with microservices. You can update a service without redeploying the entire application, and roll back or roll forward an update if something goes wrong.
  • Independent development — Each service has its own codebase, which is developed, tested, and deployed by a small focused team. Developers can focus on one service and relatively-small scope only. This results in enhanced productivity, project velocity, continuous innovation, and quality at source.
  • Graceful degradation — If a service goes down, its impact won’t propagate to the rest of the application and result in a catastrophic failure of the system, allowing a certain degree of anti-fragility to manifest.
  • Decentralized governance — Developers are free to pick the technology stacks and make design standards and implementation decisions that are best suited for their service. Teams do not have to get penalized due to past technology decisions.

Independent services alone cannot form a system. For the true success of microservices architecture, significant investments are required to handle cross-system concerns like:

  • Service replication — a mechanism by which services can easily scale based upon metadata
  • Service registration and discovery — a mechanism that enables service lookup and finds the endpoint for each service (see the sketch after this list)
  • Service monitoring and logging — a mechanism to aggregate logs from different microservices and provide consistent reporting
  • Resiliency — a mechanism for services to automatically take corrective actions during failures
  • DevOps — a mechanism for handling continuous integration and deployment (CI and CD)
  • API gateway — a mechanism for providing an entry point for clients
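
As a sketch of the service registration and discovery concern above, here is how it might look with the vertx-service-discovery module (a sketch, assuming Vert.x with that module on the classpath; the service name, host, and port are made up):

    import io.vertx.core.Vertx;
    import io.vertx.servicediscovery.Record;
    import io.vertx.servicediscovery.ServiceDiscovery;
    import io.vertx.servicediscovery.types.HttpEndpoint;

    public class DiscoveryExample {
      public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        ServiceDiscovery discovery = ServiceDiscovery.create(vertx);

        // Registration: a service publishes its endpoint metadata on startup
        Record record = HttpEndpoint.createRecord("product-service", "localhost", 8080, "/products");
        discovery.publish(record, ar -> {
          if (ar.succeeded()) {
            // Lookup: a consumer resolves the endpoint by name instead of hard-coding it
            discovery.getRecord(r -> r.getName().equals("product-service"), found -> {
              if (found.succeeded() && found.result() != null) {
                System.out.println("Found endpoint: " + found.result().getLocation());
              }
            });
          }
        });
      }
    }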

Middleware & Design Patterns

API Gateway Style Microservices Architecture (Image: Microsoft Azure Docs) — this is the most common design pattern used in microservices. The API Gateway is an intermediary with minimal routing capabilities, acting only as a ‘dumb pipe’ with no business logic inside. In general, an API Gateway allows you to consume a managed API over REST/HTTP. Other types of microservices integration patterns: Point-to-point style (invoking services directly from the client-side app) and Message Broker style (implementing asynchronous messaging).

API Gateway acts as a single entry point for all clients as well as an edge service for exposing microservices to the outside world as managed APIs. It sounds like a reverse proxy, but also has additional responsibilities like simple load-balancing, authentication & authorization, failure handling, auditing, protocol translations, and routing. The development team can select one of the following approaches to implement an API Gateway.

  • Build it programmatically — to have better customizations and control
  • Deploy an existing API gateway product — to save initial development time and use advanced built-in features (Cons: Such products are vendor-dependent and not completely free. Configurations and maintenance often can be tedious and time-consuming)

Some design patterns that explain API Gateway behaviour are as follows (Read Design patterns for microservices).

  • Gateway Aggregation — aggregate multiple client requests (usually HTTP requests) targeting multiple internal microservices into a single client request, reducing chattiness and latency between consumers and services (see the sketch after this list).
  • Gateway Offloading — enable individual microservices to offload their shared service functionality to the API gateway level. Such cross-cutting functionalities include authentication, authorization, service discovery, fault tolerance mechanisms, QoS, load balancing, logging, analytics etc.
  • Gateway Routing (layer 7 routing, usually HTTP requests) — route requests to the endpoints of internal microservices using a single endpoint, so that consumers don’t need to manage many separate endpoints.
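
As referenced above, a minimal sketch of Gateway Aggregation, assuming Vert.x 4 with the vertx-web and vertx-web-client modules; the two internal services and their ports/paths are invented for the example:

    import io.vertx.core.CompositeFuture;
    import io.vertx.core.Future;
    import io.vertx.core.Vertx;
    import io.vertx.core.json.JsonObject;
    import io.vertx.ext.web.Router;
    import io.vertx.ext.web.client.WebClient;

    public class GatewayAggregation {
      public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        WebClient client = WebClient.create(vertx);
        Router router = Router.router(vertx);

        // One client-facing request fans out to two internal services,
        // and the gateway merges the results into a single response
        router.get("/api/dashboard").handler(ctx -> {
          Future<JsonObject> orders = client.get(8081, "localhost", "/orders")
              .send().map(resp -> resp.bodyAsJsonObject());
          Future<JsonObject> profile = client.get(8082, "localhost", "/profile")
              .send().map(resp -> resp.bodyAsJsonObject());

          CompositeFuture.all(orders, profile).onSuccess(done -> {
            JsonObject merged = new JsonObject()
                .put("orders", orders.result())
                .put("profile", profile.result());
            ctx.response().putHeader("Content-Type", "application/json").end(merged.encode());
          }).onFailure(err -> ctx.response().setStatusCode(502).end());
        });

        vertx.createHttpServer().requestHandler(router).listen(8080);
      }
    }

One round trip from the consumer replaces two (or more) chatty calls, which is exactly the latency win the pattern aims for.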

Note that an API Gateway should always be a highly-available and performant component, since it is the entry point to the entire system.

Eventual Consistency between microservices based on event-driven async communication (Image: microsoft.com)

For different parts of the application to communicate with each other irrespective of the sequence of messages (asynchronous) or what language they use (language agnostic), an event bus can be used. Most event buses support publish/subscribe, distributed, point-to-point, and request-response messaging. Some event buses (like in Vert.x) allow the client side to communicate with the corresponding server nodes using the same event bus, which is a cool feature loved by full-stack teams.
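
A minimal sketch of the publish/subscribe and request-response styles on the Vert.x event bus mentioned above (the addresses and payloads are arbitrary):

    import io.vertx.core.Vertx;
    import io.vertx.core.json.JsonObject;

    public class EventBusExample {
      public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Subscriber: any number of services can react to the same event
        vertx.eventBus().consumer("orders.created", message -> {
          JsonObject order = (JsonObject) message.body();
          System.out.println("Shipping service saw order " + order.getString("id"));
        });

        // Publisher: fire-and-forget, with no knowledge of who is listening
        vertx.eventBus().publish("orders.created", new JsonObject().put("id", "1001"));

        // Request-response messaging is also supported
        vertx.eventBus().consumer("orders.count", msg -> msg.reply(42));
        vertx.eventBus().request("orders.count", null,
            reply -> System.out.println("Order count: " + reply.result().body()));
      }
    }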

Interservice communication using Service Mesh style (Image: Microservices in Practice)
How Service Meshes are used in an application (Image: christianposta.com)

Service Mesh implements Sidecar pattern by providing helper infrastructure for interservice communication. It includes features like resiliency (fault tolerance, load balancing), service discovery, routing, observability, security, access control, communication protocol support etc.

How Service Meshes fit in the network stack (Image: christianposta.com)

In practice, a Sidecar instance is deployed alongside each service (ideally in the same container). They can communicate through the primitive network functions of the service. The Control Plane of the Service Mesh is separately deployed to provide central capabilities like service discovery, access control, and observability (monitoring, distributed logging). Most importantly, the Service Mesh style allows developers to decouple network communication functions from microservice code and keep services focused only on the business capabilities. (Read: Netflix Prana, Service Mesh for Microservices)

☝ Even though the above images indicate direct connections between services, a nice way to handle interservice communication would be to use a simple Event Bus as a mediator, keeping coupling at a minimum level.

Implementing Backends for Frontends and Aggregator patterns at API Gateway level (Image: microsoft.com)

If the application needs to tailor each API to suit the client app type (web, mobile, different platforms), different rules (configs) can be enforced via a facade, or separate builds can be served based on client capabilities. This can be implemented at the API Gateway level itself or in parallel to the services level. This pattern is useful for providing specific user experiences; however, the development team should be careful to keep the number of BFFs at a manageable limit.
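
A minimal sketch of the Backends-for-Frontends idea at the gateway level, continuing with Vert.x (the routes and fields are invented for illustration): the mobile-facing route serves a trimmed payload, while the web-facing route serves the full one.

    import io.vertx.core.Vertx;
    import io.vertx.core.json.JsonObject;
    import io.vertx.ext.web.Router;

    public class BffExample {
      public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        Router router = Router.router(vertx);

        JsonObject product = new JsonObject()
            .put("id", 42).put("name", "Widget")
            .put("description", "A long marketing description...").put("reviews", 120);

        // Mobile backend: a small payload tailored to constrained clients
        router.get("/mobile/products/42").handler(ctx ->
            ctx.response().end(new JsonObject()
                .put("id", product.getInteger("id"))
                .put("name", product.getString("name")).encode()));

        // Web backend: the full payload for rich desktop experiences
        router.get("/web/products/42").handler(ctx -> ctx.response().end(product.encode()));

        vertx.createHttpServer().requestHandler(router).listen(8080);
      }
    }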


Best Practices

✅ Domain Driven Design — Model services around the business domain.

To handle large models and teams, Domain Driven Design (DDD) can be applied. It deals with large models by dividing them into different Bounded Contexts and being explicit about their interrelationships and the underlying domain. These bounded contexts can be converted into separate microservices at the application design level (Read: Bounded Context in Domain-Driven Design).
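
As a small illustration, the same real-world customer can be modeled differently in two bounded contexts that later become separate microservices. The classes below are invented for the example; in a real system each would live in its own service with its own data store:

    // Sales context: models the customer around ordering and credit
    class SalesCustomer {
      String customerId;    // shared identity that correlates the two contexts
      String email;
      double creditLimit;
    }

    // Shipping context: the "same" person, modeled only for delivery concerns
    class ShippingRecipient {
      String customerId;
      String deliveryAddress;
      String preferredTimeSlot;
    }

Neither context needs the other’s fields, so neither is forced into a bloated, shared Customer model.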

✅ Decentralized Data Management (avoid shared databases) — When multiple services consume a shared data schema, it can create tight coupling at the data layer. To avoid this, each service should have its own data access logic and a separate data store. The development team is free to pick the data persistence method that best fits each service and the nature of its data.

Avoid shared data stores and data access mechanisms (Image: christianposta.com)

✅ Smart endpoints and dumb pipes — Each service owns a well-defined API for external communication. Avoid leaking implementation details. For communication, always use simple protocols such as REST over HTTP.

✅ Asynchronous communication — When asynchronous communication is used across services, the data flow does not get blocked for other services.

Synchronous vs. asynchronous messaging (Image: microsoft.com)

✅ Avoid coupling between services — Services should have loose coupling and high functional cohesion. The main causes of coupling include shared database schemas and rigid communication protocols.

✅ Decentralize development — Avoid sharing codebases, data schemas, or development team members among multiple services/projects. Let developers focus on innovation and quality at source.

✅ Keep domain knowledge out of the gateway. Let the gateway handle routing and cross-cutting concerns (authentication, SSL termination).

✅ Token-based Authentication — Instead of implementing security components at each microservice level, each talking to a centralized/shared user repository to retrieve authentication information, consider implementing authentication at the API Gateway level with widely-used API security standards such as OAuth2 and OpenID Connect. After obtaining an auth token from the auth provider, it can be used to communicate with other microservices.
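
A minimal sketch of token validation at the gateway, assuming Vert.x 4 with the vertx-web and vertx-auth-jwt modules; the symmetric key, route, and claim are placeholders. In production the gateway would verify tokens issued by the OAuth2/OpenID Connect provider instead:

    import io.vertx.core.Vertx;
    import io.vertx.ext.auth.PubSecKeyOptions;
    import io.vertx.ext.auth.jwt.JWTAuth;
    import io.vertx.ext.auth.jwt.JWTAuthOptions;
    import io.vertx.ext.web.Router;
    import io.vertx.ext.web.handler.JWTAuthHandler;

    public class GatewayAuth {
      public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        JWTAuth jwt = JWTAuth.create(vertx, new JWTAuthOptions()
            .addPubSecKey(new PubSecKeyOptions()
                .setAlgorithm("HS256")
                .setBuffer("a-placeholder-symmetric-key"))); // placeholder key

        Router router = Router.router(vertx);
        // Every /api/* route requires a valid bearer token;
        // downstream services never handle credentials themselves
        router.route("/api/*").handler(JWTAuthHandler.create(jwt));
        router.get("/api/orders").handler(ctx ->
            ctx.response().end("orders for " + ctx.user().principal().getString("sub")));

        vertx.createHttpServer().requestHandler(router).listen(8080);
      }
    }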

Microservice security with OAuth2 and OpenID Connect (Image: Kasun’s Blog)

✅ Event-driven nature — Human beings are autonomous agents that can react to events. Can’t our systems be like that? (Read: Why Microservices Should Be Event Driven: Autonomy vs Authority)

✅ Eventual consistency — Since each microservice keeps its own data, it is hard to achieve strong consistency throughout the system. The development team will have to handle eventual consistency as it comes.

✅ Fault tolerance — Since the system comprises multiple services and middleware components, failures can take place somewhere very easily. Implementing patterns like circuit breakers, bulkheads, retries, timeouts, fail-fast, failover caching, rate limiters, and load shedders in such vulnerable components can minimize the risks of major failures. (Read: Designing a Microservices Architecture for Failure)
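
As one example of the patterns above, here is a circuit-breaker sketch using the vertx-circuit-breaker module (an assumption of this example; the thresholds and the protected call are arbitrary):

    import io.vertx.circuitbreaker.CircuitBreaker;
    import io.vertx.circuitbreaker.CircuitBreakerOptions;
    import io.vertx.core.Vertx;

    public class ResilientCall {
      public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        CircuitBreaker breaker = CircuitBreaker.create("inventory-cb", vertx,
            new CircuitBreakerOptions()
                .setMaxFailures(5)        // open the circuit after 5 failures
                .setTimeout(2000)         // fail fast if a call exceeds 2 seconds
                .setResetTimeout(10000)); // retry (half-open) after 10 seconds

        breaker.<String>execute(promise -> {
          // call the fragile remote service here, completing or failing the promise
          promise.complete("stock: 7");
        }).onSuccess(System.out::println)
          .onFailure(err -> System.out.println("fallback: serving cached stock"));
      }
    }

When the circuit is open, calls fail immediately instead of piling up on a dead dependency, which is what keeps one failing service from dragging down the rest.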

✅ Product engineering — Microservices work well as long as they are engineered as a product, not as a project. It’s not about making it work somehow and delivering before deadlines, but about a long-term commitment to engineering excellence.


Microservices in Practice

Microservices architecture best fits for:

  • Applications with high scalability needs
  • Projects with high release velocity
  • Business cases with rich domains or many subdomains
  • Agile environments with small, cross-functional development teams developing large products collaboratively (Read: The Real Success Story of Microservices Architectures)
Frameworks for developing microservices:

  • 🥇Vert.x — light-weight, simple to understand/implement/maintain, polyglot (supports many languages), event-driven, non-blocking, so far the best performance and scalability when handling high concurrency needs with minimal hardware, unopinionated (only provides useful bricks; developers have the freedom to be innovative and carefully build their applications, unlike traditional restrictive frameworks)
  • 🥈Akka — satisfactory performance, implements actor model, good for reactive & event-driven microservices
  • 🥉Spring Boot/Cloud — easy to start (familiar paradigms), based on the good old Spring framework, a slightly heavyweight framework, many integrations available, large community support
  • Dropwizard — good for rapid development of RESTful web services; comes fully-loaded with some nice Java tools & libraries like Google Guava, Jetty server, Logback, Hibernate Validator, Joda Time, Jersey, and Jackson.
Deployment options:

  • Containers — good for enforcing DevOps objectives (rapid development, reduced time to market, seamless scaling)
  • Cloud — good for building reliable and scalable infrastructure to serve geographically-dispersed users
  • Serverless — good for handling highly volatile traffic
  • Maintain own IT infrastructure — good for those who have high capacities and resources to build the entire infrastructure
Related architectural approaches:

  • Self-Contained Systems — assemble software from independent systems (like verticals in microservices)
  • Micro Frontends — divide monolith web UIs into independent features that can be developed as self-contained UI components and communicate directly with microservices

Domain Driven Design (DDD) | Bounded Context (BC) | Polyglot Persistence (PP) | Command and Query Responsibility Segregation (CQRS) | Command Query Separation (CQS) | Event-Sourcing (ES) | CAP Theorem | Eventual Consistency | Twelve-Factor App | SOLID Principles


Architecture Suggestions

Microservices architecture for an online shopping application (Image: microsoft.com) — This architecture is proposed by Microsoft developers using Microsoft technologies. Here, the API Gateway has been tailored to treat web and mobile users differently. For the data layer, data store technologies are carefully selected according to the business capabilities (relational databases for structured data, Redis for temporary data caching, MongoDB and CosmosDB for unstructured data). Interservice communication is handled by the event bus. Keeping technologies aside, this is the most common integration pattern used in microservices-based applications.

Microservices architecture for an application which displays realtime updates to end users using large amounts of input data streams coming from various event sources (e.g. traffic data, weather readings, stock market feeds, social media posts, sensor outputs). These input data streams are initially collected by an event log implemented using Kafka. It persists data on disk and thus can be used for batched consumption purposes (analytics, reporting, data science, backup, auditing) or sent for real-time consumption purposes (operational analytics, CEP, admin dashboards, alert apps). According to this diagram, the continuous incoming stream is divided into micro-batches with specified intervals using Spark and fed into the WSO2 Siddhi CEP engine, which identifies the events and persists them in unstructured form using MongoDB data stores. Microservices consume these data and display them to the end users. If you look carefully at the design, the Vert.x event bus has the ability to create connections with frontend UI components, and that feature has been used to efficiently update only the relevant parts of the UI. Keeping technologies aside, this is a great architecture for an event-driven, non-blocking, microservices-based application.

Cloud native omni-channel microservices architecture for an order management application (Image: ibm.com) — One major specialty in this design is that, instead of using an API Gateway, IBM architects have proposed an edge layer with separate backends for each client-side channel (mobile apps, web apps, IoT devices, API consumers). Another specialty is that the microservices layer is divided into two sub-layers: the Business Logic layer and the Foundational layer. The Foundational layer (a.k.a. Core Services layer) deals with persistence and integration tasks using various cloud-native services (cloud data stores, Elasticsearch engines that integrate and index a Watson conversation). The Business Logic layer integrates data from the Foundational layer and delivers meaningful business capabilities. This would be a great architecture for serving a very large user base that is geographically-dispersed and accesses the application via various platforms.

Cloud and best practices adoption and benefits - StackRox Kubernetes Security Platform


“Seeing what needs to be fixed right away, right there in the dashboard, is a huge value to us.” 
– Cyrus Makalinaw, security and privacy officer

ARMUS is a leading global clinical registry software and service provider that gives outcomes improvement powers to hospitals, physicians, and clinical quality coordinators by delivering impactful data analytics and reporting tools. The company’s goal is to use real-time actionable clinical, financial, and patient-reported information to sustain better outcomes, reduce post procedure complications, lower costs, and improve patients’ lives.
With each transaction, the company’s technology is making data-driven classifications, forecasts, and risk assessments based on statistical analysis of multiple, large data sources. Given the sensitive nature of protected health information and the HIPAA standards for sensitive patient data protection, robust and reliable data security is vital to ARMUS.
Meeting the need for always-on data

As it grew to become an industry leader, ARMUS recognized a need to dynamically scale resources up and down so it could drive faster updates than previously established processes allowed. Supporting hospitals and large medical facilities that operate 24/7 renders maintenance windows impossible. The company transitioned to containers and Kubernetes to deliver software updates without making data unavailable.
With its transition to the cloud-native application development stack, ARMUS immediately realized it needed a new approach to security. “The old tools are not equipped for this environment, and we have to find our most important vulnerabilities and remediate them as soon as possible,” says Cyrus Makalinaw, security and privacy officer for ARMUS. 
Leveraging StackRox to secure patient data

Makalinaw heads up a small team that has to run as efficiently as possible, and they work very closely with the operations team. The team relies on the StackRox Kubernetes Security Platform – its automation, tie-in with CI/CD, and ongoing compliance checks have helped the team operationalize container security.
“Seeing what needs to be fixed right away, right there in the dashboard, is a huge value to us,” says Makalinaw. “The StackRox software also showed us where our Kubernetes network connections weren’t right – we had servers that could reach each other that shouldn’t.” The fact that StackRox automatically generates the correct Kubernetes network segmentation policies helps streamline operations. “Anything that helps to automate processes is invaluable,” says Makalinaw.
ARMUS relies on StackRox to secure its Kubernetes and container environments across the full application development life cycle. In particular, ARMUS leverages StackRox for:
  • Compliance – StackRox provides the automated and on-demand controls that ARMUS needs to support and demonstrate compliance with industry standards including SOC 2 and HIPAA.
  • Risk-based prioritization – StackRox provides a dynamic, multi-factor risk assessment that enables ARMUS to immediately triage the highest-risk deployments in the environment at all times.
  • Threat detection – StackRox automatically detects container attacks in seconds, using rules, whitelists, and behavioral modeling for runtime detection and response. ARMUS is able to spin up a new server while doing forensics on any targeted data without impact to its clients.
  • Vulnerability management – StackRox enforces ARMUS’ policies across the entire life cycle based on vulnerability information — at build time with CI/CD pipeline integration, at deploy time using dynamic admission control, and at runtime with its Kubernetes-native enforcement.

Friday, July 3, 2020

73 Azure Security Best Practices Everyone Must Follow



Infrastructure-as-a-Service (IaaS) adoption continues its upward trend as the fastest growing public cloud segment (forecasted to grow 27.6% in 2019 to reach $39.5 billion, up from $31 billion in 2018). Not surprisingly, in Microsoft’s latest Security Intelligence Report from 2017, cloud service users saw a 300% year-over-year increase in attacks against them, with over a third of attacks against Azure services in particular originating from China.

With the rapid adoption of IaaS providers like Azure, the threat environment has evolved, but with the right preparation, any company can implement cloud security practices for services that significantly reduce the potential impact of an attempted breach.

While Microsoft provides security capabilities to protect enterprise Azure subscriptions, cloud security’s shared responsibility model requires Azure customers to deliver security “in” Azure. Below are Azure best practices, derived from customer experience and Center for Internet Security (CIS) recommendations, for 7 critical areas of security in Azure that everyone must follow to ensure their Azure subscriptions are secure.


1. Security Policy

Ensure the following are set to on for virtual machines:
‘OS vulnerabilities’ is set to on.
Enable OS vulnerabilities recommendations for virtual machines. When this setting is enabled, it analyzes operating system configurations daily to determine issues that could make the virtual machine vulnerable to attack. The policy also recommends configuration changes to correct these vulnerabilities.
‘Endpoint protection’ is set to on.
Enable endpoint protection recommendations for virtual machines. When this setting is enabled, Azure Security Center recommends endpoint protection be provisioned for all Windows virtual machines to help identify and remove viruses, spyware, and other malicious software.
‘JIT network access’ is set to on.
Enable JIT network access for virtual machines. When this setting is enabled, the Security Center locks down inbound traffic to your Azure VMs by creating an NSG rule. You select the ports on the VM to which inbound traffic should be locked down. Just-in-time VM access can be used to lock down inbound traffic to your Azure VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed.
2. Identity and Access Management
Ensure that for all users, multi-factor authentication is enabled.
Enable multi-factor authentication for all users who have write access to Azure resources. Multi-factor authentication requires an individual to present a minimum of two separate forms of authentication before access is granted. Multi-factor authentication provides additional assurance that the individual attempting to gain access is who they claim to be. With multi-factor authentication, an attacker would need to compromise at least two different authentication mechanisms, increasing the difficulty of compromise and thus reducing the risk.
Ensure that ‘users can consent to apps accessing company data on their behalf’ is set to no.
Require administrators to provide consent for apps before use. Unless you are running Azure Active Directory as an identity provider for third-party applications, do not allow users to use their identity outside of your cloud environment. A user’s profile contains private information such as phone number and email address, which could then be sold off to other third parties without requiring any further consent from the user.
Ensure that ‘restrict access to Azure AD administration portal’ is set to yes.
Restrict access to the Azure AD administration portal to administrators only. The Azure AD administration portal contains sensitive data. You should restrict all non-administrators from accessing any Azure AD data in the administration portal to avoid exposure.
3. Storage Accounts

Ensure the following are set to enabled:
‘Secure transfer required’ is set to enabled.
Enable data encryption in transit. The secure transfer option enhances the security of your storage account by only allowing requests to the storage account over a secure connection. For example, when calling REST APIs to access your storage accounts, you must connect using HTTPS. Any requests using HTTP will be rejected when ‘secure transfer required’ is enabled. When you are using the Azure Files service, connection without encryption will fail, including scenarios using SMB 2.1, SMB 3.0 without encryption, and some flavors of the Linux SMB client.
‘Storage service encryption’ is set to enabled.
Enable data encryption at rest for blobs. Storage service encryption protects your data at rest. Azure storage encrypts your data as it’s written in its data centers, and automatically decrypts it for you as you access it.
4. SQL Services

On SQL database or servers, ensure the following are set to on:
‘Auditing’ is set to on.
Enable auditing on SQL Servers. Auditing tracks database events and writes them to an audit log in your Azure storage account. It also helps you to maintain regulatory compliance, understand database activity, and gain insight into discrepancies and anomalies that could indicate business concerns or suspected security violations.
‘Threat detection’ is set to on.
Enable threat detection on SQL Servers. SQL Threat Detection provides a new layer of security, which enables customers to detect and respond to potential threats as they occur by providing security alerts on anomalous activities. Users will receive an alert upon suspicious database activities, potential vulnerabilities, and SQL injection attacks, as well as anomalous database access patterns. SQL Threat Detection alerts provide details of suspicious activity and recommend action on how to investigate and mitigate the threat.
‘Transparent data encryption’ is set to on.
Azure SQL Database transparent data encryption helps protect against the threat of malicious activity by performing real-time encryption and decryption of the database, associated backups, and transaction log files at rest without requiring changes to the application.
5. Networking

Ensure the following are disabled on network security groups from the internet:
Disable RDP.
The potential security problem with using RDP over the Internet is that attackers can use various brute-force techniques to gain access to Azure Virtual Machines. Once the attackers gain access, they can use your virtual machine as a launch point for compromising other machines on your Azure Virtual Network or even attack networked devices outside of Azure.
Disable SSH.
The potential security problem with using SSH over the Internet is that attackers can use various brute force techniques to gain access to Azure Virtual Machines. Once the attackers gain access, they can use your virtual machine as a launch point for compromising other machines on your Azure Virtual Network or even attack networked devices outside of Azure.
Disable Telnet (port 23).
Disable unrestricted access on Network Security Groups (i.e. 0.0.0.0/0) on TCP port 23 and restrict access to only those IP addresses that require it in order to implement the principle of least privilege and reduce the possibility of a breach. TCP port 23 is used by the Telnet server application (Telnetd). Telnet is usually used to check whether a client is able to make TCP/IP connections to a particular service.
6. Virtual Machines
Install endpoint protection for virtual machines.
Installing endpoint protection systems (antivirus/anti-malware) provides real-time protection capability that helps identify and remove viruses, spyware, and other malicious software, with configurable alerts when known malicious or unwanted software attempts to install itself or run on your Azure systems.
Enable latest OS patch updates for virtual machines.
Ensure the latest OS patches are applied to virtual machines. Windows and Linux virtual machines should be kept updated to:

  • Address a specific bug or flaw
  • Improve an OS or application’s general stability
  • Fix a security vulnerability
Enforce disk encryption on virtual machines.
Ensure that data disks (non-boot volumes) are encrypted, where possible. Encrypting your IaaS VM’s data disks (non-boot volume) ensures that its entire content is fully unrecoverable without a key and protects the volume from unwarranted reads.
7. Miscellaneous
Secure the subscription.
A secure Azure cloud subscription provides a core foundation upon which subsequent development and deployment activities can be conducted. An engineering team should have the capabilities to deploy and configure security in the subscription including elements such as alerts, ARM policies, RBAC, Security Center policies, JEA, Resource Locks, etc. Likewise, it should be possible to check that all settings are in conformance to a secure baseline.
Minimize the number of admins/owners.
Each additional person in the Owner/Contributor role increases the attack surface for the entire subscription. The number of members in this role must be kept as low as possible.
Do not grant permissions to external accounts (i.e., accounts outside the native directory for the subscription).
Non-AD accounts (e.g., xyz@hotmail.com) subject your cloud assets to undue risk. These accounts are not managed to the same standards as enterprise tenant identities.

Wednesday, July 1, 2020

Integration of API Management with Azure Service Bus Queues (with Send and Receive messages)

The Microsoft Azure platform offers quite a few messaging service options. In a prior article, I wrote about an interesting scenario and effective integration points between API Management and Azure Service Bus Relay.

In this article I am going to cover another interesting option for integrating API Management with Azure Service Bus Queues and Topics.

In general, queued messages must be sent to a queue by a Sender (1), and received from a queue by a Receiver (2). In the case of Azure Service Bus, the sender and receiver can be anywhere, in the cloud or on-premises, because Azure Service Bus has internet-accessible endpoints to send messages to, and to receive messages from:

So, how does API Management fit into this picture, and what will be the benefits of integrating it with the Azure Service Bus Queues?

It should be expected that API Management can fit on both sides, helping Sender to send messages and helping Receiver to receive messages:

If that is the case, the benefits will be:

  1. Sender does not have to deal with Azure Service Bus at all, instead it sends messages to what seems to be a “regular” API hosted in an API Management’s API Gateway using any security and communication protocol it wants (marked as (1) on the diagram above).

  2. Receiver does not have to deal with Azure Service Bus either. Instead, it receives messages from an API Management’s API Gateway using any security and communication protocol it wants (marked as (4) on the diagram above).

  3. Items 1 and 2 above effectively result in a simplified Sender application and Receiver API, because the existence of the API Management infrastructure and even Azure Service Bus itself becomes completely transparent to them. That means you can build and deploy anywhere just “regular” API Client applications and “regular” API Service applications, and yet enable them with asynchronous message delivery at zero implementation cost, shifting all the challenges onto the API Management infrastructure.

  4. All other typical benefits of using API Management also apply here, such as managed security, monitoring, alerting, analytics and dashboards, API Catalog with APIs’ life-cycle management, etc.

To achieve the benefits listed above, an API Management infrastructure must be “smart” enough to know how to natively send (2) and receive (3) messages to and from Azure Service Bus Queues. This is not an easy task for API Management. Actually, sending messages is less challenging because it can be implemented via a REST API call to the internet-accessible Azure Service Bus endpoint (there are still a few challenges there, because that endpoint requires specially-built security headers with a Shared Access Signature, SAS). Receiving messages from Azure Service Bus is an even more challenging task for API Management because, on top of the same special SAS security, it also requires at least an additional polling mechanism to get messages from a queue once they become available.
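
For contrast, here is roughly the plumbing a Sender and Receiver must implement themselves when talking to Service Bus directly, sketched with the azure-messaging-servicebus Java SDK rather than the WCF approach this article uses; the connection string and queue name are placeholders:

    import com.azure.messaging.servicebus.ServiceBusClientBuilder;
    import com.azure.messaging.servicebus.ServiceBusMessage;
    import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
    import com.azure.messaging.servicebus.ServiceBusReceiverClient;
    import com.azure.messaging.servicebus.ServiceBusSenderClient;
    import java.time.Duration;

    public class QueueSample {
      static final String CONN = "<your-service-bus-connection-string>"; // placeholder
      static final String QUEUE = "customersearchqueue";                 // placeholder

      public static void main(String[] args) {
        // Sender side: builds the SAS-authenticated connection itself
        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
            .connectionString(CONN).sender().queueName(QUEUE).buildClient();
        sender.sendMessage(new ServiceBusMessage("hello"));
        sender.close();

        // Receiver side: polls the queue and settles messages itself
        ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
            .connectionString(CONN).receiver().queueName(QUEUE).buildClient();
        for (ServiceBusReceivedMessage msg : receiver.receiveMessages(1, Duration.ofSeconds(5))) {
          System.out.println("received: " + msg.getBody());
          receiver.complete(msg); // explicit settlement removes it from the queue
        }
        receiver.close();
      }
    }

This is exactly the connection handling, SAS security, and polling logic that disappears from the Sender and Receiver when the API Management layer takes it over.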

Nevatech Sentinet API Management can natively send and receive Azure Service Bus Queue messages because within its API Gateway engine it can leverage Microsoft WCF technology, which has already implemented all that functionality in its NetMessagingBinding for SOAP services. All this can be done simply using only the Sentinet API Management Portal.

You can use a regular SOAP client (for example, SOAPUI) and a regular SOAP service (the WCF Customer Search sample service shipped with the Sentinet free trial edition). First, we need to enable an existing Sentinet Node (API Gateway) with the knowledge of Azure Service Bus. Then we can design two virtual (façade) SOAP services hosted in the same Sentinet Node (API Gateway):

Virtual Sender receives a message from an API Client application over regular transport (1), and then sends it to an Azure Service Bus Queue (2).

Virtual Receiver receives a message from an Azure Service Bus Queue (3), and then sends it to a physical (backend) API over regular transport (4).

If you do not have an Azure Service Bus service created yet for your subscription, you can create it using the Azure Portal. Sentinet needs to know only two properties of your Service Bus service: its namespace/host name (for example, nvttest.servicebus.windows.net) and its Shared Access Policy key:

In the Sentinet Console you will enter the Azure Service Bus namespace to add the sb://nvttest.servicebus.windows.net address to the known base addresses of a Sentinet Node (API Gateway). Notice, we add this address with the sb:// protocol, which is Microsoft’s private binary protocol best suited for native connections with Azure Service Bus. We also select the Shared Access Signature option to enable the Sentinet Node to act as a future receiver of messages from the queues created in this Azure Service Bus namespace.

From that point you can create any number of virtual services (façade services) hosted in a Sentinet Node, which can receive messages from any Queue or Topic created in this Azure Service Bus namespace.

Create a queue in your Azure Service Bus namespace using all defaults, for simplicity. Name your queue so that it ends with .svc, as Sentinet will deal with these SOAP services through native WCF activation that requires the .svc extension in endpoint addresses.

You register a regular physical (backend) service in Sentinet from its WSDL and design the Virtual Receiver service hosted in the Sentinet Node using the drag-and-drop user interface. When you select the virtual service endpoint, use the sb://nvttest.servicebus.windows.net/customersearchqueue.svc address for the endpoint and SOAP - SB_AZURE_QUEUES for its policy.

You do not have to configure anything else including security, because your Sentinet Node has already been configured with Shared Access Signature for that Azure Service Bus namespace.

Next, you design the Virtual Sender service. You use the same drag-and-drop user interface to virtualize the Virtual Receiver service. In this case you do not need to define the Service Bus endpoint, because it is known from the Virtual Receiver service’s endpoint, but you have to define the Shared Access Signature client identity. This will be the identity the Virtual Sender uses to place messages in the queue. This is all integrated into the Sentinet API Management Portal:

Now, after just a few clicks and entries in the Sentinet API Management Portal, you have two virtual services that can natively send and receive messages to and from an Azure Service Bus Queue.

Using SOAPUI you can send a sample request message and see how it is delivered to the backend service. Detailed logging and tracing are also available out-of-the-box.

You can temporarily disable Virtual Receiver service:

In this case, new messages will be stuck in the queue, because the Virtual Receiver service is not available and not picking them up. But as soon as you re-enable the Virtual Receiver service, messages will be delivered all the way to the physical (backend) service. This test demonstrates asynchronous delivery of messages via Azure Service Bus messaging.

Conclusion

API Management can be used with Azure Service Bus messaging, while delivering valuable add-on benefits. This article does not cover REST Client and REST API use case scenario, but this can be implemented in Sentinet too.