
Can you expose your microservices with an API gateway in Kubernetes?

 


PUBLISHED IN APRIL 2019

 UPDATED IN DECEMBER 2019



Welcome to Bite-sized Kubernetes learning — a regular column on the most interesting questions that we see online and during our workshops answered by a Kubernetes expert.

Today's answers are curated by Daniele Polencic. Daniele is an instructor and software engineer at Learnk8s.

If you wish to have your question featured on the next episode, please get in touch via email or you can tweet us at @learnk8s.

Did you miss the previous episodes? You can find them here.

Can you expose your microservices with an API gateway in Kubernetes?

TL;DR: yes, you can. Have a look at the Kong, Ambassador and Gloo Ingress controllers. You can also use a service mesh such as Istio as an API gateway, but you should be careful.


In Kubernetes, an Ingress is a component that routes the traffic from outside the cluster to your services and Pods inside the cluster.

In simple terms, the Ingress works as a reverse proxy or a load balancer: all external traffic is routed to the Ingress and then is routed to the other components.

Ingress as a load balancer
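In other words, an Ingress controller keeps a table of routing rules and matches each incoming request's host and path against it. A minimal sketch of that matching logic, with made-up rule and service names, might look like this:

```python
# A minimal sketch of Ingress-style routing: match an incoming request's
# host and path prefix against a rule table and return the backend service.
# The rule structure mirrors the Ingress object; the names are illustrative.

RULES = [
    {"host": "example.com", "path": "/api", "backend": "api-service:80"},
    {"host": "example.com", "path": "/", "backend": "frontend-service:80"},
]

def route(host, path):
    """Return the backend for the first rule whose host and path prefix match."""
    for rule in RULES:
        if rule["host"] == host and path.startswith(rule["path"]):
            return rule["backend"]
    return None  # no rule matched: a real Ingress would serve a default backend

print(route("example.com", "/api/users"))  # api-service:80
print(route("example.com", "/home"))       # frontend-service:80
```

Real controllers such as ingress-nginx compile these rules into their proxy's native configuration, but the matching idea is the same.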

While the most popular ingress is the ingress-nginx project, there are several other options when it comes to selecting and using an Ingress.

You can choose from several Ingress controllers, each with different features and trade-offs.

There are also other hybrid Ingress controllers that can integrate with existing cloud providers such as Zalando's Skipper Ingress.

When it comes to API gateways in Kubernetes, there are a few popular choices to select from.

Option #1 — The king of API Gateways: Kong

If you are building an API, you might be interested in what Kong Ingress has to offer.

Kong is an API gateway built on top of Nginx.

Kong is focused on API management and offers features such as authentication, rate limiting, retries, circuit breakers and more.

What's interesting about Kong is that it comes packaged as a Kubernetes Ingress.

So it could be used in your cluster as a gateway between your users and your backend services.

You can expose your API to external traffic with the standard Ingress object:

ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: api-service
              servicePort: 80

But there's more.

As part of the installation process, Kong's controller registers Custom Resource Definitions (CRDs).

One of these custom extensions is related to Kong's plugins.

If you wish to limit the requests to your Ingress by IP address, you can create a definition for the limit with:

limit-by-ip.yaml

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-by-ip
config:
  hour: 100
  limit_by: ip
  second: 10
plugin: rate-limiting

And you can reference the plugin with an annotation in your Ingress:

ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    plugins.konghq.com: rl-by-ip
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: api-service
              servicePort: 80

You can explore the Custom Resource Definitions (CRDs) for Kong on the official documentation.

But Kong isn't the only choice.

Option #2 — Ambassador, the modern API gateway

Ambassador is another Kubernetes Ingress built on top of Envoy that offers a robust API Gateway.

The Ambassador Ingress is a modern take on Kubernetes Ingress controllers, which offers robust protocol support as well as rate-limiting, an authentication API and observability integrations.

The main difference between Ambassador and Kong is that Ambassador is built for Kubernetes and integrates nicely with it.

Kong was open-sourced in 2015 when the Kubernetes ingress controllers weren't so advanced.

Even if Ambassador is designed with Kubernetes in mind, it doesn't leverage the familiar Kubernetes Ingress.

Instead, services are exposed to the outside world using annotations:

service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    service: api-service
  name: api-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: example_mapping
      prefix: /
      service: example.com:80
      host_rewrite: example.com
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    service: api-backend

This approach is convenient because, in a single place, you can define all the routing for your Deployments and Pods.

However, having YAML as free text within an annotation could lead to errors and confusion.

It's hard to get the formatting right in standard YAML, let alone as a string inside more YAML.

If you wish to apply rate-limiting to your API, this is what it looks like in Ambassador.

You have a RateLimiting object that defines the requirements:

rate-limit.yaml

apiVersion: getambassador.io/v1beta1
kind: RateLimit
metadata:
 name: basic-rate-limit
spec:
 domain: ambassador
 limits:
  - pattern: [{x_limited_user: "false"}, {generic_key: "qotm"}]
    rate: 5
    unit: minute
  - pattern: [{x_limited_user: "true"}, {generic_key: "qotm"}]
    rate: 5
    unit: minute

You can reference the rate limit in your Service with:

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: api-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: RateLimitService
      name: basic-rate-limit
      service: "api-service:5000"
spec:
  type: ClusterIP
  selector:
    app: api-service
  ports:
    - port: 5000
      targetPort: http-api

Ambassador has an excellent tutorial about rate limiting, so if you are interested in using that feature, you can head over to Ambassador's official documentation.

You can extend Ambassador with custom filters for routing, but it doesn't offer as vibrant a plugin ecosystem as Kong.

Option #3 — Gloo things together

Ambassador is not the only Envoy-powered Ingress that can be used as an API gateway.

Gloo is a Kubernetes Ingress that is also an API gateway. It is capable of providing rate limiting, circuit breaking, retries, caching, external authentication and authorisation, transformation, service-mesh integration and security.

The selling point for Gloo is that it is capable of auto-discovering API endpoints for your application and automatically understands arguments and parameters.

It might be hard to believe (and sometimes their documentation doesn't help either), so here's an example.

Imagine you have a REST API for an address book.

The app exposes the following endpoints:

  • GET /users/{id}, get the profile for a user
  • GET /users, get all users
  • POST /users/find, find a particular user

If your API is developed using standard tools such as the OpenAPI, then Gloo automatically uses the OpenAPI definition to introspect your API and store the three endpoints.
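Conceptually, that discovery step boils down to walking the `paths` section of the OpenAPI document and recording one endpoint per method and path. Here is an illustrative sketch (not Gloo's actual implementation) against a cut-down spec:

```python
# Sketch of OpenAPI-based endpoint discovery: collect one (method, path)
# entry per operation in the `paths` section of a simplified spec.

spec = {
    "paths": {
        "/users/{id}": {"get": {"summary": "get the profile for a user"}},
        "/users": {"get": {"summary": "get all users"}},
        "/users/find": {"post": {"summary": "find a particular user"}},
    }
}

def discover_endpoints(openapi_spec):
    endpoints = []
    for path, operations in openapi_spec.get("paths", {}).items():
        for method in operations:
            endpoints.append((method.upper(), path))
    return sorted(endpoints)

for method, path in discover_endpoints(spec):
    print(method, path)
```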

If you list all the endpoints served by Gloo after the discovery phase, this is what you see:

gloo

upstreamSpec:
  kube:
    selector:
      app: addressbook
    serviceName: addressbook
    serviceNamespace: default
    servicePort: 8080
    serviceSpec:
      rest:
        swaggerInfo:
          url: http://addressbook.default.svc.cluster.local:8080/swagger.json
        transformations:
          findUserById:
            body:
              text: '{"id": {{ default(id, "") }}}'
            headers:
              :method:
                text: POST
              :path:
                text: /users/find
              content-type:
                text: application/json
          getUser:
            body: {}
            headers:
              :method:
                text: GET
              :path:
                text: /user/{{ default(id, "") }}
              content-length:
                text: '0'
              content-type: {}
              transfer-encoding: {}
          getUsers:
            body: {}
            headers:
              :method:
                text: GET
              :path:
                text: /users
              content-length:
                text: '0'
              content-type: {}
              transfer-encoding: {}

Once Gloo has a list of endpoints, you can use that list to apply transformations to the incoming requests before they reach the backend.

As an example, you may want to collect all the headers from the incoming requests and add them to the JSON payload before the request reaches the app.

Or you could expose a JSON API and let Gloo apply a transformation to render the message as SOAP before it reaches a legacy component.

Being able to discover APIs and apply transformations makes Gloo particularly suitable for an environment with diverse technologies — or when you're in the middle of a migration from an old legacy system to a newer stack.

Gloo can discover other kinds of endpoints, such as AWS Lambdas, which makes it the perfect companion when you wish to mix and match Kubernetes and serverless.

I've heard I could use Istio as an API gateway

What's the difference between an API gateway and a service mesh?

Aren't both doing the same thing?

Both offer:

  • traffic routing
  • authentication such as OAuth, JWT, etc.
  • rate-limiting
  • circuit breakers
  • retries
  • etc.

However, there's a distinction.

API gateways such as Kong and Ambassador are mostly focussed on handling external traffic and routing it inside the cluster.

External traffic is quite a broad label that includes things such as:

  • slow and fast clients and
  • well behaved and malicious users

In other words, API gateways are designed to protect your apps from the outside world.

API gateways focus on handling external traffic

Service meshes, instead, are mostly used to observe and secure applications within your infrastructure.

Typical uses of service meshes include:

  • monitoring and observing requests between apps
  • securing the connection between services using encryption (mutual TLS)
  • improving resiliency with circuit breakers, retries, etc.
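As an aside, the mutual TLS part means both sides of a connection present and verify certificates. In Python's standard library, the client-side settings look roughly like this (the file paths are placeholders, and a real mesh issues and rotates these certificates automatically):

```python
import ssl

# A minimal sketch of the mutual-TLS settings a mesh sidecar enforces
# between services: both sides present a certificate and both sides
# verify the peer against a common CA. The file paths are placeholders.

def make_mtls_context(ca_file, cert_file, key_file):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(cafile=ca_file)  # trust the mesh's CA
    ctx.load_cert_chain(cert_file, key_file)   # present our own identity
    ctx.verify_mode = ssl.CERT_REQUIRED        # reject peers without a valid cert
    return ctx
```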

Since service meshes are deployed alongside your apps, they benefit from:

  • low latency and high bandwidth
  • a lower risk of being targeted by malicious actors

In other words, a service mesh's primary purpose is to manage internal service-to-service communication, while an API Gateway is primarily meant for external client-to-service communication.

Service meshes focus on internal service-to-service communication

  • API gateway: exposes internal services to external clients and maps external traffic to internal resources.
  • Service mesh: manages and controls the traffic inside the network and focuses on brokering internal resources.

But that doesn't mean that you can't use Istio as an API gateway.

What might stop you, though, is the fact that Istio's priority isn't to handle external traffic.

Let's have a look at an example.

It's common practice to secure your API calls behind an API gateway with JWT or OAuth authentication.

Istio offers JWT, but you have to inject custom code in Lua to make it work with OAuth.

On the other hand, Kong offers a plugin for that as this is a common request.

Enterprise API gateways such as Google Apigee include billing capabilities.

It's unlikely that those features will be replicated in a service mesh because the focus isn't on managing APIs.

What if you don't care about billing? Can you still use a service mesh as an API gateway?

Yes, you can, and there's something else that you should know.

A general note on API gateways and service meshes

Depending on what you are trying to achieve, service meshes and API gateways could overlap significantly in functionality.

They might overlap even more in the future since every major API gateway vendor is expanding into service meshes.

And it would not be surprising to see more service meshes deciding to launch an API gateway as Istio did.

Recap

If you had to pick an API gateway for Kubernetes, which one should you use?

  • If you want a battle-tested API gateway, Kong is still your best option. It might not be the shiniest, but the documentation is excellent, with plenty of resources online. It also has more production mileage than any other gateway.
  • If you need a flexible API gateway that can play nicely with new and old infrastructure, you should have a look at Gloo. The ability to auto-discover APIs and transform requests is compelling.
  • If you want the simplicity of defining all the networking in your Services, you should consider Ambassador. It has excellent tutorials and documentation to get started. Be careful with YAML indentation when embedding configuration as a free-text string in annotations.

If you had to pick an API gateway or a service mesh, which one should you use?

Starting with an API gateway is still the best choice to secure your internal apps from external clients.

As the number of apps grows, you could explore how to leverage a service mesh to observe, monitor and secure the traffic between them.

More options

If none of Ambassador, Kong or Gloo is suitable for the API gateway that you had in mind, you should check out the following alternatives:

That's all folks

Do you have any recommendation when it comes to API Gateways on Kubernetes?

Let us know in an email or tweet us @learnk8s.

A special thank you goes to Irakli Natsvlishvili who offered some invaluable feedback and helped me put together the above table. Also, thanks to:

  • Idit Levine and Scott Weiss from the Solo.io team for answering my questions about the Gloo Ingress controller
  • Daniel Bryant from Datawire who kindly helped me understand Ambassador better
  • Marco Palladino from Kong Inc. for offering some detailed feedback about the article

If you enjoyed this article, you might find the following articles interesting:

  • Scaling Microservices with Message Queues, Spring Boot and Kubernetes. You should design your service so that even if it is subject to intermittent heavy loads, it continues to operate reliably. But how do you build such applications? And how do you deploy an application that scales dynamically?
  • Boosting your kubectl productivity. If you work with Kubernetes, then kubectl is probably one of your most-used tools. Whenever you spend a lot of time working with a specific tool, it is worth getting to know it very well and learning how to use it efficiently.

Design Patterns in API Gateways and Microservices

 For all the buzz about microservices and API gateways, finding specifics can prove surprisingly difficult. I am reminded of the cartoon by Sidney Harris where the first step of a complex mathematical formula is presented, then a miracle occurs, and the sudden appearance of the glorious solution prompts an observer to comment that perhaps we should be more explicit in step two.

Since these patterns solve problems that occur almost exclusively at scale, there is a distinct dearth of published articles that tackle some of the trickiest details of these implementations.

This article assumes that you are familiar with the benefits of microservices (smaller repositories, language-agnostic development, easier refactoring, et al) and that you understand the role of an API gateway as a facade in front of them. The goal of this article is to catalog some of the architectural patterns that come up as potential problems or solutions in the gateway + microservice landscape.


Identifying the Problems

For those of you eyeing the gateway + microservice architecture as a source of potential relief from the compound problem of a monolithic application, we may have some bad news: the benefits of the gateway + microservice solution may have been overly simplified in its sales pitch. You may need to overcome some significant challenges and be a bit more explicit in “step two.”

Top 7 cross-cutting application concerns

Even if your monolithic application is thoughtfully structured into packages and service classes, chances are good that there are certain aspects of your application’s design that will make its components difficult to slice into dedicated microservices without refactoring.

Why? Because often, your application’s components rely on functionality that is considered “global” in the scope of the application. That’s what made it easier to develop in the first place.

Here is a short list of the most common cross-cutting concerns in applications:

  • Authentication
  • Authorization
  • Sessions
  • Cookies
  • Cache
  • Logging
  • Dependencies on other services

Let’s look more closely at each item.

Authentication

Authentication in the gateway + microservice ecosystem is best handled by a service that produces either a JSON web token or some other auth token which can be included in subsequent requests. The token gets evaluated by the gateway (and only by the gateway) to determine whether a request is properly authenticated.
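To make "the token gets evaluated by the gateway" concrete, here is a bare-bones sketch of verifying an HS256-signed JWT using only the standard library. The secret and claims are made up, and a production gateway would also validate expiry, issuer and audience:

```python
import base64
import hashlib
import hmac
import json

# Sketch: recompute the HMAC-SHA256 signature over header.payload with the
# shared secret and compare it to the token's signature. Illustrative only.

SECRET = b"demo-secret"  # placeholder; a real gateway loads this from config

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(header, payload):
    signing_input = b64url(json.dumps(header).encode()) + b"." + b64url(json.dumps(payload).encode())
    sig = hmac.new(SECRET, signing_input, hashlib.sha256).digest()
    return (signing_input + b"." + b64url(sig)).decode()

def verify(token):
    try:
        signing_input, sig = token.encode().rsplit(b".", 1)
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET, signing_input, hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = sign({"alg": "HS256", "typ": "JWT"}, {"sub": "user-42"})
print(verify(token))        # True
print(verify("x" + token))  # False (tampered)
```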

Authorization

Closely related to authentication, authorization in the gateway + microservice ecosystem should be possible using a token (eg, sent in a custom HTTP header). This task should be performed before a request is proxied through to any microservice. Think of this as the single responsibility principle: each microservice only cares about one thing, and that thing cannot also include a permissions check.

Sessions

As stated previously, the recommendation here is to avoid sessions in favor of tokens so you can avoid looking up user-specific data in your microservices. When needed, the gateway should pass session data (eg, from a decrypted token) along to the microservices.

Arguably, it is possible to pass only a session identifier and then let each microservice look up session data from the attached resource (eg, Redis) per the wise advice of the 12 Factor App.

That approach may make for easier refactoring in some cases, but not always. For example, PHP's session_start() and the resulting $_SESSION superglobal cannot be used in a microservice when the session ID is dictated by the gateway. In that case, you will need to roll your own solution. And if you are going to reinvent a wheel, it would be best to do it in a single place (in the gateway) and spare the microservices from that bit of tedious busywork.

Cookies

Like sessions, cookies are best avoided by your microservices, and if needed, they are easier and cleaner to implement in the gateway. When absolutely required, microservices can emit cookies if the gateway is configured to proxy them, but then you may risk additional headaches trying to juggle the cookie domains.

Cache

There is no perfect solution for caching, so do not try to optimize your ecosystem prematurely. Ease into caching and start with small expiration times. Maintaining REST-friendly routes in your microservices will allow for simpler caching at higher levels (eg, Varnish).

Some use cases should consider the possibility of cached data being your model, ie, the source of truth. Event handlers and command-line services may help keep the cache updated.

Logging

Logging in a gateway + microservice ecosystem is best done using either a log aggregation service such as Loggly or by simply logging to stdout and then doing your own log aggregation.

A standardized logging format (eg, JSON with some required fields) is recommended. This will allow for consistent reporting across all components. Allow room in your log format for a request ID that can be passed from the gateway into each microservice so you can easily find log entries in any service that had a part in handling a specific request.
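A sketch of such a formatter, with assumed field names, could look like this:

```python
import json
import logging
import sys

# Sketch of a standardized JSON log format carrying a request ID, so one
# request can be traced through the gateway and every microservice that
# handled it. The field names are an assumption, not a standard.

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "service": getattr(record, "service", "unknown"),
            "request_id": getattr(record, "request_id", None),
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

# The gateway generates the ID; each service logs it back out.
log.info("user lookup ok", extra={"service": "api-service", "request_id": "req-123"})
```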

Dependencies on other services

Code reuse in a monolithic application is usually a good thing, but reusing services in a microservice architecture may not be a good idea. Rethinking your code as standalone services takes time, and the refactoring may mean that clients need to make more or different requests.

A primary goal for the system is robustness: each microservice should be as independent as possible, and they should not risk cascading failures because one service outage triggers another.
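One common defense against cascading failures is a circuit breaker: after a run of consecutive errors, calls to the failing service are rejected immediately for a cool-off period instead of piling up behind timeouts. A minimal sketch, with illustrative thresholds:

```python
import time

# Sketch of a circuit breaker: after `max_failures` consecutive errors the
# circuit opens and calls fail fast for `reset_after` seconds, then one
# trial call is allowed through ("half-open"). Thresholds are illustrative.

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

A caller would wrap each outbound request, eg `breaker.call(fetch_user, user_id)`; once the circuit opens, callers get an immediate error instead of waiting on a struggling dependency.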


Summary Role of the Gateway

After covering the most common cross-cutting concerns that should be handled at the gateway level, we should have a much clearer idea of what the gateway needs to do.

It should act as a gated proxy, enforcing authorization rules so that only appropriate requests are passed through to each microservice. Just remember that the exact implementation details here can affect the microservices significantly.

Communicating with Microservices

One of the biggest decisions to be made is how the gateway will interface with its microservices. Will they be installed as packages or plugins (an approach used by frameworks such as Seneca), or will communication with microservices be conducted exclusively via HTTP (as used by Amazon API Gateway)? How you answer that question will affect the structure of your microservices.

Structuring Microservices

After implementing several microservices, I started to observe certain tendencies in their structure. I’m hesitant to dub them “patterns,” partly out of fear of angering the purists, but also because many of the recommendations here boil down to best practices that have been enumerated elsewhere.

Consideration of the following items will help you get the most out of a gateway + microservice implementation.

The simplest example

For educational purposes, it is helpful to start out with a simple example: let us consider a microservice that returns a country’s full name when given its two-letter code. The data could be supplied by a single database table with no foreign keys required.

Per the advice of the 12 Factor App, we treat the database as a backing service and attach it as a resource. There are no other backing services required, only the database. We supply the database credentials in the environment.

An app this tiny is easy to set up, data migrations are trivial, and it is easy to test. This is a simple example of an “Application Model,” sometimes referred to as Backends-as-frontends.

Example requiring 2 backing services

Next, let’s consider something more substantial: a service that returns ferry travel times, but it checks against local weather conditions. Assuming the ferry routes are stored in a database, then we can attach it as a resource as we did in the previous example.

The weather information is read from a third-party API, so we must interface with it via an SDK or some sort of HTTP call. This example is trickier than the first because our service is dependent on another service. Since we don’t control the other service, however, the best we can do is fail gracefully if the weather API goes down and hope for good uptime.

More complex example

Sooner or later, you will find a use case in your application where you will consider the option to have one of your microservices depend on another. This approach is easy when services are developed as plugins, but it may have drawbacks when microservices are accessed via HTTP. Should that HTTP request go directly to the service, or should it go through the gateway to take advantage of its routes, auth, and versioning controls?

Regardless of how it’s done, debugging becomes more difficult, and there is a distinct risk of cascading failures. What are the alternatives?

How to avoid interdependencies

Consider the following alternatives if you find a situation where a dependency on another microservice seems tempting:

  • Refactor the client to make an additional request to the other microservice.
  • Simplify the microservice that was needed as a dependency. If you are trying to reuse it, then perhaps it has grown too complex. Microservices are perhaps the most valuable when they are so simple that there isn’t a significant benefit from their reuse.
  • Are you conflating a service with a model? If a microservice is dependent on another microservice because of data, then the one microservice should instead attach to the same data models as the other microservice, and the dependency can be avoided. If a microservice wants to reuse business logic, then you need to think about the best way to avoid repeating any of those rules. Perhaps the two microservices should be merged into one so the business logic can be reused without repetition.
  • Is this something that a command-line service or cron job can help simplify? Judicious use of notifications and backend tasks can help keep microservices clean and focused.

Other Structures: Aggregate in Gateway

Because “Application Models” may get convoluted (especially if they have multiple dependencies), some API gateways offer the possibility to aggregate data from different services, usually by adding code to the relevant route and/or controller. I have heard the idea of aggregating data in the gateway referred to as the “scatter-gather” approach.

The advantage to aggregating data in the gateway is that it spares the client from having to do arduous work assembling data from multiple requests. Likewise, it spares the microservices from having to juggle complex interactions.
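The fan-out-and-merge step can be sketched like this, with stand-in functions in place of real HTTP calls to the microservices:

```python
import concurrent.futures

# Sketch of "scatter-gather" aggregation: the gateway fans a request out to
# several services in parallel and merges the responses into one payload.
# The service functions are stand-ins for real HTTP calls.

def fetch_profile(user_id):
    return {"name": "Ada"}

def fetch_orders(user_id):
    return {"orders": [101, 102]}

def aggregate(user_id):
    with concurrent.futures.ThreadPoolExecutor() as pool:
        profile = pool.submit(fetch_profile, user_id)  # scatter
        orders = pool.submit(fetch_orders, user_id)
        merged = {"user_id": user_id}                  # gather
        merged.update(profile.result())
        merged.update(orders.result())
    return merged

print(aggregate(42))
# {'user_id': 42, 'name': 'Ada', 'orders': [101, 102]}
```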

This approach has a couple major drawbacks, however. If the thought of juggling multiple requests and applying business logic is undesirable in a client or in a microservice, it is probably even less so in the gateway. If your gateway performs aggregations, it will no longer be a simple proxy — it will in effect include business logic that would need to be carefully tested.

Many available gateways do not offer this feature, so relying on it will limit your choices of gateways or force you to write your own, an unattractive proposition for businesses that derive value from their services and not from their custom gateways.

Other Structures: Aggregate in Client

An alternative to aggregating data in the gateway is to aggregate data in the client. The most well-known implementation of this is perhaps GraphQL, but a thorough implementation of a JSON API service can accomplish something similar (see Stormpath’s presentation on Designing a Beautiful REST + JSON API). The end result is that the onus of making and merging multiple data requests is left to the client; the gateway and its microservices are allowed to remain simple and streamlined.

The advantage to this approach is that it keeps the client in charge, so versioning is usually simpler (because releasing a new version of the client is usually easier than releasing a new version of the server-side API). This approach may also avoid the problems of under- and over-fetching data.

Although this approach may rely on relatively new technology, its biggest potential drawback is with business logic. If any business logic is required to interpret and merge multiple requests, then you risk repeating those rules in multiple clients and therefore you risk having inconsistent behavior.

Summary

Hopefully this discussion about the landscape of the gateway + microservice architecture has illuminated some of the questions and concerns that often accompany it.

There isn’t a silver bullet solution that will meet everyone’s needs, but there are certain trends that are picking up momentum. Technologies such as Serverless and AWS Lambda promise to make microservices even more granular, and it’s hard to argue with the benefits of their simplicity and testability.

As you try out different solutions for different use cases following the guidelines in this article, keep your wits about you and beware of any solution that threatens to disrupt the simplicity and testability of your code. Just as with any architecture, repetition and practice with microservices will help you identify solutions that work.



Microservices API Gateways vs. Traditional Enterprise API Gateways

 

A microservices API gateway is an API gateway designed to accelerate the development workflow of independent services teams. A microservices API gateway provides all the functionality for a team to independently publish, monitor, and update a microservice.

This focus on accelerating the development workflow is distinct from the purpose of traditional API gateways, which focus on the challenges of managing APIs. Over the past decade, organizations have worked to expose internal systems through well-defined APIs. The challenge of safely exposing hundreds or thousands of APIs to end-users (both internal and external) led to the emergence of API gateways. Over time, API gateways have become centralized, mission critical pieces of infrastructure that control access to these APIs.

In this article, we'll discuss how the difference in business objective (productivity vs management) results in a very different API gateway.

Microservices Organization

In a microservices organization, small teams of developers work independently from each other to rapidly deliver functionality to the customer. In order for each service team to work independently, with a productive workflow, a services team needs to be able to:

  1. Publish their service, so that others can use the service
  2. Monitor their service, to see how well it's working
  3. Test and update their service, so they can keep on improving the service

The team needs to do all of this without requiring assistance from another operations or platform team. As soon as a services team depends on another team, it is no longer working independently, and this can lead to bottlenecks.

For service publication, a microservices API gateway provides a static address for consumers and dynamically routes requests to the appropriate service address. In addition, providing authentication and TLS termination for security are typical considerations when exposing a service to other consumers.

Understanding the end-user experience of a service is crucial to improving the service. For example, a software update could inadvertently impact the latency of certain requests. A microservices API gateway is well situated to collect key observability metrics on end-user traffic as it routes traffic to the end service.

A microservices API gateway also supports dynamically routing user requests to different service versions for canary testing. By routing a small fraction of end-user requests to a new version of a service, service teams can safely test the impact of new updates to a small subset of users.
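One way to implement that routing is to hash a stable request attribute, such as the user ID, so each user consistently lands on the same version. A sketch, with an assumed 5% canary fraction:

```python
import hashlib

# Sketch of deterministic canary routing: hash a stable request attribute
# and send a fixed fraction of users to the canary version. Hashing, rather
# than random choice, keeps each user on the same version across requests.

def pick_version(user_id, canary_percent=5):
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < canary_percent else "v1-stable"

counts = {"v1-stable": 0, "v2-canary": 0}
for i in range(1000):
    counts[pick_version(f"user-{i}")] += 1
print(counts)  # roughly 95% stable, 5% canary
```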

Microservices API Gateways vs. Enterprise API Gateways

At first glance, the use case described above may be fulfilled with an enterprise-focused API gateway. While this may be true, the actual emphases of enterprise API gateways and microservices API gateways are somewhat different:

For each use case, here is how a traditional enterprise API gateway and a microservices API gateway compare:

  • Primary purpose. Enterprise: expose, compose, and manage internal business APIs. Microservices: expose and observe internal business services.
  • Publishing functionality. Enterprise: an API management team or service team registers and updates the gateway via an admin API. Microservices: the service team registers and updates the gateway via declarative code as part of the deployment process.
  • Monitoring. Enterprise: admin and operations focused, eg metering API calls per consumer and reporting errors (eg internal 5XX). Microservices: developer focused, eg latency, traffic, errors, saturation.
  • Handling and debugging issues. Enterprise: L7 error handling (eg a custom error page or payload), running the gateway/API with additional logging, and troubleshooting the issue in a staging environment. Microservices: configuring more detailed monitoring and enabling traffic shadowing and/or canarying.
  • Testing. Enterprise: operating multiple environments for QA, staging, and production; automated integration testing and gated API deployment; client-driven API versioning for compatibility and stability (eg semver). Microservices: canary routing for dynamic testing (taking care with data mutation side effects) and developer-driven service versioning for upgrade management.
  • Local development. Enterprise: deploying the gateway locally (via an installation script, Vagrant or Docker) and attempting to mitigate infrastructure differences with production, using language-specific gateway mocking and stubbing frameworks. Microservices: deploying the gateway locally via a service orchestration platform (eg Kubernetes).

Self-Service Publishing

A team needs to be able to publish a new service to customers without requiring an operations or API management team. This ability to self-service for deployment and publication enables the team to keep the feature release velocity high. While a traditional enterprise API gateway may provide a simple mechanism (e.g., REST API) for publishing a new service, in practice, the usage is often limited to the use of a dedicated team that is responsible for the gateway. The primary reason for limiting publication to a single team is to provide an additional (human) safety mechanism: an errant API call could have potentially disastrous effects on production.

Microservices API gateways utilize mechanisms that enable service teams to easily and safely publish new services, with the inherent understanding that the producing team is responsible for its service and will fix any issue that occurs. A microservices gateway provides configurable monitoring for issue detection and hooks for debugging, such as inspecting traffic or traffic shifting/duplication.

Monitoring & Rate Limiting

A common business model for APIs is metering, where a consumer is charged different fees depending on API usage. Traditional enterprise API gateways excel in this use case: they provide functionality for monitoring per-client usage of an API, and the ability to limit usage when the client exceeds their quota.
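The metering logic an enterprise gateway applies can be sketched as a per-client counter checked against a quota. This is a simplified illustration under assumed names (`Meter`, `allow`); production gateways back this with shared storage and billing integration rather than an in-memory dictionary.

```python
class Meter:
    """Toy per-client metering: count API calls per client and
    reject calls once the client's quota is exhausted."""

    def __init__(self, quotas):
        self.quotas = dict(quotas)  # client_id -> allowed number of calls
        self.usage = {}             # client_id -> calls made so far

    def allow(self, client_id):
        used = self.usage.get(client_id, 0)
        if used >= self.quotas.get(client_id, 0):
            return False            # over quota: reject (e.g. HTTP 429) or bill
        self.usage[client_id] = used + 1
        return True
```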

A microservices gateway also requires monitoring and rate limiting, but for different reasons. Monitoring user-visible metrics such as throughput, latency, and availability is important to ensure that new updates don't degrade the end-user experience. Robust end-user metrics are critical to allowing rapid, incremental updates. Rate limiting is used to improve the overall resilience of a service. When a service is not responding as expected, an API gateway can throttle incoming requests to allow the service to recover and to prevent a cascading failure.

Testing and Updates

A microservices application has multiple services, each of which is being independently updated. Automated pre-production testing of a moving target is necessary but not sufficient for microservices. Canary testing, where a small percentage of production traffic is routed to a new service version, is an important tool to help test an update. By limiting a new service version to a small percentage of users, the impact of a service failure is limited.

In a traditional enterprise API gateway, routing is used to isolate or compose/aggregate changing API versions. Automated pre-production testing and manual post-production verification and exploration are required.

Summary

Traditional enterprise API gateways are designed to solve the challenges of API management. While they may appear to solve some of the challenges of adopting microservices, the reality is that a microservices workflow creates a different set of requirements. Integrating a microservices API gateway into your development workflow empowers service teams to self-publish, monitor, and update their services quickly and safely. This will enable your organization to ship software more rapidly and more reliably than ever before.

For further reading on how an API Gateway can accelerate continuous delivery, read this blog post.
