
Monday, September 14, 2020

Deploying Apps to Heterogeneous Infra : AKS and Azure Functions

The need to scale applications based on demand and usage has given rise to infrastructure services like Kubernetes (K8s) and the serverless paradigm (which includes serverless functions). Initially, applications were built with a single infrastructure in focus: either instances only with auto-scaling, or a Kubernetes environment, or pure serverless applications. But as application architectures have evolved, services are now deployed across a combination of environments, such as Kubernetes plus serverless functions, or instances plus Kubernetes.

In this blog, I will focus on configuring and deploying a microservices application across Azure Kubernetes Service and Azure Functions.

In this blog, I will review:

1. Introduction to Azure Functions
2. Configuring and Writing Azure Functions
3. (Re)Architecting for Serverless functions
4. Integrating with application services running on K8s

Introduction to Azure Functions

Azure Functions is an event-driven compute platform similar to AWS Lambda. While AWS Lambda has just one hosting plan (deployment model), which is fully managed and deployed on AWS's proprietary backend, Azure Functions takes a slightly different approach and provides several hosting plan options:

a. Consumption Plan - This is similar to the AWS Lambda model. It is fully managed, with no fixed cost or management overhead. The downside is that there might be some delay in starting up a function that has been idle for a while.
b. Premium Plan - This provides pre-warmed workers with no startup delay, even if the functions were idle. It achieves this by reserving capacity for your functions. With this plan you can also deploy the functions within your Virtual Network (VNET) for private access, and there is no limit on execution duration. This plan also allows you to run the functions as Docker containers on Kubernetes. The downside is higher cost.
c. Azure App Service Plan - This plan provides predictable pricing for organizations that are very price sensitive. The downside is limited auto-scaling behavior.

In AWS Lambda, a function receives a JSON object as input and can return JSON as output. With Azure Functions, you can add triggers and bindings. A trigger is the event that the function listens for. Once the data is processed by the function, it can push the data to an output source via an output binding. All triggers and bindings have a direction: for triggers the direction is always in, while for input and output bindings you can use in and out.
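As an illustration (the names here are mine, not from the demo app), a function.json that pairs an HTTP trigger (direction in) with an Azure Storage queue output binding (direction out) could look like this:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "post" ]
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "msg",
      "queueName": "outqueue",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

Here the function is invoked by an HTTP POST, and anything it writes to the msg binding lands on the outqueue storage queue.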

Now, let’s look at how to configure and write Azure functions.

Configuring and Writing Azure Functions

In this example, I will be using the Acme Fit Demo app. This app has 6 services - Front End, Users/Auth, Catalog, Cart, Order and Payment. I will modify the cart service (Python) from a containerized application to Azure Functions (with the Consumption plan). The rest of the services will be deployed on an Azure Kubernetes Service (AKS) cluster.

Architecting for Serverless functions

There is an ongoing debate in the serverless community on how to structure your serverless application. But there are 2 main schools of thought:

1. Bundle as many functionalities as possible within a single function.
2. Every API route should be its own function.

While there are pros and cons to each method, I will use method 2. This is primarily because smaller functions (less code and fewer libraries) need less time to warm up and start, and it also makes it easier to handle requests without overburdening a single function.

I have decomposed the cart service's API routes into multiple functions, namely:

  • addCartItem
  • clearCart
  • getCartItems
  • getCartTotal
  • modifyCartItem

Each of the above functions corresponds to one of the CRUD operations on Cart items.

You can find the Azure Functions code here.

Configure Azure Functions

To configure the function, you can either use the Azure Portal or the Azure CLI. I will use a combination of the 2 to explain the setup process better.

1. Navigate to Azure portal [LINK HERE] and then select Compute > Function App.
Then click on Create.

You will also notice other hosting options, but for the purposes of this blog, I will be using the consumption plan. You can also enable Application Insights under Monitoring tab. Click on Review and Create.

create_function_app

After this verify the summary again and click on create.

review_and_create

2. I have selected Python as the language for writing the Azure Functions. Most of the functionality for creating and managing Python-based Azure Functions has to be done from the CLI, as of Jan 2020.

To get started, these are the prerequisites:

  • Python 3.7.4 - 64 bit. (Python 3.7.4 is verified with Azure Functions; Python 3.8 and later versions are not yet supported.)
  • Virtual Env (Recommended not mandatory)
  • The Azure Functions Core Tools version 2.7.1846 or later.
  • The Azure CLI version 2.0.76 or later.

3. Activate the virtual environment and initialize the function with the same name used in the Azure portal.

python -m venv .venv

source .venv/bin/activate

func init <Azure FUNC NAME> --python

4. Create a new function with a reference template. In our case, we will leverage the HTTP trigger template.

func new --name addCartItem --template "HTTP trigger"

This will create a new directory with the function name, containing an __init__.py file and a function.json file.

The directory structure should look something like this after all the functions have been created.

├── LICENSE
├── README.md
├── addCartItem
│   ├── __init__.py
│   └── function.json
├── clearCart
│   ├── __init__.py
│   └── function.json
├── getCartItems
│   ├── __init__.py
│   └── function.json
├── getCartTotal
│   ├── __init__.py
│   └── function.json
├── host.json
├── local.settings.json
├── modifyCartItem
│   ├── __init__.py
│   └── function.json
└── requirements.txt

The __init__.py file contains the main() function for Python. This file is triggered based on the description in the function.json file, as shown below. The function.json is a config file that defines the trigger and the input and output bindings (discussed in the previous section).

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "route":"cart/item/add/{userid}",
      "methods": [
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}

The config file above instructs the Azure Function to be triggered on an HTTP call with the POST method to the route "cart/item/add/{userid}". It also defines which file is to be run upon invocation; in our case it is __init__.py. You can modify the trigger to a webhook by updating function.json to the following:

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "webHookType": "github"
    },
    {
      "type": "http",
      "direction": "out"
    }
  ]
}

5. Once this is done, add your code under the main() function in the __init__.py file. Check this repo for sample code. You can copy and paste one of the functions to see it in action.
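To give a feel for what goes into main(), here is a minimal sketch of the cart logic. This is illustrative only: an in-memory dict stands in for the Azure Cache for Redis hash the real service uses, and the function and field names are my assumptions, not the actual Acme code.

```python
import json

# Illustrative sketch: a plain dict stands in for the Redis store so the
# core logic is easy to follow. Inside __init__.py, this logic would sit
# in main(req) and be returned as an azure.functions.HttpResponse.

def add_cart_item(carts, userid, item):
    """Add an item to a user's cart and return the updated cart as JSON."""
    cart = carts.setdefault(userid, [])
    cart.append(item)
    return json.dumps({"userid": userid, "cart": cart})

def get_cart_items(carts, userid):
    """Return the items in a user's cart (empty list if no cart exists)."""
    return json.dumps(carts.get(userid, []))

carts = {}
print(add_cart_item(carts, "shopper1", {"itemid": "sticker", "quantity": 2}))
print(get_cart_items(carts, "shopper1"))
```

In the real function, the userid would come from the route parameter ({userid} in function.json) and the item from the request body.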

Also, update the host.json under the project's root folder, if needed. Azure Functions will by default append /api to your function URL. In my host.json configuration below, I have removed the route prefix.

{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[1.*, 2.0.0)"
  },
  "extensions": {
    "http": {
      "routePrefix": ""
    }
  }
}

6. The next step is to test the function locally. Just run:

func start

Copy the URL displayed and test it using POSTMAN or curl.

func_start_local

7. In some scenarios, your app might interact with other services like databases. In the case of the cart service, I am also using Azure Cache for Redis.

For the cart service functions to work, they need access to the REDIS_HOST, REDIS_PORT and REDIS_PASSWORD environment variables.

You can set these variables in the Azure portal. To do so, navigate to FUNCTION > Configuration and, under Application Settings, add the variable names and their values.

add_env

For local testing, you can set the environment variables on your local system or use local.settings.json.

NOTE: If you add any kind of secrets to the local.settings.json file, remember to add it to your .gitignore file.
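For reference, a minimal local.settings.json carrying these settings might look like the following; all values are placeholders:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": "<STORAGE-CONNECTION-STRING>",
    "REDIS_HOST": "<your-cache>.redis.cache.windows.net",
    "REDIS_PORT": "6380",
    "REDIS_PASSWORD": "<ACCESS-KEY>"
  }
}
```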

8. Once the app is tested locally, you can push it to Azure.

func azure functionapp publish <FUNCTION APP NAME>

publish_func_1

Function URLs after publishing to Azure.

publish_func_2

Integrating with application services running on K8s

Use the URLs generated in the previous section within your app. Notice that all of these functions have an additional parameter, "code=<SOME CODE>". This is the default key generated for your function. You can add or update this key in the Azure portal under FUNCTION_NAME > Manage > Function Keys.

This keeps the function URL safe from unwanted API calls and acts as additional authorization.

The front-end service in the ACME Fit Demo app routes traffic to the other services such as user, cart and catalog. When deployed on AKS, the front end can route traffic based on app labels and the DNS of each service, except for the cart service. As our cart service is deployed as functions, we need to explicitly provide its endpoints. Additionally, the function key (the code= parameter) for every function must be passed to it.
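A small sketch of how the front end might assemble such an endpoint: the function's base URL, the route, and the per-function key appended as the code query parameter. The helper name and hostname are hypothetical, not part of the Acme app.

```python
from urllib.parse import urlencode

def function_url(base, route, code, **params):
    """Build a callable Azure Function URL with its function key attached."""
    query = urlencode({**params, "code": code})
    return f"{base.rstrip('/')}/{route.lstrip('/')}?{query}"

# Hypothetical function app host and placeholder key.
url = function_url(
    "https://my-cart-func.azurewebsites.net",
    "cart/items/total/shopper1",
    code="<FUNCTION-KEY>",
)
print(url)
```

Keeping the key out of source control (e.g. injected via an environment variable or Kubernetes Secret) is the usual practice here.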

Conclusion

The serverless paradigm is gaining a lot of traction because of its ease of use and low maintenance cost along with scale. When using heterogeneous infrastructure, one has to ensure reachability and availability of these services at all times. Complexities usually arise because of permission issues as well as network connectivity. With Azure Functions, there are various hosting plans available to handle each of these scenarios.

Saturday, August 29, 2020

Scale an HTTP Triggered app up and down in Kubernetes using KEDA and Prometheus


This blog post shows how you can scale your HTTP triggered application up and down in Kubernetes, based on incoming requests, using KEDA and Prometheus.

The application here is an Azure Function app, but it can be any app that exposes an HTTP interface. Where appropriate below, I point out where you can use your own app instead of an Azure Function app.

Among the major cloud providers, the FaaS implementation in Azure, Azure Functions, is unique in that its runtime is open source. The runtime and your code can therefore be deployed to a custom container or deployed on your own infrastructure, including Kubernetes.

To enable scaling of a function app (or any other workload) in Kubernetes, we at Azure (along with Red Hat) built KEDA, Kubernetes Event-Driven Autoscaling. With the combination of the aforementioned runtime and KEDA, you can run and scale your Azure Functions in your own Kubernetes cluster. Currently KEDA supports more than twenty different event sources, including Kafka, RabbitMQ, NATS, Azure Queue, AWS SQS, GCP Pub/Sub, etc. However, there is no support for HTTP request based scaling. This post outlines one approach to scaling an HTTP triggered function app in Kubernetes using the Prometheus KEDA scaled object and an ingress controller.

Overview

The basic idea is that we will deploy an ingress controller, in this case the NGINX Ingress Controller, and have all HTTP traffic to the function app go through it. We use Prometheus to track the incoming request metrics on the ingress controller. Finally, we use KEDA's Prometheus-based scaler to scale the function app deployment up and down.

HttpScale

Walkthrough

Prerequisites

  1. A Kubernetes cluster which has the ability to install a Service with a Load Balancer (usually any cloud provider). The steps below were tested using an AKS cluster.
  2. kubectl pointing to your Kubernetes cluster
  3. Helm to install the artifacts. All of the artifacts below use Helm3
  4. Azure Functions core tools
  5. docker installed locally and a Docker Hub account.

Steps

  1. Create a namespace for your ingress resources

    kubectl create namespace ingress-nginx
    
  2. Install the NGINX-Ingress ingress controller

    Use Helm to deploy an NGINX ingress controller, enabling metrics and setting the right annotations for Prometheus. The ingress controller is installed as a Service of type LoadBalancer. In addition, a default-backend Service and a metrics Service are also deployed.

    helm install ingress-controller stable/nginx-ingress \
        --namespace ingress-nginx \
        --set controller.replicaCount=2 \
        --set controller.metrics.enabled=true \
        --set controller.podAnnotations."prometheus\.io/scrape"="true" \
        --set controller.podAnnotations."prometheus\.io/port"="10254"
    
    kubectl -n ingress-nginx get svc
    NAME                                                  TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
    ingress-controller-nginx-ingress-controller           LoadBalancer   10.0.14.166    40.70.230.xxx   80:31036/TCP,443:32179/TCP   31s
    ingress-controller-nginx-ingress-controller-metrics   ClusterIP      10.0.240.199   <none>          9913/TCP                     31s
    ingress-controller-nginx-ingress-default-backend      ClusterIP      10.0.63.133    <none>          80/TCP                       31s
    

    The ingress controller is exposed via the EXTERNAL-IP 40.70.230.xxx above. Also have a look at the following page for instructions on how to install it for various configurations.

    Optionally - Create a DNS entry pointing to your ingress controller. For AKS, you can get a cloudapp.azure.com address using the procedure here. In the steps below the fqdn configured is "function-helloworld.eastus2.cloudapp.azure.com"

  3. Deploy Prometheus to monitor the NGINX ingress Controller

    kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/    
    
    kubectl -n ingress-nginx get svc
    NAME                                                  TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
    ...
    prometheus-server                                     NodePort       10.0.38.50     <none>          9090:30860/TCP               34s
    
    kubectl -n ingress-nginx get pods
    NAME                                                              READY   STATUS    RESTARTS   AGE
    ..
    prometheus-server-86cd54f9d5-9xxh7                                1/1     Running   0          95s
    

    Note: If you happen to use any other namespace other than "ingress-nginx" then you need to go and change the namespace in the yaml files from here and then deploy.

  4. Deploy KEDA

    helm install keda kedacore/keda --namespace ingress-nginx     
    
    kubectl get pods -n ingress-nginx
    NAME                                                              READY   STATUS    RESTARTS   AGE
    ...
    keda-operator-697b98dcdd-8zdrk                                    2/2     Running   0          5d17h
    
  5. Deploy a function app to the Kubernetes cluster
    Create a Python HTTP triggered function app, generate the required Dockerfile, and finally deploy the function app to the cluster.

    func init --worker-runtime python
    func new --template "HttpTrigger" --name helloworld
    func init --docker-only
    func kubernetes deploy --name function-helloworld --namespace ingress-nginx --service-type ClusterIP --registry anirudhgarg
    

    Note that the authentication mode has to be changed to anonymous for now, while we work to support function keys. Navigate to the function app folder and open the function.json file. Find the following line: "authLevel": "function", and change the authLevel to "anonymous".

    --name is the name of your Deployment. --registry points to your Docker Hub registry; you have to be logged in to Docker and connected to your account locally. See more here.

    Note: Instead of using a function app you can deploy your own app that listens to Http requests. Just make sure you create a k8s Cluster IP Service pointing to your deployment.

  6. Deploy an Ingress Resource pointing to the deployed function app Service.

    This is what the YAML looks like:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: function-helloworld
      namespace: ingress-nginx
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/rewrite-target: /$2    
    spec:  
      rules:  
        - host: <replace-with-host-name-pointing-to-ingress-controller>
          http:
            paths:                
            - backend:
                serviceName: function-helloworld-http
                servicePort: 80
              path: /helloworld(/|$)(.*)
    

    You can find an example here

    The serviceName attribute is the name of the Service for the function app. host should point to the FQDN configured for the ingress controller. You can also choose a random name here, but a host has to be configured; otherwise Prometheus monitoring of the Ingress resource will not work. A path has also been configured with the prefix "helloworld".
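To see what the rewrite-target annotation does with the path regex above, here is a quick sanity check of the pattern /helloworld(/|$)(.*): NGINX replaces the matched path with /$2 (the second capture group), stripping the helloworld prefix before the request reaches the function app. This is just a Python re illustration of the rewrite, not how NGINX is implemented.

```python
import re

# Same regex as in the Ingress path; $2 in the annotation refers to the
# second capture group, i.e. everything after the "helloworld" prefix.
pattern = re.compile(r"/helloworld(/|$)(.*)")

def rewritten(path):
    """Simulate the nginx rewrite-target: /$2 substitution."""
    m = pattern.match(path)
    return "/" + m.group(2) if m else path

print(rewritten("/helloworld/api/helloworld"))  # /api/helloworld
print(rewritten("/helloworld"))                 # /
```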

  7. Create an Ingress Resource with NGINX Ingress Controller annotations pointing to the Prometheus Service

    This is what the YAML looks like:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: prometheus-service
      namespace: ingress-nginx
      annotations:
        kubernetes.io/ingress.class: nginx    
    spec:
      rules:  
        - http:
            paths:
            - backend:
                serviceName: prometheus-server
                servicePort: 9090
              path: /
    

    The serviceName attribute is the name of the Service for the Prometheus server.

    kubectl apply -f "https://raw.githubusercontent.com/anirudhgarg/azurefunction-k8s-http/master/prom-ingress.yml"    
    
  8. Deploy the KEDA Prometheus ScaledObject, which in turn monitors the NGINX Ingress Controller

    kubectl apply -f "https://raw.githubusercontent.com/anirudhgarg/azurefunction-k8s-http/master/keda-prom.yml"    
    

    This is what the YAML looks like:

    apiVersion: keda.k8s.io/v1alpha1
    kind: ScaledObject
    metadata:
      name: prometheus-scaledobject
      namespace: ingress-nginx
      labels:
        deploymentName: function-helloworld-http
    spec:
      scaleTargetRef:
        deploymentName: function-helloworld-http
      pollingInterval: 15
      cooldownPeriod: 30
      minReplicaCount: 1
      maxReplicaCount: 10
      triggers:
      - type: prometheus
        metadata:
          serverAddress: http://prometheus-server.ingress-nginx.svc.cluster.local:9090
          metricName: access_frequency
          threshold: '1'
          query: sum(rate(nginx_ingress_controller_requests[1m]))
    

    deploymentName is the name of the function app Deployment; pollingInterval is how frequently (in seconds) KEDA polls Prometheus; we have a minimum of 1 pod (minReplicaCount) and a maximum scale-out of 10 pods (maxReplicaCount). query is the Prometheus query that tracks the rate of incoming requests to the ingress controller over the last minute. Since the threshold is '1' (one request per second), the function app will scale out as long as the number of requests per minute is greater than 60.
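Under the hood, KEDA hands this metric to the Horizontal Pod Autoscaler, which (roughly) computes desired replicas as ceil(currentReplicas * metricValue / threshold), clamped between minReplicaCount and maxReplicaCount. A sketch of that arithmetic, as an approximation of the HPA behavior rather than its exact implementation:

```python
import math

def desired_replicas(current, metric, threshold=1.0, min_r=1, max_r=10):
    """Approximate HPA scaling decision for the ScaledObject above."""
    desired = math.ceil(current * (metric / threshold))
    return max(min_r, min(max_r, desired))

# ~0.5 req/s across the ingress: stay at the minimum of 1 replica
print(desired_replicas(current=1, metric=0.5))  # 1
# ~6 req/s: scale out to 6 replicas
print(desired_replicas(current=1, metric=6))    # 6
# load drops again: scale back toward the minimum
print(desired_replicas(current=6, metric=0.1))  # 1
```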

  9. Test!
    The function app is now listening on:
    http://function-helloworld.eastus2.cloudapp.azure.com/helloworld/api/helloworld?name=anirudh

Note that if you did not create a domain name pointing to your Ingress Controller then you might need to use curl --resolve or its equivalent to invoke the function app

```
 curl -v function-helloworld.eastus2.cloudapp.azure.com/helloworld/api/helloworld?name=anirudh
* Trying 40.70.230.199...
* TCP_NODELAY set
* Connected to function-helloworld.eastus2.cloudapp.azure.com (40.70.230.199) port 80 (#0)
> GET /helloworld/api/helloworld?name=anirudh HTTP/1.1
> Host: function-helloworld.eastus2.cloudapp.azure.com
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: openresty/1.15.8.2
< Date: Mon, 13 Jan 2020 01:47:30 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 14
< Connection: keep-alive
<
Hello anirudh!* Connection #0 to host function-helloworld.eastus2.cloudapp.azure.com left intact
```

Now you can use your favorite HTTP load tool and send requests at a high enough rate to trigger the scale-out. (I used a tool called k6.)

```
kubectl -n ingress-nginx get pods
NAME                                                              READY   STATUS              RESTARTS   AGE
function-helloworld-http-6ccd9c9bbf-6f6d7                         0/1     ContainerCreating   0          0s
function-helloworld-http-6ccd9c9bbf-98bzq                         1/1     Running             0          15s
function-helloworld-http-6ccd9c9bbf-dcdwc                         0/1     ContainerCreating   0          0s
function-helloworld-http-6ccd9c9bbf-fr7hq                         0/1     ContainerCreating   0          0s
function-helloworld-http-6ccd9c9bbf-k9lhn                         1/1     Running             0          6d20h
function-helloworld-http-6ccd9c9bbf-mfp4c                         1/1     Running             0          15s
function-helloworld-http-6ccd9c9bbf-v7g47                         0/1     ContainerCreating   0          0s
function-helloworld-http-6ccd9c9bbf-x9l2t                         1/1     Running             0          15s
ingress-controller-nginx-ingress-controller-6c9f7486d4-27vjq      1/1     Running             0          7d1h
ingress-controller-nginx-ingress-controller-6c9f7486d4-b8ddr      1/1     Running             0          7d1h
ingress-controller-nginx-ingress-default-backend-df57464f-lvqmj   1/1     Running             0          7d1h
keda-operator-697b98dcdd-8zdrk                                    2/2     Running             0          6d18h
prometheus-server-86cd54f9d5-9xxh7                                1/1     Running             0          7d1h
```

After a while if there are no further requests the function pods will scale back down to 1. Note that we are only scaling down to 1 here.

Hope you found this useful. Please try it out, and let me know in the comments or send me a tweet whether this approach worked for you. We are looking to streamline this process further as we go forward.

Acknowledgments for inspiration - Autoscaling Kubernetes apps with Prometheus and KEDA post by Abhishek Gupta, and to OpenFaaS which also uses Prometheus metrics for request based scaling.