This blog post describes how to scale an HTTP-triggered application up and down in Kubernetes, based on incoming requests, using KEDA and Prometheus.
The application here is an Azure Functions app, but it can be any app that exposes an HTTP interface. Where appropriate below, I point out where you can substitute your own app for the Azure Functions app.
Among the major cloud providers, the FaaS implementation in Azure, Azure Functions, is unique in that its runtime is open source. The runtime and your code can therefore be packaged into a custom container and deployed on your own infrastructure, including Kubernetes.
To enable scaling of a function app (or any other workload) in Kubernetes, we at Azure (along with Red Hat) built KEDA, Kubernetes Event Driven Autoscaling. With the combination of the aforementioned runtime and KEDA, you can run and scale your Azure Functions in your own Kubernetes cluster. Currently KEDA supports more than twenty different event sources, including Kafka, RabbitMQ, NATS, Azure Queue, AWS SQS Queue, GCP Pub/Sub, etc. However, there is no built-in support for HTTP request based scaling. This post outlines one approach to scaling an HTTP-triggered function app in Kubernetes using the KEDA Prometheus scaler and an ingress controller.
Overview
The basic idea is to deploy an ingress controller, in this case the NGINX Ingress Controller, and have all HTTP traffic to your function app go through it. We use Prometheus to track the incoming-request metrics on the ingress controller. Finally, we use KEDA's Prometheus-based scaler to scale the function app deployment up and down based on those metrics.
Walkthrough
Prerequisites
- A Kubernetes cluster which has the ability to install a Service with a Load Balancer (usually any cloud provider). The steps below were tested using an AKS cluster.
- kubectl pointing to your Kubernetes cluster
- Helm to install the artifacts. All of the commands below use Helm 3
- Azure Functions core tools
- docker installed locally and a Docker Hub account.
Steps
Create a namespace for your ingress resources
kubectl create namespace ingress-nginx
Install the NGINX-Ingress ingress controller
Use Helm to deploy the NGINX ingress controller, enabling metrics and setting the annotations Prometheus needs to scrape it. The ingress controller is installed as a Service of type LoadBalancer. In addition, a default-backend Service and a metrics Service are also deployed.
```
helm install ingress-controller stable/nginx-ingress \
  --namespace ingress-nginx \
  --set controller.replicaCount=2 \
  --set controller.metrics.enabled=true \
  --set controller.podAnnotations."prometheus\.io/scrape"="true" \
  --set controller.podAnnotations."prometheus\.io/port"="10254"

kubectl -n ingress-nginx get svc
NAME                                                  TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
ingress-controller-nginx-ingress-controller           LoadBalancer   10.0.14.166    40.70.230.xxx   80:31036/TCP,443:32179/TCP   31s
ingress-controller-nginx-ingress-controller-metrics   ClusterIP      10.0.240.199   <none>          9913/TCP                     31s
ingress-controller-nginx-ingress-default-backend      ClusterIP      10.0.63.133    <none>          80/TCP                       31s
```
The ingress controller is exposed via the EXTERNAL-IP (40.70.230.xxx above). Also have a look at the NGINX ingress installation page for instructions on installing it in various configurations.
Optionally - Create a DNS entry pointing to your ingress controller. For AKS, you can get a cloudapp.azure.com address using the procedure here. In the steps below the FQDN configured is "function-helloworld.eastus2.cloudapp.azure.com"
Deploy Prometheus to monitor the NGINX ingress controller
```
kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/

kubectl -n ingress-nginx get svc
NAME                TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
...
prometheus-server   NodePort   10.0.38.50   <none>        9090:30860/TCP   34s

kubectl -n ingress-nginx get pods
NAME                                 READY   STATUS    RESTARTS   AGE
..
prometheus-server-86cd54f9d5-9xxh7   1/1     Running   0          95s
```
Note: If you use any namespace other than "ingress-nginx", you need to change the namespace in the YAML files from here before deploying.
Deploy KEDA
If you have not already added the kedacore Helm repository, add it first:

```
helm repo add kedacore https://kedacore.github.io/charts
helm repo update

helm install keda kedacore/keda --namespace ingress-nginx

kubectl get pods -n ingress-nginx
NAME                             READY   STATUS    RESTARTS   AGE
...
keda-operator-697b98dcdd-8zdrk   2/2     Running   0          5d17h
```
Deploy a function app to the Kubernetes cluster
Create a Python HTTP-triggered function app, generate the required Dockerfile, and finally deploy the function app to the cluster.

```
func init --worker-runtime python
func new --template "HttpTrigger" --name helloworld
func init --docker-only
func kubernetes deploy --name function-helloworld --namespace ingress-nginx --service-type ClusterIP --registry anirudhgarg
```
Note that the authentication mode has to be changed to anonymous for now, while we work on supporting function keys. Navigate to the function folder, open the function.json file, find the line "authLevel": "function", and change the authLevel to "anonymous".
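After the change, function.json for the helloworld function looks roughly like this (the exact bindings depend on the template version you generated):

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```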
--name is the name of your Deployment. --registry points to your Docker Hub registry; you have to be logged in to Docker and connected to your account locally. See more here.
Note: Instead of using a function app, you can deploy your own app that listens for HTTP requests. Just make sure you create a Kubernetes ClusterIP Service pointing to your Deployment.
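For example, a minimal ClusterIP Service for your own app might look like the sketch below. The name my-http-app and the port numbers are placeholders; match the selector to your Deployment's pod labels and targetPort to the port your container listens on.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-http-app          # placeholder; the ingress backend's serviceName must match this
  namespace: ingress-nginx
spec:
  type: ClusterIP
  selector:
    app: my-http-app         # must match the pod labels of your Deployment
  ports:
    - port: 80               # port the ingress routes to
      targetPort: 8080       # port your container listens on
```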
Deploy an Ingress Resource pointing to the deployed function app Service.
This is how the YAML looks:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: function-helloworld
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: <replace-with-host-name-pointing-to-ingress-controller>
    http:
      paths:
      - backend:
          serviceName: function-helloworld-http
          servicePort: 80
        path: /helloworld(/|$)(.*)
```
You can find an example here
The serviceName attribute is the name of the Service for the function app. host should be the FQDN configured to point to the ingress controller. You can choose an arbitrary name here, but a host has to be configured, otherwise Prometheus monitoring of the Ingress resource will not work. A path has also been configured with the prefix "helloworld"; the rewrite-target annotation strips that prefix (keeping the second capture group) before the request is forwarded to the function app.
Create an Ingress Resource with NGINX Ingress Controller annotations pointing to the Prometheus Service
This is how the YAML looks:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-service
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: prometheus-server
          servicePort: 9090
        path: /
```
The serviceName attribute is the name of the Service for the Prometheus server.
kubectl apply -f "https://raw.githubusercontent.com/anirudhgarg/azurefunction-k8s-http/master/prom-ingress.yml"
Deploy the KEDA Prometheus ScaledObject, which in turn monitors the NGINX Ingress Controller
kubectl apply -f "https://raw.githubusercontent.com/anirudhgarg/azurefunction-k8s-http/master/keda-prom.yml"
This is how the YAML looks:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: prometheus-scaledobject
  namespace: ingress-nginx
  labels:
    deploymentName: function-helloworld-http
spec:
  scaleTargetRef:
    deploymentName: function-helloworld-http
  pollingInterval: 15
  cooldownPeriod: 30
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus-server.ingress-nginx.svc.cluster.local:9090
      metricName: access_frequency
      threshold: '1'
      query: sum(rate(nginx_ingress_controller_requests[1m]))
```
deploymentName is the name of the function app Deployment; pollingInterval is how frequently (in seconds) KEDA polls Prometheus; we keep a minimum of 1 pod (minReplicaCount) and scale out to at most 10 pods (maxReplicaCount). query is the Prometheus query that tracks the rate of incoming requests to the ingress controller over the last minute. Note that rate() yields a per-second value, so with a threshold of '1' the function app scales out as long as the request rate exceeds 1 request/second, i.e. more than 60 requests/minute.
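To make the arithmetic concrete, here is a rough sketch (in Python, for illustration only) of how the Horizontal Pod Autoscaler that KEDA configures turns the metric into a replica count. The real HPA also applies tolerances and stabilization windows, so treat this as an approximation, not the exact algorithm:

```python
import math

def desired_replicas(metric_value, threshold, min_replicas=1, max_replicas=10):
    """Approximate the HPA decision for an external metric with a target
    average value of `threshold`: ask for ceil(metricValue / threshold)
    replicas, clamped to the configured min/max bounds."""
    desired = math.ceil(metric_value / threshold)
    return max(min_replicas, min(max_replicas, desired))

# A sustained rate of 5 req/s against threshold '1' asks for 5 replicas:
print(desired_replicas(5.0, 1.0))    # -> 5

# A burst of 100 req/s is clamped at maxReplicaCount:
print(desired_replicas(100.0, 1.0))  # -> 10

# With no traffic we fall back to minReplicaCount, never to zero:
print(desired_replicas(0.0, 1.0))    # -> 1
```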
Test!
The function app is now listening on:
http://function-helloworld.eastus2.cloudapp.azure.com/helloworld/api/helloworld?name=anirudh
Note that if you did not create a domain name pointing to your ingress controller, you may need to use curl's --resolve option (e.g. `curl --resolve function-helloworld.eastus2.cloudapp.azure.com:80:<EXTERNAL-IP> ...`) or its equivalent to invoke the function app.
```
curl -v function-helloworld.eastus2.cloudapp.azure.com/helloworld/api/helloworld?name=anirudh
* Trying 40.70.230.199...
* TCP_NODELAY set
* Connected to function-helloworld.eastus2.cloudapp.azure.com (40.70.230.199) port 80 (#0)
> GET /helloworld/api/helloworld?name=anirudh HTTP/1.1
> Host: function-helloworld.eastus2.cloudapp.azure.com
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: openresty/1.15.8.2
< Date: Mon, 13 Jan 2020 01:47:30 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 14
< Connection: keep-alive
<
Hello anirudh!
* Connection #0 to host function-helloworld.eastus2.cloudapp.azure.com left intact
```
Now you can use your favorite HTTP load-testing tool and send requests at a high enough rate to trigger the scale-out. (I used a tool called k6.)
```
kubectl -n ingress-nginx get pods
NAME READY STATUS RESTARTS AGE
function-helloworld-http-6ccd9c9bbf-6f6d7 0/1 ContainerCreating 0 0s
function-helloworld-http-6ccd9c9bbf-98bzq 1/1 Running 0 15s
function-helloworld-http-6ccd9c9bbf-dcdwc 0/1 ContainerCreating 0 0s
function-helloworld-http-6ccd9c9bbf-fr7hq 0/1 ContainerCreating 0 0s
function-helloworld-http-6ccd9c9bbf-k9lhn 1/1 Running 0 6d20h
function-helloworld-http-6ccd9c9bbf-mfp4c 1/1 Running 0 15s
function-helloworld-http-6ccd9c9bbf-v7g47 0/1 ContainerCreating 0 0s
function-helloworld-http-6ccd9c9bbf-x9l2t 1/1 Running 0 15s
ingress-controller-nginx-ingress-controller-6c9f7486d4-27vjq 1/1 Running 0 7d1h
ingress-controller-nginx-ingress-controller-6c9f7486d4-b8ddr 1/1 Running 0 7d1h
ingress-controller-nginx-ingress-default-backend-df57464f-lvqmj 1/1 Running 0 7d1h
keda-operator-697b98dcdd-8zdrk 2/2 Running 0 6d18h
prometheus-server-86cd54f9d5-9xxh7 1/1 Running 0 7d1h
```
After a while, if there are no further requests, the function pods scale back down to 1. Note that we only scale down to 1 here, not to zero, since minReplicaCount is set to 1.
Hope you found this useful. Please try it out and let me know in the comments or send me a tweet if this approach worked for you or not. We are looking to streamline this process further as we go forward.
Acknowledgments for inspiration: the Autoscaling Kubernetes apps with Prometheus and KEDA post by Abhishek Gupta, and OpenFaaS, which also uses Prometheus metrics for request-based scaling.