In this workshop we will learn what KEDA is, how it works, what the built-in scalers are, and how to build an External scaler tailored to our needs.
- Docker Desktop
- Minikube or Docker Desktop with Kubernetes enabled
- Kubectl
- Helm
- .NET 6.0 SDK
There are several ways to install KEDA; the simplest is to use the Helm chart.
- Add Helm repo
helm repo add kedacore https://kedacore.github.io/charts
- Update Helm repo
helm repo update
- Install KEDA Helm chart
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
For other options check KEDA's deployment documentation.
Clone https://github.com/kedacore/sample-go-rabbitmq
Follow the instructions of the sample until you reach "Deploying a RabbitMQ consumer", so we can discuss the deploy/deploy-consumer.yaml file.
This is the deployment that we target for scaling; in a real scenario it could be, for example, a deployment that consumes messages from a queue.
In our case it is just a simple web app sample provided by Microsoft. We are not going to worry about exposing it with a Service; the purpose of the workshop is just to show how this deployment scales up and down based on our External scaler.
To deploy the target app:
kubectl apply -f my-scaler/yaml/target-deployment.yaml
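For reference, a minimal target deployment might look like the following sketch. The deployment name and image here are assumptions (use the actual file from the repo); the name is what the ScaledObject will later point at.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: target-app          # assumed name; must match the ScaledObject's scaleTargetRef later
spec:
  replicas: 1
  selector:
    matchLabels:
      app: target-app
  template:
    metadata:
      labels:
        app: target-app
    spec:
      containers:
        - name: target-app
          # Microsoft's ASP.NET sample image (assumed; any simple web app works)
          image: mcr.microsoft.com/dotnet/samples:aspnetapp
```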
External scalers are containers that expose a set of gRPC endpoints defined by KEDA. Let's create a gRPC .NET app from the built-in template:
dotnet new grpc -n my-scaler
- Add the file
externalscaler.proto
from https://github.com/kedacore/keda/blob/main/pkg/scalers/externalscaler/externalscaler.proto to the folder my-scaler/Protos
- Include the file we just created in the gRPC code generation by adding the following line to the .csproj file:
<Protobuf Include="Protos\externalscaler.proto" GrpcServices="Server" />
- Run
dotnet build
to generate the base gRPC code
- Create a file
ExternalScalerService.cs
under the Services folder; we will build it up gradually together. To save time, you can copy the file from this repo if you want to jump to its final state.
- Add the following line to the
Program.cs
file:
app.MapGrpcService<ExternalScalerService>();
Note: if you're using an earlier .NET Core version, you might instead need to add this to the Startup.cs
file, in the UseEndpoints
section:
endpoints.MapGrpcService<ExternalScalerService>();
- Add the following line to the
Program.cs
file:
builder.Services.AddHttpClient();
Note: if you're using an earlier .NET Core version, you might instead need to add this to the Startup.cs
file, in the ConfigureServices
method:
services.AddHttpClient();
- Create a
Dockerfile
file and a .dockerignore
file (it's important not to forget this one; you can copy the content from the repo)
- Build the image by running:
docker build . -t my-scaler-image
Feel free to choose any image name you like; just remember to use it consistently from this point on.
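As a sketch of where we are heading, the ExternalScalerService built on the code generated from externalscaler.proto could look roughly like this. The RPC names and message fields come from the proto in the KEDA repo; the fake endpoint URL, metric name, and target size are placeholders for this workshop.

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Grpc.Core;
using Externalscaler; // namespace generated from externalscaler.proto

public class ExternalScalerService : ExternalScaler.ExternalScalerBase
{
    private readonly IHttpClientFactory _httpClientFactory;

    public ExternalScalerService(IHttpClientFactory httpClientFactory)
        => _httpClientFactory = httpClientFactory;

    // Calls the fake endpoint and parses the integer it returns.
    // Placeholder URL: adjust to your mockserver Service address and path.
    private async Task<long> GetFakeValueAsync()
    {
        var client = _httpClientFactory.CreateClient();
        var body = await client.GetStringAsync(
            "http://mockserver.mockserver.svc.cluster.local:1080/fake");
        return long.Parse(body);
    }

    // KEDA calls this to decide whether to scale from zero.
    public override async Task<IsActiveResponse> IsActive(
        ScaledObjectRef request, ServerCallContext context)
        => new IsActiveResponse { Result = await GetFakeValueAsync() > 0 };

    // Declares the metric and the per-pod target value.
    public override Task<GetMetricSpecResponse> GetMetricSpec(
        ScaledObjectRef request, ServerCallContext context)
    {
        var response = new GetMetricSpecResponse();
        // Arbitrary target: one pod per 10 units of the fake metric.
        response.MetricSpecs.Add(new MetricSpec { MetricName = "fakeMetric", TargetSize = 10 });
        return Task.FromResult(response);
    }

    // Reports the current metric value on each polling interval.
    public override async Task<GetMetricsResponse> GetMetrics(
        GetMetricsRequest request, ServerCallContext context)
    {
        var response = new GetMetricsResponse();
        response.MetricValues.Add(new MetricValue
        {
            MetricName = "fakeMetric",
            MetricValue_ = await GetFakeValueAsync()
        });
        return response;
    }
}
```

The proto also defines StreamIsActive for push-based scaling; we leave the base implementation in place since this workshop only uses polling.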
Let's create a Deployment and a Service to run our scaler and serve requests to it. From the root of this repo copy the file my-scaler/yaml/my-scaler-deployment.yaml
, and then run:
kubectl apply -f my-scaler-deployment.yaml
Note: you can use port forwarding to troubleshoot the gRPC service, and a tool like BloomRPC to check that it is working properly:
kubectl port-forward service/my-scaler-service 3333:80
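For reference, my-scaler-deployment.yaml might look roughly like the following sketch. The labels, ports, and pull policy are assumptions (use the file from the repo); it assumes the container serves gRPC over HTTP/2 on port 80.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-scaler
  template:
    metadata:
      labels:
        app: my-scaler
    spec:
      containers:
        - name: my-scaler
          image: my-scaler-image
          imagePullPolicy: Never   # assumed: the image was built locally, not pushed to a registry
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-scaler-service
spec:
  selector:
    app: my-scaler
  ports:
    - port: 80
      targetPort: 80
```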
In this step we are going to create a fake HTTP endpoint. Our scaler will query this endpoint, which returns an integer that we will use as the fake criterion for scaling our deployment.
In a real scenario this might be the length of a queue in a technology that KEDA does not support out of the box, the number of logged-in users, etc.
- In this repo, open the file
mock-server/mockserver-config/static/initializerJson.json
and create a new endpoint that returns an integer in string format. Let's call it fake.
- Create the namespace "mockserver":
kubectl create namespace mockserver
- Navigate to the folder
mock-server
and then run the following command to create a configmap from which the mockserver will read its configuration:
helm upgrade --install --namespace mockserver mockserver-config mockserver-config
- Create a deployment to run the mockserver itself by running the following:
helm upgrade --install --namespace mockserver --set app.mountConfigMap=true --set app.mountedConfigMapName=mockserver-config --set app.propertiesFileName=mockserver.properties --set app.initializationJsonFileName=initializerJson.json mockserver mockserver
- If you want to change the configuration to experiment with scaling up and down, run the following commands to restart the mockserver and force it to pick up the new config values:
helm upgrade --install --namespace mockserver mockserver-config mockserver-config
kubectl rollout restart deploy/mockserver -n mockserver
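As an illustration, the fake expectation added to initializerJson.json might look like this sketch (the path and value are assumptions): MockServer matches the request against httpRequest and replies with the httpResponse body.

```json
[
  {
    "httpRequest": {
      "method": "GET",
      "path": "/fake"
    },
    "httpResponse": {
      "statusCode": 200,
      "body": "42"
    }
  }
]
```

Changing the body value (for example from "42" to "0") and restarting the mockserver is the easiest way to watch the deployment scale down later.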
A ScaledObject is the Kubernetes resource (specific to KEDA) that tells KEDA to scale our target deployment based on the configuration within it. Copy the content of the file from my-scaler/yaml/scaled-config.yaml
, and then run:
kubectl apply -f scaled-config.yaml
If everything is set up correctly and the fake endpoint returns the right value, watch your target deployment scale out to multiple pods.
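For reference, scaled-config.yaml contains a ScaledObject roughly like this sketch. The names, namespace, and replica bounds are assumptions (use the file from the repo); for an external trigger, scalerAddress must point at the gRPC Service created earlier.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaled-object
  namespace: default
spec:
  scaleTargetRef:
    name: target-app              # assumed: name of the target Deployment
  pollingInterval: 10             # seconds between GetMetrics calls to our scaler
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: external
      metadata:
        # host:port of the gRPC Service running our External scaler
        scalerAddress: my-scaler-service.default.svc.cluster.local:80
```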