Small repository that contains code samples for testing the cluster capabilities of Payara Micro 5 when used within a Kubernetes cluster.
Important: This sample project has been tested on Kubernetes version 1.21 and later.
There is a sample application that exposes REST services to interact with cached user data and to simulate intensive CPU usage across the entire runtime, in order to demonstrate how Kubernetes autoscaling works.
This application is configured to run using the Payara Micro Maven Plugin. To start the application, simply run:
> mvn clean install
> mvn payara-micro:start
In order to test the application locally, a new Docker image has to be built after packaging the WAR application:
> docker build -t payara/cluster-demo .
To test this image, start a new container with the following command:
> docker run --rm -d -p 8080:8080 --name demo-cluster payara/cluster-demo
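To quickly confirm that the Payara Micro instance booted correctly inside the container, you can tail the container logs using the container name given above:
> docker logs -f demo-cluster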
It is recommended to test the Kubernetes cluster locally using either Docker Desktop or Minikube.
Since this demo tests pod autoscaling via a `HorizontalPodAutoscaler`, a metrics server has to be configured first. Follow these steps:
- Download the latest release of the Metrics Server K8S components definition:
> curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
- Edit the YAML definition file to configure insecure TLS communication to the kubelets (only done since this is a local test):

...
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls # <-- ADD THIS LINE
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
...
- Create the K8S components needed to establish the metrics server:
> kubectl apply -f components.yaml
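Once the metrics server deployment is running, it takes a short while before resource metrics become available. A simple way to confirm that the metrics API is serving data is the standard top command:
> kubectl top nodes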
All object definitions are located in the k8s/ directory. Create the following objects in order:
- First, create the role binding that grants the `view` ClusterRole to the service account named `default` (a minimal sketch of this binding is shown after this list):
> kubectl apply -f k8s/rbac.yaml
- Create the service to allow pods to find each other and allow their Payara Micro instances to cluster:
> kubectl apply -f k8s/demo-service.yaml
- Create the deployment that will initialize 3 pods:
> kubectl apply -f k8s/demo-deployment.yaml
- Finally, enable the autoscaler (a sketch of a typical autoscaler definition is shown after this list):
> kubectl apply -f k8s/demo-scaler.yaml
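The authoritative definitions of these objects are the files in the k8s/ directory; the snippets below are only minimal sketches of what such objects typically look like, and the names, replica bounds and thresholds in them are illustrative assumptions rather than the repository's actual values. The role binding from the first step roughly amounts to granting the built-in `view` ClusterRole to the `default` service account, which the Payara Micro data grid uses to discover the other pods through the Kubernetes API:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-view          # illustrative name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                  # read-only access, enough for pod discovery
subjects:
- kind: ServiceAccount
  name: default
  namespace: default

The autoscaler from the last step is typically a CPU-based HorizontalPodAutoscaler that targets the deployment created above; note that CPU-utilization targets only work if the pod template declares CPU resource requests:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: demo-scaler           # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cluster-demo        # illustrative; must match the deployment's actual name
  minReplicas: 3              # matches the 3 initial pods
  maxReplicas: 6              # illustrative upper bound
  targetCPUUtilizationPercentage: 50   # scale out when average CPU exceeds 50% of the requested CPU

You can inspect the current state of the autoscaler at any time with:
> kubectl get hpa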
In order to test auto scaling, you can use the /simulate/busy REST endpoint of the application. Since pod endpoints are load-balanced by the demo-service service created previously, requests sent through the service cannot be directed at a specific pod to trigger this endpoint. However, you can use kubectl port-forward to map a pod's exposed port to a port on the local machine and temporarily send requests directly to that pod:
> kubectl port-forward <POD-NAME> <LOCAL_PORT>:8080
> curl -X POST http://localhost:<LOCAL_PORT>/simulate/busy/start
Note: The second command should run in a separate terminal, since the port-forwarding operation runs in the foreground. Once you are done sending requests directly to the pod, you can stop this operation.
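While the simulated load is running, you can watch the autoscaler and the pod count react using standard kubectl commands (scaling usually takes a minute or two to kick in):
> kubectl get hpa --watch
> kubectl get pods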
To signal the Payara Micro instance in the pod to stop the simulated CPU load, simply send a POST request to the /simulate/busy/stop endpoint.
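For example, while the port-forward from the previous step is still running:
> curl -X POST http://localhost:<LOCAL_PORT>/simulate/busy/stop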