Payara Micro 5 - Kubernetes Ready Demo

A small repository containing code samples for testing the clustering capabilities of Payara Micro 5 when used within a Kubernetes cluster.

Important
This sample project has been tested on Kubernetes version 1.21 and later.

Sample Application

The sample application exposes REST services to interact with cached user data and to simulate intensive CPU usage across the runtime, in order to demonstrate how Kubernetes autoscaling works.

This application is configured to run using the Payara Micro Maven Plugin. To start the application, simply run:

> mvn clean install
> mvn payara-micro:start
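
To stop a Payara Micro instance started this way, the plugin also provides a stop goal:

> mvn payara-micro:stop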

Docker Image

In order to test the application locally, a Docker image has to be built after the WAR application has been packaged:

> docker build -t payara/cluster-demo .

To test this image, start a new container with the following command:

> docker run --rm -d -p 8080:8080 --name demo-cluster payara/cluster-demo
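
To quickly check that the container is up, you can query the MicroProfile Health endpoint that Payara Micro serves on its HTTP port (assuming this demo keeps the default configuration):

> curl http://localhost:8080/health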

Kubernetes Steps

Metrics Server

It is recommended to test the Kubernetes cluster locally using either Docker Desktop or Minikube.

Since this demo tests pod autoscaling via a HorizontalPodAutoscaler, a metrics server has to be configured first. Follow these steps:

  1. Download the latest release of the Metrics Server K8S components definition:

    curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
  2. Edit the YAML definition file to allow insecure TLS communication with the kubelets (only needed because this is a local test):

    ...
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      strategy:
        rollingUpdate:
          maxUnavailable: 0
      template:
        metadata:
          labels:
            k8s-app: metrics-server
        spec:
          containers:
          - args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
            - --kubelet-insecure-tls # <-- ADD THIS LINE
            - --metric-resolution=15s
            image: k8s.gcr.io/metrics-server/metrics-server:v0.6.0
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /livez
                port: https
                scheme: HTTPS
              periodSeconds: 10
    ...
  3. Create the K8S components needed to establish the metrics server:

    > kubectl apply -f components.yaml
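
Once the metrics-server pod is running, you can verify that resource metrics are being collected:

> kubectl top nodes
> kubectl top pods --all-namespaces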

Cluster Setup

All object definitions are located in the k8s/ directory. Create the following objects in order:

  1. First, create the role binding that grants the view ClusterRole to the service account named default:

    > kubectl apply -f k8s/rbac.yaml
  2. Create the service that allows pods to discover each other so that their Payara Micro instances can form a cluster:

    > kubectl apply -f k8s/demo-service.yaml
  3. Create the deployment that will start 3 pods:

    > kubectl apply -f k8s/demo-deployment.yaml
  4. Finally, enable the autoscaler (a hedged sketch of what this definition might look like follows this list):

    > kubectl apply -f k8s/demo-scaler.yaml
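
For reference, k8s/demo-scaler.yaml is expected to define a HorizontalPodAutoscaler along the lines of the sketch below. The deployment name, replica bounds and CPU target shown here are illustrative assumptions; check the actual file in the k8s/ directory for the real values:

    apiVersion: autoscaling/v2       # use autoscaling/v2beta2 on clusters older than 1.23
    kind: HorizontalPodAutoscaler
    metadata:
      name: demo-scaler              # assumed name, taken from the file name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: demo-deployment        # assumed name of the deployment created in step 3
      minReplicas: 3                 # matches the 3 initial pods of the deployment
      maxReplicas: 6                 # illustrative upper bound
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 50   # illustrative CPU utilization target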

Testing Autoscaling

To test autoscaling, you can use the /simulate/busy REST endpoint of the application. Since the pod endpoints are managed by the demo-service service created previously, there is no direct way to route requests to a specific pod to trigger this endpoint.

However, you can use kubectl port-forward to map a pod's exposed port to a port on the local machine and temporarily send requests to this endpoint directly:

> kubectl port-forward <POD-NAME> <LOCAL_PORT>:8080
> curl -X POST http://localhost:<LOCAL_PORT>/simulate/busy/start

Note
The second command should be run in a separate terminal, since the port-forward runs in the foreground. Once you are done sending requests directly to the pod, you can stop it.

To signal the Payara Micro instance in the pod to stop the simulated CPU load, simply send a POST request to the /simulate/busy/stop endpoint.
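
While the load simulation is running, you can watch the autoscaler add pods and later scale them back down (the resource names depend on what the files in k8s/ define):

> kubectl get hpa --watch
> kubectl get pods --watch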
