In this workflow scenario, you'll set up a simple, non-secure (no authentication, authorization, or encryption) deployment of Confluent Platform, consisting of all components.
The goal for this scenario is for you to:
- Quickly set up the complete Confluent Platform on Kubernetes.
- Configure a producer to generate sample data.
Watch the walkthrough: Quickstart Demonstration
Before continuing with the scenario, ensure that you have set up the prerequisites.
To complete this scenario, you'll follow these steps:
- Set the current tutorial directory.
- Deploy Confluent For Kubernetes.
- Deploy Confluent Platform.
- Deploy the Producer application.
- Tear down Confluent Platform.
Set the tutorial directory to the location where you downloaded the tutorial files:
export TUTORIAL_HOME=<Tutorial directory>/quickstart-deploy
Create the namespace to use:
kubectl create namespace confluent
Set this namespace as the default for your Kubernetes context:
kubectl config set-context --current --namespace confluent
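To confirm that the default namespace took effect, you can read it back from the current context. This verification step is an optional addition, not part of the original tutorial:

```shell
# Print the namespace configured for the current context; should output "confluent".
kubectl config view --minify --output 'jsonpath={..namespace}'
```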
Set up the Helm Chart:
helm repo add confluentinc https://packages.confluent.io/helm
Install Confluent For Kubernetes using Helm:
helm upgrade --install confluent-operator confluentinc/confluent-for-kubernetes --namespace confluent
Check that the Confluent For Kubernetes pod comes up and is running:
kubectl get pods
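Rather than polling kubectl get pods manually, you can block until the operator pod reports ready. The label selector below is an assumption about how the Helm chart labels the operator pod and may need adjusting for your chart version:

```shell
# Wait up to 5 minutes for the Confluent for Kubernetes operator pod to become Ready.
# The label selector is an assumption; check `kubectl get pods --show-labels` if it does not match.
kubectl wait --for=condition=Ready pod \
  -l app.kubernetes.io/name=confluent-for-kubernetes \
  --timeout=300s
```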
You install Confluent Platform components as custom resources (CRs). Each component can be configured through its own CR. In this tutorial, you will configure all components in a single file and deploy them with one kubectl apply command.
The entire Confluent Platform is configured in one configuration file:
$TUTORIAL_HOME/confluent-platform.yaml
In this configuration file, there is a custom resource (CR) configuration spec for each Confluent Platform component: the replicas, the image to use, and the resource allocations.
For example, the Kafka section of the file is as follows:
---
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3
  image:
    application: confluentinc/cp-server:7.8.0
    init: confluentinc/confluent-init-container:2.10.0
  dataVolumeCapacity: 10Gi
  metricReporter:
    enabled: true
  dependencies:
    zookeeper:
      endpoint: zookeeper.confluent.svc.cluster.local:2181
---
Deploy Confluent Platform with the above configuration:
kubectl apply -f $TUTORIAL_HOME/confluent-platform.yaml
Note: If you are deploying a single-node dev cluster, use this YAML file instead:
kubectl apply -f $TUTORIAL_HOME/confluent-platform-singlenode.yaml
Check that all Confluent Platform resources are deployed:
kubectl get confluent
Get the status of any component. For example, to check Kafka:
kubectl describe kafka
Now that we've got the infrastructure set up, let's deploy the producer client app.
The producer app is packaged and deployed as a pod on Kubernetes. The required topic is defined as a KafkaTopic custom resource in $TUTORIAL_HOME/producer-app-data.yaml.
The $TUTORIAL_HOME/producer-app-data.yaml file defines the elastic-0 topic as follows:
apiVersion: platform.confluent.io/v1beta1
kind: KafkaTopic
metadata:
  name: elastic-0
  namespace: confluent
spec:
  replicas: 3 # change to 1 if using single node
  partitionCount: 1
  configs:
    cleanup.policy: "delete"
Deploy the producer app:
kubectl apply -f $TUTORIAL_HOME/producer-app-data.yaml
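As a quick command-line check before opening Control Center, you can consume a few messages directly from inside a Kafka broker pod. The internal listener port (9071) and the availability of kafka-console-consumer on the image's PATH are assumptions based on typical Confluent for Kubernetes defaults; adjust them for your deployment:

```shell
# Read 5 messages from the elastic-0 topic from inside the kafka-0 broker pod.
# Port 9071 (the internal listener) is an assumed default; verify it in your Kafka CR status.
kubectl exec kafka-0 -- kafka-console-consumer \
  --bootstrap-server kafka.confluent.svc.cluster.local:9071 \
  --topic elastic-0 \
  --from-beginning \
  --max-messages 5
```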
Note: If you are deploying a single-node dev cluster, use this YAML file instead:
kubectl apply -f $TUTORIAL_HOME/producer-app-data-singlenode.yaml
Use Control Center to monitor the Confluent Platform, and see the created topic and data.
Set up port forwarding to the Control Center web UI from your local machine:
kubectl port-forward controlcenter-0 9021:9021
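If the pod name differs in your deployment, you can forward to the Control Center Kubernetes service instead of the pod. The service name controlcenter is an assumption about what Confluent for Kubernetes creates; confirm it with kubectl get services:

```shell
# Forward local port 9021 to the Control Center service (assumed name: controlcenter).
kubectl port-forward svc/controlcenter 9021:9021
```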
Browse to Control Center:
http://localhost:9021
Check that the elastic-0 topic was created and that messages are being produced to it.
Shut down Confluent Platform and delete the data:
kubectl delete -f $TUTORIAL_HOME/producer-app-data.yaml
kubectl delete -f $TUTORIAL_HOME/confluent-platform.yaml
helm uninstall confluent-operator