AMD GPU device plugin for Kubernetes

Introduction

This is a Kubernetes device plugin implementation that enables the registration of AMD GPUs in a container cluster for compute workloads. With the appropriate hardware and this plugin deployed in your Kubernetes cluster, you will be able to run jobs that require AMD GPUs.

More information about ROCm.

Prerequisites

  • ROCm

Limitations

  • This plugin targets Kubernetes v1.18+.

Deployment

The device plugin needs to run on all the nodes that are equipped with AMD GPUs. The simplest way of doing so is to create a Kubernetes DaemonSet, which runs a copy of a pod on all (or some) nodes in the cluster. We have a pre-built Docker image on DockerHub that you can use for your DaemonSet. This repository also has a pre-defined yaml file named k8s-ds-amdgpu-dp.yaml. You can create a DaemonSet in your Kubernetes cluster by running this command:

kubectl create -f k8s-ds-amdgpu-dp.yaml

or directly pull from the web using

kubectl create -f https://raw.githubusercontent.com/ROCm/k8s-device-plugin/master/k8s-ds-amdgpu-dp.yaml

If you want to enable the experimental device health check, use k8s-ds-amdgpu-dp-health.yaml instead, after --allow-privileged=true is set for the kube-apiserver.

Helm Chart

If you want to deploy this device plugin using Helm, a Helm Chart is available via Artifact Hub.

Example workload

You can restrict workloads to nodes with a GPU by adding resources.limits to the pod definition. An example pod definition is provided in example/pod/alexnet-gpu.yaml. This pod runs the AlexNet timing benchmark on an AMD GPU and then goes to sleep. You can create the pod by running:

kubectl create -f alexnet-gpu.yaml

or

kubectl create -f https://raw.githubusercontent.com/ROCm/k8s-device-plugin/master/example/pod/alexnet-gpu.yaml

and then check the pod status by running:

kubectl describe pods

After the pod is created and running, you can see the benchmark result by running:

kubectl logs alexnet-tf-gpu-pod alexnet-tf-gpu-container

For comparison, an example pod definition that runs the same benchmark on the CPU is provided in example/pod/alexnet-cpu.yaml.
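
The essential part of these pod definitions is the amd.com/gpu resource limit, which is the resource name this device plugin advertises to the kubelet. A minimal sketch, using a placeholder image and command rather than the actual contents of alexnet-gpu.yaml, might look like:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-example
spec:
  containers:
    - name: gpu-container
      image: rocm/tensorflow:latest                  # placeholder image, not the one used by alexnet-gpu.yaml
      command: ["/bin/sh", "-c", "sleep infinity"]   # placeholder command
      resources:
        limits:
          amd.com/gpu: 1                             # request one AMD GPU from the device plugin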

Labelling nodes with additional GPU properties

Please see AMD GPU Kubernetes Node Labeller for details. An example configuration is in k8s-ds-amdgpu-labeller.yaml:

kubectl create -f k8s-ds-amdgpu-labeller.yaml

or

kubectl create -f https://raw.githubusercontent.com/ROCm/k8s-device-plugin/master/k8s-ds-amdgpu-labeller.yaml

Notes

  • This plugin uses Go modules for dependency management
  • Please consult the Dockerfile for how to build and use this plugin independently of a Docker image

TODOs

  • Add a proper GPU health check (a health check that does not require /dev/kfd access)