- Date:
- Contact:
What is your organization? (name, description, mission)
Do you consider your group or organization a "traditional" HPC site?
How many researchers does your organization have, or how many does your group support?
What areas of research are you primarily supporting using Kubernetes? (e.g. bioinformatics, physics, generic research platform)
Are your researchers consuming Kubernetes directly (deploying Kubernetes themselves, using kubectl, etc.), or do they consume services deployed on top of it?
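For context, "consuming Kubernetes directly" here means researchers talk to the API server themselves, via kubectl or a client library, rather than through a portal or service layer. A minimal sketch using the official Python client (the namespace is illustrative):

```python
# Sketch of direct API consumption with the official `kubernetes` Python
# client; the "default" namespace is just an example.
from kubernetes import client, config

config.load_kube_config()          # reads the user's local kubeconfig
core = client.CoreV1Api()

# List pods in a namespace, much like `kubectl get pods` would.
for pod in core.list_namespaced_pod("default").items:
    print(pod.metadata.name, pod.status.phase)
```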
Do you allow your users to deploy their own applications from the internet or other sources? If so, do you have some sort of vetting process?
What motivated you to explore Kubernetes?
Were you using containers previously before looking at Kubernetes?
How important is workload portability?
Are you using Kubernetes to support "new" applications or workloads, or are you migrating traditional workloads?
If you are migrating to Kubernetes, how are you planning to migrate your traditional workloads?
What sort of pain points have you experienced with this migration?
What have been the benefits of switching to Kubernetes, or what is it enabling you to do?
How are you handling training your users to work with Kubernetes?
What phase would you consider your Kubernetes environment is in? (e.g. Investigating, Development, Production)
How many Kubernetes clusters do you run? What size are they? (total number of nodes, total cores, total RAM)
Where are you running your clusters: on-prem, a cloud provider, or a mixture of both? If on-prem, are you running them on bare metal?
What version(s) of Kubernetes are you running?
What distribution of Kubernetes are you running (OpenShift, Rancher, etc.)?
What container runtime(s) are you using?
What CNI driver(s) are you using?
How are you provisioning your cluster(s)?
Do you provide persistent storage? If so, how?
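To make the question concrete: "persistent storage" here usually means PersistentVolumeClaims backed by a StorageClass (NFS, Ceph, a cloud volume, etc.). A hedged sketch, where the StorageClass name `shared-nfs`, the size, and the namespace are assumptions:

```python
# Sketch: request persistent storage via a PersistentVolumeClaim.
# The StorageClass name "shared-nfs" and the size are assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "research-data"},
    "spec": {
        "accessModes": ["ReadWriteMany"],             # shared across pods
        "storageClassName": "shared-nfs",             # site-specific assumption
        "resources": {"requests": {"storage": "100Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```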
If you are exposing cluster services externally, how? (service type LoadBalancer, Ingress, etc.)
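As a reference point for this question, a Service of type LoadBalancer and an Ingress are the two most common exposure mechanisms. A minimal, illustrative LoadBalancer Service (names, labels, and ports are placeholders):

```python
# Sketch: expose a workload externally with a Service of type LoadBalancer.
# The selector labels and ports are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "jupyterhub-public"},
    "spec": {
        "type": "LoadBalancer",                  # an Ingress is the alternative
        "selector": {"app": "jupyterhub"},
        "ports": [{"port": 443, "targetPort": 8443}],
    },
}
core.create_namespaced_service(namespace="default", body=service)
```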
In any of your clusters, are you using devices (e.g. GPUs)? If so, what are they?
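For example, device-plugin-managed GPUs are requested through extended resources such as `nvidia.com/gpu`. A minimal sketch; the image and resource name are assumptions that depend on your device plugin:

```python
# Sketch: a pod requesting one NVIDIA GPU via the device-plugin extended
# resource "nvidia.com/gpu". Image and names are illustrative only.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "gpu-smoke-test"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "cuda",
            "image": "nvidia/cuda:12.4.1-base-ubuntu22.04",   # example image
            "command": ["nvidia-smi"],
            "resources": {"limits": {"nvidia.com/gpu": "1"}},
        }],
    },
}
core.create_namespaced_pod(namespace="default", body=pod)
```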
What types of workloads are you deploying on top of Kubernetes? HPC, HTC, or services such as Argo or JupyterHub?
Does your group run other workload managers (Slurm, HTCondor, etc.)? If so, what size cluster(s) do you have supporting those other workload managers?
Are you considering running another workload manager on top of Kubernetes, or are you pursuing running those workloads natively on Kubernetes?
Have you integrated with more traditional environments (e.g. POSIX/parallel shared file systems)?
Do you run, or are you looking to run, restricted-data workloads (HIPAA, PCI, etc.)?
Is multi-tenancy important to you?
If you have a multi-tenant environment, what methods are you using to separate your workloads? (e.g. separate clusters / separate namespaces)
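For respondents using namespace-level separation, the usual building blocks are a namespace per tenant plus a ResourceQuota (and often NetworkPolicies). A hedged sketch with made-up tenant names and limits:

```python
# Sketch: namespace-per-tenant separation with a ResourceQuota.
# Tenant name and quota values are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

core.create_namespace({"apiVersion": "v1", "kind": "Namespace",
                       "metadata": {"name": "team-biolab"}})

quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "team-biolab-quota"},
    "spec": {"hard": {"requests.cpu": "200",
                      "requests.memory": "512Gi",
                      "requests.nvidia.com/gpu": "8"}},
}
core.create_namespaced_resource_quota(namespace="team-biolab", body=quota)
```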
How are you handling authentication and user management?
How are you handling access control? RBAC? ABAC? Additional tooling?
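As a concrete reference for the RBAC option: access is typically granted by binding a namespaced Role to a user or group coming from your identity provider. A minimal sketch; the group, namespace, and rule set are assumptions:

```python
# Sketch: namespaced RBAC - a read/submit Role bound to an IdP group.
# Group and namespace names are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "researcher", "namespace": "team-biolab"},
    "rules": [{
        "apiGroups": ["", "batch"],
        "resources": ["pods", "pods/log", "jobs"],
        "verbs": ["get", "list", "watch", "create", "delete"],
    }],
}
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "researcher-binding", "namespace": "team-biolab"},
    "subjects": [{"kind": "Group", "name": "biolab-users",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "researcher",
                "apiGroup": "rbac.authorization.k8s.io"},
}
rbac.create_namespaced_role(namespace="team-biolab", body=role)
rbac.create_namespaced_role_binding(namespace="team-biolab", body=binding)
```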
Have you explored multi-cluster or federated deployments? If you are actively using them, which approach did you take and which tools are you using?
(Here we use the word "workflow" to mean batch-style data analysis workflows, not business process workflows)
Do your researchers run workflows at your site?
Does your site support particular workflow technologies such as specific frameworks, runners, platforms, and/or standards?
In the future, do you plan to directly support particular workflow technologies such as specific frameworks, runners, platforms, and/or standards?
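As one concrete example of a "workflow technology": Argo Workflows defines a Workflow custom resource that can be submitted through the Kubernetes API. A hedged sketch, assuming the Argo Workflows controller and CRDs are already installed; all names, the namespace, and the image are illustrative:

```python
# Sketch: submitting an Argo Workflows "Workflow" custom resource.
# Assumes the Argo Workflows CRDs/controller are installed; all names,
# the namespace, and the container image are illustrative.
from kubernetes import client, config

config.load_kube_config()
crd = client.CustomObjectsApi()

workflow = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Workflow",
    "metadata": {"generateName": "hello-analysis-"},
    "spec": {
        "entrypoint": "analyze",
        "templates": [{
            "name": "analyze",
            "container": {
                "image": "python:3.12-slim",
                "command": ["python", "-c", "print('analysis step')"],
            },
        }],
    },
}
crd.create_namespaced_custom_object(
    group="argoproj.io", version="v1alpha1",
    namespace="argo", plural="workflows", body=workflow,
)
```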