Introduce workspaces to Pipelines. #1866

Merged
merged 1 commit on Jan 22, 2020
89 changes: 86 additions & 3 deletions docs/pipelineruns.md
@@ -270,9 +270,92 @@ internally generated persistent volume claims.

## Workspaces

It is not yet possible to specify [workspaces](tasks.md#workspaces) via `Pipelines`
or `PipelineRuns`, so `Tasks` requiring `workspaces` cannot be used with them until
[#1438](https://github.com/tektoncd/pipeline/issues/1438) is completed.
Workspaces allow PVC, emptyDir, ConfigMap and Secret volume sources to be
easily wired into `Tasks` and `Pipelines`.

For a `PipelineRun` to execute [a Pipeline that declares `workspaces`](pipelines.md#workspaces),
at runtime you need to map the `workspace` names to actual physical volumes.
This is managed through the `PipelineRun`'s `workspaces` field. Values in `workspaces` are
[`Volumes`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/);
however, only a subset of `VolumeSources` is currently supported:

_If you need support for a `VolumeSource` not listed here
[please open an issue](https://github.com/tektoncd/pipeline/issues) or feel free to
[contribute a PR](https://github.com/tektoncd/pipeline/blob/master/CONTRIBUTING.md)._

* [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
* [`persistentVolumeClaim`](https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim)
* [`configMap`](https://kubernetes.io/docs/concepts/storage/volumes/#configmap)
* [`secret`](https://kubernetes.io/docs/concepts/storage/volumes/#secret)

If the `workspaces` declared in the Pipeline are not provided at runtime, the `PipelineRun` will fail
with an error.

For example, to provide an existing PVC called `mypvc` for a `workspace` called
`myworkspace` declared by the `Pipeline`, using the `my-subdir` folder that already exists
on the PVC (there will be an error if it does not exist):

```yaml
workspaces:
- name: myworkspace
persistentVolumeClaim:
claimName: mypvc
subPath: my-subdir
```
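
For context, here is a minimal sketch of where that binding sits in a complete `PipelineRun`. The `Pipeline` name `mypipeline` and the `generateName` prefix are illustrative; the rest mirrors the binding above.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: mypipelinerun-
spec:
  pipelineRef:
    name: mypipeline # hypothetical Pipeline that declares a workspace named "myworkspace"
  workspaces:
  - name: myworkspace
    persistentVolumeClaim:
      claimName: mypvc
    subPath: my-subdir
```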

An [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) can also be used for
this with the following caveats:

1. An `emptyDir` volume type is not _shared_ amongst Tasks. Instead each Task will simply receive a
new `emptyDir` of its own from its underlying Pod.

```yaml
workspaces:
- name: myworkspace
emptyDir: {}
```

A [`configMap`](https://kubernetes.io/docs/concepts/storage/volumes/#configmap) can also be used
as a workspace with the following caveats:

1. ConfigMap volume sources are always mounted as read-only inside a task's
containers - tasks cannot write content to them and a step may error out
and fail the task if a write is attempted.
2. The ConfigMap you want to use as a workspace must already exist prior
to the TaskRun being submitted.
3. ConfigMaps have a [size limit of 1MB](https://github.com/kubernetes/kubernetes/blob/f16bfb069a22241a5501f6fe530f5d4e2a82cf0e/pkg/apis/core/validation/validation.go#L5042)

To use a [`configMap`](https://kubernetes.io/docs/concepts/storage/volumes/#configmap)
as a `workspace`:

```yaml
workspaces:
- name: myworkspace
  configMap:
name: my-configmap
```
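
The binding accepts the usual `ConfigMapVolumeSource` fields, so specific keys can be projected to chosen filenames with `items`. A sketch, with illustrative key and file names:

```yaml
workspaces:
- name: myworkspace
  configMap:
    name: my-configmap
    items:
    - key: my-key       # hypothetical key that exists in my-configmap
      path: my-file.txt # filename the key's content appears under inside the workspace
```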

A [`secret`](https://kubernetes.io/docs/concepts/storage/volumes/#secret) can also be used as a
workspace with the following caveats:

1. Secret volume sources are always mounted as read-only inside a task's
containers - tasks cannot write content to them and a step may error out
and fail the task if a write is attempted.
2. The Secret you want to use as a workspace must already exist prior
to the TaskRun being submitted.
3. Secrets have a [size limit of 1MB](https://github.com/kubernetes/kubernetes/blob/f16bfb069a22241a5501f6fe530f5d4e2a82cf0e/pkg/apis/core/validation/validation.go#L4933)

To use a [`secret`](https://kubernetes.io/docs/concepts/storage/volumes/#secret)
as a `workspace`:

```yaml
workspaces:
- name: myworkspace
secret:
secretName: my-secret
```
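
The referenced Secret must already exist (caveat 2 above). A minimal sketch of a Secret that the binding above could use; the name and data are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  password: aHVudGVyMg== # base64-encoded value; remember the 1MB Secret size limit
```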

_For a complete example see [workspaces.yaml](../examples/pipelineruns/workspaces.yaml)._

## Cancelling a PipelineRun

58 changes: 55 additions & 3 deletions docs/pipelines.md
@@ -75,9 +75,61 @@ spec:

### Declared Workspaces

It is not yet possible to specify [workspaces](tasks.md#workspaces) via `Pipelines`
or `PipelineRuns`, so `Tasks` requiring `workspaces` cannot be used with them until
[#1438](https://github.com/tektoncd/pipeline/issues/1438) is completed.
`workspaces` are a way of declaring volumes you expect to be made available to your
executing `Pipeline` and its `Task`s. They are similar to [`volumes`](#volumes) but
let you enforce at runtime that the volumes have been attached and
[specify subpaths](taskruns.md#workspaces) within those volumes.

Any `Pipeline` using a `Task` that declares a workspace will need to provide one at
runtime. Doing so requires two additions in a Pipeline:

1. The `Pipeline` will need to declare a list of `workspaces` that `PipelineRun`s will be
expected to provide. This is done with the `workspaces` field in the `Pipeline`'s spec.
Each entry in that list must have a unique name.
2. When a `Pipeline` refers to a `Task` that requires workspaces, one of the named workspaces
   from (1) must be provided, mapping the name the pipeline gives the workspace to the name
   the task expects.

In total this looks as follows:

```yaml
spec:
workspaces:
- name: pipeline-ws1 # The name of a workspace provided by PipelineRuns
tasks:
- name: use-ws-from-pipeline
taskRef:
name: gen-code # gen-code task expects a workspace be provided with name "output"
workspaces:
- name: output
workspace: pipeline-ws1
- name: use-ws-again
taskRef:
name: commit # commit task expects a workspace be provided with name "src"
workspaces:
- name: src
workspace: pipeline-ws1
```

This will tell Tekton to take whatever workspace is provided by the PipelineRun
with name "pipeline-ws1" and wire it into the "output" workspace expected by
the gen-code task. The same workspace will then also be wired into the "src" workspace
expected by the commit task. If the workspace provided by the PipelineRun is a
persistent volume claim then we have successfully shared files between the two tasks!
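
The `PipelineRun` side of this wiring is covered in [pipelineruns.md](pipelineruns.md#workspaces); as a rough sketch, assuming a Pipeline named `my-pipeline` containing the spec above and an existing PVC named `mypvc`:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: use-ws-pipelinerun-
spec:
  pipelineRef:
    name: my-pipeline # hypothetical name for the Pipeline shown above
  workspaces:
  - name: pipeline-ws1 # must match the workspace name declared by the Pipeline
    persistentVolumeClaim:
      claimName: mypvc # hypothetical pre-existing PVC
```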

#### Workspaces Don't Imply Task Ordering (Yet)

One use case for workspaces in `Pipeline`s is to provide a PVC to multiple `Task`s
and have one or more of them write to it before the others read from it. This kind of behaviour
relies on the order of the `Task`s - one writes, the next reads, and so on - but this
ordering is not currently enforced by Tekton. This means that `Task`s which write to a
PVC may run at the same time as `Task`s expecting to read that data. In the worst case
this can result in deadlock behaviour where the pods of multiple `Task`s are all attempting
to mount a PVC for writing at the same time.

To avoid this situation, `Pipeline` authors can explicitly declare the ordering of `Task`s
sharing a PVC-backed workspace by using the `runAfter` field. See [the section on
`runAfter`](#runAfter) for more information about using this field.
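
For example, if `gen-code` writes to the shared workspace and `commit` reads from it, the earlier snippet could be ordered like this sketch:

```yaml
tasks:
- name: use-ws-from-pipeline
  taskRef:
    name: gen-code
  workspaces:
  - name: output
    workspace: pipeline-ws1
- name: use-ws-again
  taskRef:
    name: commit
  runAfter:
  - use-ws-from-pipeline # gen-code must finish writing before commit reads the shared PVC
  workspaces:
  - name: src
    workspace: pipeline-ws1
```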

### Parameters

6 changes: 4 additions & 2 deletions docs/taskruns.md
@@ -232,17 +232,17 @@ at runtime you need to map the `workspaces` to actual physical volumes with
* [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
* [`persistentVolumeClaim`](https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim)
* [`configMap`](https://kubernetes.io/docs/concepts/storage/volumes/#configmap)
* [`secret`](https://kubernetes.io/docs/concepts/storage/volumes/#secret)

_If you need support for a `VolumeSource` not listed here
[please open an issue](https://github.com/tektoncd/pipeline/issues) or feel free to
[contribute a PR](https://github.com/tektoncd/pipeline/blob/master/CONTRIBUTING.md)._


If the declared `workspaces` are not provided at runtime, the `TaskRun` will fail
with an error.

For example, to provide an existing PVC called `mypvc` for a `workspace` called
`myworkspace` declared by the `Pipeline`, using the `my-subdir` folder which already exists
`myworkspace` declared by the `Task`, using the `my-subdir` folder which already exists
on the PVC (there will be an error if it does not exist):

@@ -268,6 +268,7 @@ containers - tasks cannot write content to them
and fail the task if a write is attempted.
2. The ConfigMap you want to use as a workspace must already exist prior
to the TaskRun being submitted.
3. ConfigMaps have a [size limit of 1MB](https://github.com/kubernetes/kubernetes/blob/f16bfb069a22241a5501f6fe530f5d4e2a82cf0e/pkg/apis/core/validation/validation.go#L5042)

To use a [`configMap`](https://kubernetes.io/docs/concepts/storage/volumes/#configmap)
as a `workspace`:
@@ -286,6 +287,7 @@ containers - tasks cannot write content to them
and fail the task if a write is attempted.
2. The Secret you want to use as a workspace must already exist prior
to the TaskRun being submitted.
3. Secrets have a [size limit of 1MB](https://github.com/kubernetes/kubernetes/blob/f16bfb069a22241a5501f6fe530f5d4e2a82cf0e/pkg/apis/core/validation/validation.go#L4933)

To use a [`secret`](https://kubernetes.io/docs/concepts/storage/volumes/#secret)
as a `workspace`:
136 changes: 136 additions & 0 deletions examples/pipelineruns/workspaces.yaml
@@ -0,0 +1,136 @@
# In this contrived example 3 different kinds of workspace volume are used to thread
# data through a pipeline's tasks.
# 1. A ConfigMap is used as source of recipe data.
# 2. A Secret is used to store a password.
# 3. A PVC is used to share data from one task to the next.
#
# The end result is a pipeline that first checks if the password is correct and, if so,
# copies data out of a recipe store onto a shared volume. The recipe data is then read
# by a subsequent task and printed to screen.
apiVersion: v1
kind: ConfigMap
metadata:
name: sensitive-recipe-storage
data:
brownies: |
1. Heat oven to 325 degrees F
2. Melt 1/2 cup butter w/ 1/2 cup cocoa, stirring smooth.
3. Remove from heat, allow to cool for a few minutes.
4. Transfer to bowl.
5. Whisk in 2 eggs, one at a time.
6. Stir in vanilla.
7. Separately combine 1 cup sugar, 1/4 cup flour, 1 cup chopped
walnuts and pinch of salt
8. Combine mixtures.
9. Bake in greased pan for 30 minutes. Watch carefully for
appropriate level of gooeyness.
---
apiVersion: v1
kind: Secret
metadata:
name: secret-password
type: Opaque
data:
password: aHVudGVyMg==
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: shared-task-storage
spec:
resources:
requests:
storage: 16Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
---
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
name: fetch-secure-data
spec:
workspaces:
- name: super-secret-password
- name: secure-store
- name: filedrop
steps:
- name: fetch-and-write
image: ubuntu
script: |
if [ "hunter2" = "$(cat $(workspaces.super-secret-password.path)/password)" ]; then
cp $(workspaces.secure-store.path)/recipe.txt $(workspaces.filedrop.path)
else
echo "wrong password!"
exit 1
fi
---
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
name: print-data
spec:
workspaces:
- name: storage
readOnly: true
inputs:
params:
- name: filename
steps:
- name: print-secrets
image: ubuntu
script: cat $(workspaces.storage.path)/$(inputs.params.filename)
---
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
name: fetch-and-print-recipe
spec:
workspaces:
- name: password-vault
- name: recipe-store
- name: shared-data
tasks:
- name: fetch-the-recipe
taskRef:
name: fetch-secure-data
workspaces:
- name: super-secret-password
workspace: password-vault
- name: secure-store
workspace: recipe-store
- name: filedrop
workspace: shared-data
- name: print-the-recipe
taskRef:
name: print-data
# Note: this is currently required to ensure order of write / read on PVC is correct.
runAfter:
- fetch-the-recipe
params:
- name: filename
value: recipe.txt
workspaces:
- name: storage
workspace: shared-data
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
generateName: recipe-time-
spec:
pipelineRef:
name: fetch-and-print-recipe
workspaces:
- name: password-vault
secret:
secretName: secret-password
- name: recipe-store
configMap:
name: sensitive-recipe-storage
items:
- key: brownies
path: recipe.txt
- name: shared-data
persistentVolumeClaim:
claimName: shared-task-storage
9 changes: 9 additions & 0 deletions pkg/apis/pipeline/v1alpha1/pipeline_types.go
@@ -32,6 +32,10 @@ type PipelineSpec struct {
// Params declares a list of input parameters that must be supplied when
// this Pipeline is run.
Params []ParamSpec `json:"params,omitempty"`
// Workspaces declares a set of named workspaces that are expected to be
// provided by a PipelineRun.
// +optional
Workspaces []WorkspacePipelineDeclaration `json:"workspaces,omitempty"`
}

// Check that Pipeline may be validated and defaulted.
@@ -123,6 +127,11 @@ type PipelineTask struct {
// Parameters declares parameters passed to this task.
// +optional
Params []Param `json:"params,omitempty"`

// Workspaces maps workspaces from the pipeline spec to the workspaces
// declared in the Task.
// +optional
Workspaces []WorkspacePipelineTaskBinding `json:"workspaces,omitempty"`
}

func (pt PipelineTask) HashKey() string {