Put Tekton OCI bundles behind a feature flag 🎏
In order to expose Tekton OCI bundles less and still mark them as
"alpha", let's put their usage behind a feature flag. This will allow
us to experiment with, refactor and enhance them without having to
fully support them or expose them too much to the end user.

Enabling the feature gate is an explicit choice for the user and is
documented to make sure users understand this is subject to change.

This adds a new feature-flags field called "enable-tekton-oci-bundles"
that defaults to false.
If the feature flag is off, Tekton OCI bundles are not usable: the
admission controller will disallow their usage, and the controller
will not take the bundle field into account.

Note: the e2e tests will be skipped on CI temporarily because we do
not have the *framework* in place to switch feature flags during
tests. This will be worked out in parallel.

Signed-off-by: Vincent Demeester <[email protected]>
vdemeester committed Nov 5, 2020
1 parent 7b5b2fa commit bf3749d
Showing 18 changed files with 386 additions and 254 deletions.
4 changes: 4 additions & 0 deletions config/config-feature-flags.yaml
@@ -78,3 +78,7 @@ data:
# See https://github.com/tektoncd/pipeline/issues/2981 for more
# info.
require-git-ssh-secret-known-hosts: "false"
# Setting this flag to "true" enables the use of Tekton OCI bundles.
# This is an experimental feature and thus should still be considered
# an alpha feature.
enable-tekton-oci-bundles: "false"
16 changes: 12 additions & 4 deletions docs/install.md
@@ -264,7 +264,7 @@ data:
## Configuring self-signed cert for private registry
The `SSL_CERT_DIR` is set to `/etc/ssl/certs` as the default cert directory. If you are using a self-signed cert for private registry and the cert file is not under the default cert directory, configure your registry cert in the `config-registry-cert` `ConfigMap` with the key `cert`.
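
A sketch of such a `ConfigMap` (the namespace is assumed to be the default `tekton-pipelines` installation namespace, and the certificate content is a placeholder):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-registry-cert
  namespace: tekton-pipelines
data:
  # PEM-encoded certificate of the private registry.
  cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```
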
## Customizing basic execution parameters
@@ -303,7 +303,7 @@ file lists the keys you can customize along with their default values.
To customize the behavior of the Pipelines Controller, modify the ConfigMap `feature-flags` as follows:

- `disable-affinity-assistant` - set this flag to `true` to disable the [Affinity Assistant](./workspaces.md#specifying-workspace-order-in-a-pipeline-and-affinity-assistants)
that is used to provide Node Affinity for `TaskRun` pods that share workspace volume.
The Affinity Assistant is incompatible with other affinity rules
configured for `TaskRun` pods.

@@ -326,7 +326,7 @@ for each `Step` that does not have its working directory explicitly set with `/w
For more information, see the [associated issue](https://github.com/tektoncd/pipeline/issues/1836).

- `running-in-environment-with-injected-sidecars`: set this flag to `"true"` to allow the
Tekton controller to set the `tekton.dev/ready` annotation at pod creation time for
TaskRuns with no Sidecars specified. Enabling this option should decrease the time it takes for a TaskRun to
start running. However, for clusters that use injected sidecars (e.g. Istio),
enabling this option can lead to unexpected behavior.
@@ -335,7 +335,15 @@ enabling this option can lead to unexpected behavior.
Git SSH Secrets include a `known_hosts` field. This ensures that a git remote server's
key is validated before data is accepted from it when authenticating over SSH. Secrets
that don't include a `known_hosts` will result in the TaskRun failing validation and
not running.

- `enable-tekton-oci-bundles`: set this flag to `"true"` to enable the
use of Tekton OCI bundles (see [the Tekton bundle
contract](./tekton-bundle-contracts.md)). Enabling this option
allows the use of the `bundle` field in `taskRef` and `pipelineRef` for
`Pipeline`, `PipelineRun` and `TaskRun`. By default, this option is
disabled (`"false"`), which means the `bundle` field is disallowed.

For example:
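
A minimal sketch of the `feature-flags` `ConfigMap` with the new flag turned on (the default `tekton-pipelines` namespace is assumed, and the other flag keys are omitted for brevity):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  # Opt in to the alpha Tekton OCI bundle support.
  enable-tekton-oci-bundles: "true"
```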

57 changes: 31 additions & 26 deletions docs/pipelineruns.md
@@ -60,7 +60,7 @@ A `PipelineRun` definition supports the following fields:
object that supplies specific execution credentials for the `Pipeline`.
- [`serviceAccountNames`](#mapping-serviceaccount-credentials-to-tasks) - Maps specific `serviceAccountName` values
to `Tasks` in the `Pipeline`. This overrides the credentials set for the entire `Pipeline`.
- [`taskRunSpec`](#specifying-task-run-specs) - Specifies a list of `PipelineRunTaskSpec` which allows for setting `ServiceAccountName` and [`Pod` template](./podtemplates.md) for each task. This overrides the `Pod` template set for the entire `Pipeline`.
- [`timeout`](#configuring-a-failure-timeout) - Specifies the timeout before the `PipelineRun` fails.
- [`podTemplate`](#pod-template) - Specifies a [`Pod` template](./podtemplates.md) to use as the basis
for the configuration of the `Pod` that executes each `Task`.
@@ -70,7 +70,7 @@ A `PipelineRun` definition supports the following fields:

### Specifying the target `Pipeline`

You must specify the target `Pipeline` that you want the `PipelineRun` to execute, either by referencing
an existing `Pipeline` definition, or embedding a `Pipeline` definition directly in the `PipelineRun`.

To specify the target `Pipeline` by reference, use the `pipelineRef` field:
@@ -81,22 +81,6 @@ spec:
name: mypipeline

```

You may also use a `Tekton Bundle` to reference a pipeline defined remotely.

```yaml
spec:
  pipelineRef:
    name: mypipeline
    bundle: docker.io/myrepo/mycatalog:v1.0
```
The syntax and caveats are similar to using `Tekton Bundles` for `Task` references
in [Pipelines](pipelines.md#tekton-bundles) or [TaskRuns](taskruns.md#tekton-bundles).

`Tekton Bundles` may be constructed with any toolsets that produce valid OCI image artifacts
so long as the artifact adheres to the [contract](tekton-bundle-contracts.md).

To embed a `Pipeline` definition in the `PipelineRun`, use the `pipelineSpec` field:

```yaml
@@ -156,6 +140,27 @@ spec:
...
```

#### Tekton Bundles

**Note: This is only allowed if `enable-tekton-oci-bundles` is set to
`"true"` in the `feature-flags` ConfigMap; see [`install.md`](./install.md#customizing-the-pipelines-controller-behavior).**

You may also use a `Tekton Bundle` to reference a pipeline defined remotely.

```yaml
spec:
  pipelineRef:
    name: mypipeline
    bundle: docker.io/myrepo/mycatalog:v1.0
```

The syntax and caveats are similar to using `Tekton Bundles` for `Task` references
in [Pipelines](pipelines.md#tekton-bundles) or [TaskRuns](taskruns.md#tekton-bundles).

`Tekton Bundles` may be constructed with any toolsets that produce valid OCI image artifacts
so long as the artifact adheres to the [contract](tekton-bundle-contracts.md).
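
For illustration, a complete `PipelineRun` resolving its `Pipeline` from a bundle might look like the following sketch (the `PipelineRun` name, pipeline name and image reference are placeholders):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: mypipelinerun-from-bundle
spec:
  pipelineRef:
    # metadata.name of the Pipeline stored inside the bundle.
    name: mypipeline
    # Full reference to the OCI artifact containing the Pipeline.
    bundle: docker.io/myrepo/mycatalog:v1.0
```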


## Specifying `Resources`

A `Pipeline` requires [`PipelineResources`](resources.md) to provide inputs and store outputs
@@ -223,7 +228,7 @@ to all `persistentVolumeClaims` generated internally.
You can specify `Parameters` that you want to pass to the `Pipeline` during execution,
including different values of the same parameter for different `Tasks` in the `Pipeline`.

**Note:** You must specify all the `Parameters` that the `Pipeline` expects. Parameters
that have default values specified in the `Pipeline` are not required to be provided by the `PipelineRun`.

For example:
@@ -236,14 +241,14 @@ spec:
- name: pl-param-y
value: "500"
```
You can pass in extra `Parameters` if needed depending on your use cases. An example use
case is when your CI system autogenerates `PipelineRuns` and it has `Parameters` it wants to
provide to all `PipelineRuns`. Because you can pass in extra `Parameters`, you don't have to
go through the complexity of checking each `Pipeline` and providing only the required params.

### Specifying custom `ServiceAccount` credentials

You can execute the `Pipeline` in your `PipelineRun` with a specific set of credentials by
specifying a `ServiceAccount` object name in the `serviceAccountName` field in your `PipelineRun`
definition. If you do not explicitly specify this, the `TaskRuns` created by your `PipelineRun`
will execute with the credentials specified in the `configmap-defaults` `ConfigMap`. If this
@@ -256,7 +261,7 @@ For more information, see [`ServiceAccount`](auth.md).

If you require more granularity in specifying execution credentials, use the `serviceAccountNames` field to
map a specific `serviceAccountName` value to a specific `Task` in the `Pipeline`. This overrides the global
`serviceAccountName` you may have set for the `Pipeline` as described in the previous section.

For example, if you specify these mappings:
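
A sketch of such a mapping (the `ServiceAccount` and `Task` names are illustrative); here `build-task` runs with `sa-for-build` while every other `Task` falls back to `sa-1`:

```yaml
spec:
  serviceAccountName: sa-1
  serviceAccountNames:
    - taskName: build-task
      serviceAccountName: sa-for-build
```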

@@ -358,7 +363,7 @@ spec:
disktype: ssd
```

If used with this `Pipeline`, `build-task` will use the task specific `PodTemplate` (where `nodeSelector` has `disktype` equal to `ssd`).

### Specifying `Workspaces`

@@ -474,7 +479,7 @@ When a `PipelineRun` changes status, [events](events.md#pipelineruns) are triggered

When a `PipelineRun` has `Tasks` with [WhenExpressions](pipelines.md#guard-task-execution-using-whenexpressions):
- If the `WhenExpressions` evaluate to `true`, the `Task` is executed, and the `TaskRun` and its resolved `WhenExpressions` will be listed in the `Task Runs` section of the `status` of the `PipelineRun`.
- If the `WhenExpressions` evaluate to `false`, the `Task` is skipped, and its name and its resolved `WhenExpressions` will be listed in the `Skipped Tasks` section of the `status` of the `PipelineRun`.

```yaml
Conditions:
41 changes: 22 additions & 19 deletions docs/pipelines.md
@@ -45,7 +45,7 @@ A `Pipeline` definition supports the following fields:
- [`metadata`][kubernetes-overview] - Specifies metadata that uniquely identifies the
`Pipeline` object. For example, a `name`.
- [`spec`][kubernetes-overview] - Specifies the configuration information for
this `Pipeline` object. This must include:
- [`tasks`](#adding-tasks-to-the-pipeline) - Specifies the `Tasks` that comprise the `Pipeline`
and the details of their execution.
- Optional:
@@ -61,7 +61,7 @@ A `Pipeline` definition supports the following fields:
execution of a `Task` after a failure. Does not apply to execution cancellations.
- [`conditions`](#guard-task-execution-using-conditions) - Specifies `Conditions` that only allow a `Task`
to execute if they successfully evaluate.
- [`timeout`](#configuring-the-failure-timeout) - Specifies the timeout before a `Task` fails.
- [`results`](#configuring-execution-results-at-the-pipeline-level) - Specifies the location to which
the `Pipeline` emits its execution results.
- [`description`](#adding-a-description) - Holds an informative description of the `Pipeline` object.
@@ -136,7 +136,7 @@ varies throughout its execution. If no value is specified, the `type` field defa
When the actual parameter value is supplied, its parsed type is validated against the `type` field.
The `description` and `default` fields for a `Parameter` are optional.

The following example illustrates the use of `Parameters` in a `Pipeline`.

The following `Pipeline` declares an input parameter called `context` and passes its
value to the `Task` to set the value of the `pathToContext` parameter within the `Task`.
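
A sketch of such a `Pipeline` (only the `context` and `pathToContext` parameter names come from the text above; the other names are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-with-parameters
spec:
  params:
    - name: context
      type: string
      description: Path to the build context
      default: /some/where/or/other
  tasks:
    - name: build-image
      taskRef:
        name: build-push
      params:
        - name: pathToContext
          # The Pipeline-level `context` parameter is passed down to the Task.
          value: "$(params.context)"
```
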
@@ -231,6 +231,9 @@ spec:

### Tekton Bundles

**Note: This is only allowed if `enable-tekton-oci-bundles` is set to
`"true"` in the `feature-flags` ConfigMap; see [`install.md`](./install.md#customizing-the-pipelines-controller-behavior).**

You may also specify your `Task` reference using a `Tekton Bundle`. A `Tekton Bundle` is an OCI artifact that
contains Tekton resources like `Tasks` which can be referenced within a `taskRef`.

@@ -244,7 +247,7 @@ contains Tekton resources like `Tasks` which can be referenced within a `taskRef
```

Here, the `bundle` field is the full reference URL to the artifact. The name is the
`metadata.name` field of the `Task`.

You may also specify a `tag` as you would with a Docker image which will give you a fixed,
repeatable reference to a `Task`.
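
A sketch of a tagged reference (the repository and tag are illustrative):

```yaml
spec:
  tasks:
    - name: hello-world
      taskRef:
        name: echo-task
        bundle: docker.io/myrepo/mycatalog:v1.0.1
```
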
@@ -263,7 +266,7 @@ You may also specify a fixed digest instead of a tag.
```yaml
spec:
  tasks:
    - name: hello-world
      taskRef:
        name: echo-task
        bundle: docker.io/myrepo/mycatalog@sha256:abc123
@@ -282,14 +285,14 @@ so long as the artifact adheres to the [contract](tekton-bundle-contracts.md).

If a `Task` in your `Pipeline` needs to use the output of a previous `Task`
as its input, use the optional `from` parameter to specify a list of `Tasks`
that must execute **before** the `Task` that requires their outputs as its
input. When your target `Task` executes, only the version of the desired
`PipelineResource` produced by the last `Task` in this list is used. The
`name` of this output `PipelineResource` output must match the `name` of the
input `PipelineResource` specified in the `Task` that ingests it.

In the example below, the `deploy-app` `Task` ingests the output of the `build-app`
`Task` named `my-image` as its input. Therefore, the `build-app` `Task` will
execute before the `deploy-app` `Task` regardless of the order in which those
`Tasks` are declared in the `Pipeline`.

@@ -377,11 +380,11 @@ The components of `WhenExpressions` are `Input`, `Operator` and `Values`:
- `Operator` represents an `Input`'s relationship to a set of `Values`. A valid `Operator` must be provided, which can be either `in` or `notin`.
- `Values` is an array of string values. The `Values` array must be provided and be non-empty. It can contain static values or variables ([`Parameters`](#specifying-parameters), [`Results`](#using-results) or [a Workspace's `bound` state](#specifying-workspaces)).

The [`Parameters`](#specifying-parameters) are read from the `Pipeline` and [`Results`](#using-results) are read directly from previous [`Tasks`](#adding-tasks-to-the-pipeline). Using [`Results`](#using-results) in a `WhenExpression` in a guarded `Task` introduces a resource dependency on the previous `Task` that produced the `Result`.

The declared `WhenExpressions` are evaluated before the `Task` is run. If all the `WhenExpressions` evaluate to `True`, the `Task` is run. If any of the `WhenExpressions` evaluate to `False`, the `Task` is not run and the `Task` is listed in the [`Skipped Tasks` section of the `PipelineRunStatus`](pipelineruns.md#monitoring-execution-status).

In these examples, `first-create-file` task will only be executed if the `path` parameter is `README.md`, `echo-file-exists` task will only be executed if the `exists` result from `check-file` task is `yes` and `run-lint` task will only be executed if the `lint-config` optional workspace has been provided by a PipelineRun.

```yaml
tasks:
@@ -426,7 +429,7 @@ There are a lot of scenarios where `WhenExpressions` can be really useful. Some

### Guard `Task` execution using `Conditions`

**Note:** `Conditions` are deprecated, use [`WhenExpressions`](#guard-task-execution-using-whenexpressions) instead.

To run a `Task` only when certain conditions are met, it is possible to _guard_ task execution using
the `conditions` field. The `conditions` field allows you to list a series of references to
@@ -450,17 +453,17 @@ tasks:
name: deploy
```

Unlike regular task failures, condition failures do not automatically fail the entire `PipelineRun` --
other tasks that are **not dependent** on the `Task` (via `from` or `runAfter`) are still run.

In this example, `(task C)` has a `condition` set to _guard_ its execution. If the condition
is **not** successfully evaluated, task `(task D)` will not be run, but all other tasks in the pipeline
that do not depend on `(task C)` will be executed and the `PipelineRun` will successfully complete.

```
        (task B) — (task E)
       /
(task A)
       \
        (guarded task C) — (task D)
```
@@ -495,7 +498,7 @@ tasks:
You can use the `Timeout` field in the `Task` spec within the `Pipeline` to set the timeout
of the `TaskRun` that executes that `Task` within the `PipelineRun` that executes your `Pipeline.`
The `Timeout` value is a `duration` conforming to Go's [`ParseDuration`](https://golang.org/pkg/time/#ParseDuration)
format. For example, valid values are `1h30m`, `1h`, `1m`, and `60s`.

**Note:** If you do not specify a `Timeout` value, Tekton instead honors the timeout for the [`PipelineRun`](pipelineruns.md#configuring-a-pipelinerun).
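
A sketch of a per-`Task` timeout (the `Task` and `taskRef` names are illustrative):

```yaml
spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      # Fail the TaskRun for this Task if it takes longer than 1m30s.
      timeout: "0h1m30s"
```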

@@ -528,7 +531,7 @@ a `Result` and another receives it as a `Parameter` with a variable such as
When one `Task` receives the `Results` of another, there is a dependency created between those
two `Tasks`. In order for the receiving `Task` to get data from another `Task's` `Result`,
the `Task` producing the `Result` must run first. Tekton enforces this `Task` ordering
by ensuring that the `Task` emitting the `Result` executes before any `Task` that uses it.

In the snippet below, a param is provided its value from the `commit` `Result` emitted by the
`checkout-source` `Task`. Tekton will make sure that the `checkout-source` `Task` runs
before the `Task` that consumes that `Result`.
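
A sketch of such a snippet (the consuming `Task` and the `foo` parameter name are illustrative):

```yaml
tasks:
  - name: print-commit
    taskRef:
      name: print-commit
    params:
      - name: foo
        # Consuming this Result makes print-commit wait for checkout-source.
        value: "$(tasks.checkout-source.results.commit)"
```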