Added s3 bucket configuration doc/example #1533

Merged 1 commit on Nov 8, 2019
14 changes: 4 additions & 10 deletions docs/developers/README.md
@@ -16,16 +16,10 @@ on path `/pvc` by PipelineRun.
adds a step to copy from PVC directory path:
`/pvc/previous_task/resource_name`.

Another alternative is to use a GCS storage bucket to share the artifacts. This
can be configured using a ConfigMap with the name `config-artifact-bucket` with
the following attributes:

- location: the address of the bucket (for example gs://mybucket)
- bucket.service.account.secret.name: the name of the secret that will contain
the credentials for the service account with access to the bucket
- bucket.service.account.secret.key: the key in the secret with the required
service account json. The bucket is recommended to be configured with a
retention policy after which files will be deleted.
Another alternative is to use a GCS or S3 storage bucket to share the artifacts.
This can be configured using a ConfigMap with the name `config-artifact-bucket`.

See [here](../install.md#how-are-resources-shared-between-tasks) for configuration details.

Both options provide the same functionality to the pipeline. The choice is based
on the infrastructure used, for example in some Kubernetes platforms, the
43 changes: 39 additions & 4 deletions docs/install.md
@@ -118,7 +118,8 @@ for more information_
### How are resources shared between tasks

Pipelines need a way to share resources between tasks. The alternatives are a
[Persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
[Persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/),
an [S3 bucket](https://aws.amazon.com/s3/),
or a [GCS storage bucket](https://cloud.google.com/storage/).

The PVC option can be configured using a ConfigMap with the name
@@ -127,11 +128,11 @@ The PVC option can be configured using a ConfigMap with the name
- `size`: the size of the volume (5Gi by default)
- `storageClassName`: the [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/) of the volume (default storage class by default). The possible values depend on the cluster configuration and the underlying infrastructure provider.
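
For illustration, a sketch of a ConfigMap overriding both attributes (the ConfigMap name `config-artifact-pvc` and the values shown are assumptions for this example, not confirmed by this diff):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-pvc   # assumed name of the PVC artifact ConfigMap
data:
  size: 10Gi                  # overrides the 5Gi default
  storageClassName: standard  # must match a StorageClass available in the cluster
```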

The GCS storage bucket can be configured using a ConfigMap with the name
The GCS storage bucket or the S3 bucket can be configured using a ConfigMap with the name
`config-artifact-bucket` with the following attributes:

- `location`: the address of the bucket (for example gs://mybucket)
- bucket.service.account.secret.name: the name of the secret that will contain
- `location`: the address of the bucket (for example gs://mybucket or s3://mybucket)
- `bucket.service.account.secret.name`: the name of the secret that will contain
the credentials for the service account with access to the bucket
- `bucket.service.account.secret.key`: the key in the secret with the required
service account JSON.
@@ -140,6 +141,40 @@ The GCS storage bucket can be configured using a ConfigMap with the name
- `bucket.service.account.field.name`: the name of the environment variable to use when specifying the
secret path. Defaults to `GOOGLE_APPLICATION_CREDENTIALS`. Set to `BOTO_CONFIG` if using S3 instead of GCS.

*Note:* When using an S3 bucket, the bucket must be located in the us-east-1 region.
This limitation comes from using [gsutil](https://cloud.google.com/storage/docs/gsutil) with a boto configuration
behind the scenes to access the S3 bucket.

A typical configuration for using an S3 bucket is shown below:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tekton-storage
type: Opaque
stringData:
  boto-config: |
    [Credentials]
    aws_access_key_id = AWS_ACCESS_KEY_ID
    aws_secret_access_key = AWS_SECRET_ACCESS_KEY
    [s3]
    host = s3.us-east-1.amazonaws.com
    [Boto]
    https_validate_certificates = True
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
data:
  location: s3://mybucket
  bucket.service.account.secret.name: tekton-storage
  bucket.service.account.secret.key: boto-config
  bucket.service.account.field.name: BOTO_CONFIG
```

Both options provide the same functionality to the pipeline. The choice is based
on the infrastructure used, for example in some Kubernetes platforms, the
creation of a persistent volume could be slower than uploading/downloading files