[EP] Remote Artifacts #419
Conversation
I like the idea, especially for large external assets you do not want or cannot have in a Git repository.
docs/proposals/remote-artifacts.md (Outdated)

```yml
artifacts:
  - name: main-jar
    url: https://nexus.company.com/app/main.jar
    path: $(workspace)/classpath/main.jar
```
Should we maybe consider extending the source section instead?
```yml
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: nexus-jar
spec:
  sources:
    # A source of type=git must only exist once or be absent
    - type: git
      url: [email protected]:SCHWARZS/test-project.git
      revision: sascha-test
      credentials:
        name: ibm-github
    # A source of type=http can exist multiple times
    - type: http
      url: https://nexus.company.com/app/main.jar
      # Not sure if I would allow values that do not start with ${workspace} here,
      # but instead always have that as prefix
      path: classpath/main.jar
    # Credentials support for whatever we want; start with basic auth only,
    # client-cert maybe in the future; we will need to determine it based on how the secret looks
    - type: http
      url: https://restricted.server.com/lib.jar
      credentials:
        name: restricted-server
      path: classpath/lib.jar
    # A source of type=http can have an optional extract flag set to true, which means that
    # path is interpreted as a directory instead of a file and the file content is extracted
    # (TBD: can we autodetect the file type, or do we need to make extract an enum or add
    # another compressiontype parameter?)
    - type: http
      url: https://www.myserver.com/sources.zip
      extract: true
      path: sources
```
This would be an extensible mechanism. In the future we might support other types (loading from maybe some configmap where the cli placed the local sources).
We can introduce glue code to autodetect the type based on the URL, keeping the whole `spec.source` as simple as it is now.
I think that would be hard. An HTTPS URL can be a public Git repository (especially as we do not require the .git
suffix at the moment), a SubVersion repository, a plain file and probably more.
Given the distinction between the types of data, one being VCS and the other remote artifacts, I think they should be described in different logical blocks. That way we also have a clear separation between those two concerns.
@otaviof a VCS resource is also remote. There is "just" a different protocol to download them.
This would be an extensible mechanism. In the future we might support other types (loading from maybe some configmap where the cli placed the local sources).
`ConfigMap` is certainly a valid use case, though with a cli upload I see PersistentVolume/PVC being a more likely result.
Another thing we should clarify is that "sources" are things that are intended to be present in the resulting image. There is a separate use case for "volume" support (like caches), in which case the data present in the volume is not included in the resulting container image.
```yml
spec:
  sources:
    # A source of type=git must only exist once or be absent
    - type: git
      url: [email protected]:SCHWARZS/test-project.git
      revision: sascha-test
      credentials:
        name: ibm-github
```
Wouldn't we want to employ the discriminator pattern?
```yml
spec:
  sources:
    - name: git
      type: Git
      git:
        url: ...
  ...
```

(note that `type: Git` may not be required, apimachinery can fill it in).
Same line of thought, I see a VCS as yet another remote artifact. I'm not able to understand the distinction.
Please consider the latest commit, I'm adding an initial `.spec.sources` support with your inputs.
@otaviof thanks, I took a look.

> `http`: settings for `http`, namely `path` (optional);

Is the new field from above required? We already have something called `contextDir`, can this serve the same purpose?
@adambkaplan @SaschaSchwarze0 @otaviof I have some concerns around `spec.sources`. First of all, I think introducing a `type` is a nice way to distinguish the different assets (remote files, git, etc.). I'm struggling to understand the value of moving from a map (`spec.source`) to a list (`spec.sources`). My rationale is:

- This proposal does not explain the use cases for why a list is needed.
- This will break the current API.
- `spec.source` is for me one of the structs that is very simple to understand, and I would like to keep it this way. Adding support for a list of sources is going to add complexity at that API level.
- Supporting `spec.sources` will introduce some code complexity when generating the TaskRun. It would be good to highlight that and decide if it is worth the effort.

@sbose78 we probably want your feedback as well. Also, do we have some good references on how Build v1 did it? It shouldn't bias us, but it will help to understand the rationale there.
Checking soon... (just in case someone was wondering)
Added some comments, I think I have doubts on:

- The concepts you define here, mainly "remote artifacts support", because we already do this (git repos).
- It is not clear how to implement this, either via a new ClusterBuildStrategy or an extension of `spec.source`; I would prefer the second one.
- The retrieval of the external artifact proposes a `wget` binary; I think we should propose a Golang implementation of HTTP GET or reuse something from Tekton.
- It is not clear how an "artifact" would be able to run. There should be a dependency called "base image for the artifact support", which is not mentioned here.

I love the idea, this is a great feature, but there are still a lot of details to properly define.
> happen before the build process starts. To achieve the objective, the operator will generate a new task step to download the artifacts.
>
> The download may happen using existing open-source software, more specifically `wget`. The …
The "download" step looks critical. Do we get this for free from Tekton? Wouldn't it make more sense to do this natively in Golang, or to see if Tekton already has something for us? E.g. they do not use a `git` binary for pulling, they have their own git pkg.
I understand what Tekton does for the VCS (source-code), but what would it do for remote artifacts? Could we use Tekton to download them as well?
Ideally there would be a Tekton resource handling things like plain HTTP(S) download. But, I think there is not one.
It does not appear such a resource type exists. If we accept this proposal, I recommend submitting a TEP to see if we can get upstream Tekton buy-in.
I was mainly referring to doing a native implementation in Go, I would like to avoid introducing extra binary dependencies in the code. We might not need Tekton at all.
I think we all agree this is not enforcing the usage of `wget`, and that another more native implementation (e.g. the Tekton way) might be reused. Can we mention this somewhere?
Can we leverage the Tekton source to enable storage or something first? https://github.com/tektoncd/pipeline/blob/master/docs/resources.md#resource-types

I think `wget` is a little weak. How about the private resource (with auth) or ftp?
`PipelineResources` at this time are most likely not going to get out of alpha stage. There has been a fair amount of consternation and churn around exactly how to move forward, though.

But minimally I would not advise a `PipelineResource`-styled path.

That said, if I had to pick, I'd say "custom tasks" might be the most likely path along the "tekton resource" angle. Also, submitting something "similar" but different most likely would get the response of "look at custom tasks".
See https://github.com/tektoncd/community/blob/master/teps/0002-custom-tasks.md for details.
there is also a "specializing task" which has some literature around it but no TEP as of yet
Yeah, a specific resource in upstream Tekton would be a hard sell at the moment given the direction `PipelineResources` is taking.
In general, I like the idea of moving from one `source` to allowing multiple `sources`, where `git` and `http` (the remote artifact) are among many possible types of sources.
However, in allowing this complexity we need to keep the simple cases simple. We can't require a significant amount of extra configuration for a user to create a simple build from source using kaniko, s2i, buildpacks, etc. Example - I would expect a build as follows to "just work":
```yml
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: simple
spec:
  sources:
    - name: src
      type: Git
      git:
        url: https://github.com/shipwright-io/build.git
  ...
```
@otaviof sorry for my late reply, I dropped you some updates on my initial comments.
- rewording to clarify what remote artifact means;
- using `sources` and expanding the input source concept;
- updating all examples;
I have two questions about the type of source. For a source of `type: git`, do we need an inner `git` section to mention the git again? I think we can simplify to use `url` directly. For `http`, I think it is similar: it is already the `http` type, but it still has another inner `http` section.
> The `source` section will be renamed to `sources` in order to accommodate more `type`s of inputs. Each entry will be composed of the following attributes:
>
> - `name`: source name (required);
I feel like name is superfluous here, what is the point of having it?
This is so we can use the `merge` patch strategy. Otherwise `kubectl apply` will default to overwriting everything in the `sources` array.
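For illustration, a minimal sketch of how merge-by-name semantics could be declared, assuming the Build CRD schema uses the Kubernetes list-type markers (this schema excerpt is hypothetical and not part of the proposal):

```yml
# Hypothetical excerpt of the Build CRD OpenAPI schema: declaring
# spec.sources as a map-type list keyed by "name" lets server-side apply
# merge individual entries instead of replacing the whole array.
sources:
  type: array
  x-kubernetes-list-type: map
  x-kubernetes-list-map-keys:
    - name
  items:
    type: object
    required:
      - name
      - type
    properties:
      name:
        type: string
      type:
        type: string
      url:
        type: string
```

With such a schema, entries coming from different patches or field managers are merged per `name` rather than the whole array being overwritten.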
docs/proposals/remote-artifacts.md (Outdated)

> - `sourceRef`: use a external resource, the name in this field is the resource name (optional);
> - `url`: the URL of the repository or remote artifact (optional);
> - `credentials`: the credentials to interact with the artifact (optional);
> - `http`: settings for `http`, namely `path` (optional);
What is `path` for here? Is this the local path that the file will be downloaded to? Why is it only available for the "http" type? Seems like `path` should be a top level element.
I've amended to explain `.path` is an inner attribute, please consider the latest changes.
docs/proposals/remote-artifacts.md (Outdated)

> - `url`: the URL of the repository or remote artifact (optional);
> - `credentials`: the credentials to interact with the artifact (optional);
> - `http`: settings for `http`, namely `path` (optional);
> - `git`: settings for `git`, namely `revision` (optional);
Could we eliminate `revision` and just use standard git paths that reference a revision in the `url`?
I'm rewording this to mention it stays as original.
> - `name`: source name (required);
> - `type`: input source type, `git` or `http` (required);
> - `sourceRef`: use a external resource, the name in this field is the resource name (optional);
Can you expand upon this field's usage?
I'm moving the field to the new "Alternatives" section at the end.
> The download may happen using existing open-source software, more specifically `wget`.
I would prefer to see us use pure golang for the downloading instead of an external tool such as wget, I don't like having to rely on OS-provided tools.
I like the idea from Corey. We will eventually face a Build which has a longer list of sources. For all these sources besides the git source (which we will probably want to continue to handle using Tekton's Git resource), I envision one Task step implemented by ourselves using Golang. Within our code we can then do optimizations like parallel downloads for different sources, and also handle and report errors in a better way than by invoking an external command.
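A minimal sketch of what such a single, self-implemented download step could look like in the generated TaskRun (the image name and argument format are hypothetical, purely for illustration):

```yml
# Hypothetical step generated by the controller: one self-built downloader
# image receives all non-git sources and can fetch them in parallel.
steps:
  - name: sources-download
    image: quay.io/shipwright/sources-downloader:latest  # hypothetical image
    args:
      - --source
      - name=main-jar,url=https://nexus.company.com/app/main.jar,path=$(workspace)/classpath/main.jar
      - --source
      - name=license-file,url=https://licenses.company.com/customer-id/license.tar,path=$(workspace)/licenses/license.tar
```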
I also like the idea of creating our own tool for that purpose. However, I see `wget` as a simple starting point, on which we could potentially develop our own tool in parallel. So, how about getting started with `wget` and evaluating the new tooling on its own track?
I raised the same concern some comments ago, I do not see why we need to rely on a binary. #419 (comment)
I think we can agree that we can implement the downloader, regardless of whether we build our own in go vs. "buy" it (by using a fixed container image with `wget`). IMO deciding which approach to take is beyond what is necessary to mark this EP as "implementable".
In order to avoid completely breaking existing API usage with …
docs/proposals/remote-artifacts.md (Outdated)

> container image which contains this software will be specified via environment variable, and when not informed will use a default value.
>
> Therefore, the operator needs to generate `wget` commands with arguments in order to download the …
> the operator needs to generate `wget` commands with arguments in order to download the

The buildrun controller needs to generate a TaskRun step that would generate the `wget` command to download the artifacts specified.
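As a rough illustration, one such generated step might look like this (the container image and exact flags are illustrative only; the proposal leaves the image configurable via an environment variable):

```yml
# Hypothetical TaskRun step generated per remote artifact, using wget.
steps:
  - name: sources-download-main-jar
    image: docker.io/library/busybox:latest  # illustrative; actual image is configurable
    command:
      - wget
    args:
      - -O
      - $(workspace)/classpath/main.jar
      - https://nexus.company.com/app/main.jar
```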
Thanks, that's a good addition. Please consider the latest commit 01ace39.
```yml
sources:
  - name: project-logo
    type: http
    url: https://user-images.githubusercontent.com/2587818/92114986-69bfb600-edfa-11ea-820e-96cdb1014f58.png
```
Could you use a more realistic example like a jar artifact in all examples, please?
Same as in here.
docs/proposals/remote-artifacts.md (Outdated)

```yml
spec:
  sources:
    - type: http
      url: https://licenses.company.com/customer-id/license.tar
```
Could you use a more realistic example like a jar artifact in all examples, please?
Same as in here.
```yml
sources:
  - name: license-file
    type: http
    url: https://licenses.company.com/customer-id/license.tar
```
Could you use a more realistic example like a jar artifact in all examples, please?
The initial text was using a Jar file, but it wound up in questions about the dependency management itself. So, I moved that to a simpler thing, a license file.
Please note a more Java-like example is kept here.
```yml
metadata:
  name: license-file
spec:
  sources:
```
I am wondering if `sources` really makes sense here. I think of a source as a git repository or something to that effect. Maybe this should be something more along the lines of `dependencies`, like you would list in a Maven file?
My vote is for `inputs`, which would be a symmetric naming together with the `output` image that we already have in the Build.
@SaschaSchwarze0 are you okay with `sources` and the path ahead to unify them later on?
> @SaschaSchwarze0 are you okay with `sources` and the path ahead to unify them later on?

I am okay with `sources`, yes. (But I doubt we would ever change this in the future. ;-) )
I'm concerned that more than one person is not fully buying `spec.sources`. What are the implications for this EP of moving to `spec.inputs` rather than `spec.sources`? It would also keep the API clean for now. And in another EP we will deal with multiple gits, where we can tackle the modifications to `spec.source`.
I'm okay to use `.spec.inputs` for this purpose. Do we have objections on `.spec.inputs`?
not from my side :)
@sbose78, @adambkaplan, @gabemontero, @coreydaley would you object to using `.spec.inputs` instead?
If additional large files are the concern, wouldn't we support whatever system their source code uses to manage dependencies, such as Maven, Bundler, go mod, etc.? As far as other types of large assets go, I think they would be using something like a third-party object store such as S3 for images, videos, and other binary downloads.
I'm pushing changes mentioning …
- keeping `source` in place, and adding `sources` with new remote artifacts;
- avoiding dealing with multiple Git repositories on this EP;
- updating list of reviewers;
Force-pushed from 2d1c2ae to a112672.
docs/proposals/remote-artifacts.md (Outdated)

```yml
---
apiVersion: build.dev/v1alpha1
kind: BuildSource
metadata:
  name: license-file
spec:
  sources:
    - name: license.tar
      type: http
      url: https://licenses.company.com/customer-id/license.tar
      http:
        path: $(workspace)/licenses/license.tar
```
A question around the `BuildSource`. Within the `Build`'s `spec.sources`, it is used from one source entry to reference an external definition. Based on that, my expectation would be that the external build source would only contain a single spec like this:

```yml
---
apiVersion: build.dev/v1alpha1
kind: BuildSource
metadata:
  name: license-file
spec:
  type: http
  url: https://licenses.company.com/customer-id/license.tar
  http:
    path: $(workspace)/licenses/license.tar
```

Should we have multiple entries with their own name in the `BuildSource`, then we'll have to define how this is merged with the other entries in the `Build`'s sources.
I think here we may benefit from a slice of entries, given that we may be downloading a number of dependencies from a single external source and can bundle them together in this slice fashion. It also keeps it aligned with the `Build` resource, where it will be available as an array/slice as well.
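To make the open merging question concrete, here is a sketch of the alternative design being discussed: a `BuildSource` bundling several entries, referenced from a `Build` via `sourceRef` (resource names are made up, and the exact merge rules are precisely what would still need to be defined):

```yml
---
apiVersion: build.dev/v1alpha1
kind: BuildSource
metadata:
  name: java-dependencies    # hypothetical name
spec:
  sources:
    - name: main-jar
      type: http
      url: https://nexus.company.com/app/main.jar
      http:
        path: $(workspace)/classpath/main.jar
---
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: nexus-jar
spec:
  sources:
    - name: java-dependencies
      sourceRef: java-dependencies  # entries above would have to be merged with this Build's own sources
```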
The volumes feature has been captured in #478.
@otaviof a few requested changes from my latest pass at the draft.
We've had a lot of lengthy conversations on this EP. I'd like us to turn to the core decision - whether or not we agree the design as laid out in this proposal is implementable.
docs/proposals/remote-artifacts.md (Outdated)

> We will address supporting multiple Git repositories in a future enhancement proposal. This will involve re-defining the `/workspace/source` location and the `$workspace` placeholder.
>
> ### Standalone CRD
This should be placed in an "Alternatives" section at the bottom of the proposal. I believe we decided that we are not going to pursue this approach.
Leaving this in the main body confuses what is in the proposal vs. out. See the proposal template for guidance:
Following the template, the standalone CRD alternative design was moved into a section at the end.
```yml
---
apiVersion: build.dev/v1alpha1
kind: BuildSource
```
This references the original CRD style of declaring a downloaded resource, which IIRC we are not pursuing. Please update.
Moved to the new "Alternatives" section, please consider.
- Moving `sourceRef` to the alternatives section;
- Adding an "Alternatives" section, and moving the standalone CRD into it;
- Minor changes and formatting;
> Then, we can create the `Build` resource, as per:
@otaviof please correct me if I'm wrong, but here the example is based on the assumption that we support a new `BuildSource` CRD. If we do not have the `BuildSource` CRD in place, then we will still need to have:

- A Build instance to define the git source code
- Two BuildRuns, each one overriding `spec.sources` for each logo type

Is the above correct?
Yes, correct @qu1queee. The example was made to illustrate both scenarios at once, but it might have been leaning too much towards the "Alternative" scenario.
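For reference, a sketch of that scenario without the `BuildSource` CRD, assuming a `BuildRun` is allowed to carry its own `spec.sources` entries (which this EP implies but does not finalize); resource names and the target path are illustrative:

```yml
---
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: project-with-logo          # illustrative name
spec:
  sources:
    - name: src
      type: git
      url: https://github.com/shipwright-io/build.git
---
apiVersion: build.dev/v1alpha1
kind: BuildRun
metadata:
  name: project-with-logo-run      # illustrative name
spec:
  buildRef:
    name: project-with-logo
  sources:
    - name: project-logo
      type: http
      url: https://user-images.githubusercontent.com/2587818/92114986-69bfb600-edfa-11ea-820e-96cdb1014f58.png
      http:
        path: $(workspace)/assets/logo.png  # illustrative target path
```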
ok, thanks
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: qu1queee.
I have no further comments on this EP; therefore approving, thanks @otaviof for addressing each of my questions
Enhancement proposal describing the concept of Remote Artifacts, and how this new concept will be expressed in the Build operator. Related to #97.