
Environment Variable for artifactsLocationTemplate() #10

Closed
johncmckim opened this issue Jul 19, 2015 · 15 comments

@johncmckim
Contributor

Currently all artifacts are uploaded to a bucket with the prefix pipeline/stage/job/pipeline_counter.stage_counter.

We would like to be able to upload artifacts to a bucket with a different prefix specified in a pipeline parameter, e.g. s3://bucket-name/application-name/client-name/version/.

Can GoEnvironment.java be changed to first check whether a specific pipeline parameter exists and use it in place of the standard template?
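
Roughly, I am imagining something along these lines. This is a sketch only, not the plugin's current code, and GO_ARTIFACTS_S3_PREFIX is a placeholder name I have made up for the parameter/variable that would carry the override:

    import java.util.Map;

    // Sketch only, not the plugin's actual GoEnvironment implementation.
    class GoEnvironmentSketch {
        private final Map<String, String> environment;

        GoEnvironmentSketch(Map<String, String> environment) {
            this.environment = environment;
        }

        String artifactsLocationTemplate() {
            // Hypothetical pipeline parameter / environment variable holding the override.
            String override = environment.get("GO_ARTIFACTS_S3_PREFIX");
            if (override != null && !override.trim().isEmpty()) {
                return override;
            }
            // Fall back to the existing convention:
            // pipeline/stage/job/pipeline_counter.stage_counter
            return String.format("%s/%s/%s/%s.%s",
                    environment.get("GO_PIPELINE_NAME"),
                    environment.get("GO_STAGE_NAME"),
                    environment.get("GO_JOB_NAME"),
                    environment.get("GO_PIPELINE_COUNTER"),
                    environment.get("GO_STAGE_COUNTER"));
        }
    }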

@brewkode
Contributor

@johncmckim We need to take care of how getLatestRevision would work too. Right now, getLatestRevision is based on this convention, both in terms of the directory structure and the way the version is maintained (pipeline_counter.stage_counter). So, any changes to the path will affect the fetch-s3-plugin too.
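
To make the coupling concrete, the fetch side effectively has to do something like the sketch below (a rough illustration, not the actual fetch-s3 code); it only works while the last path segment follows the pipeline_counter.stage_counter convention:

    import java.util.Comparator;
    import java.util.List;

    // Illustration of why the convention matters to getLatestRevision: the newest
    // artifact is found by parsing "<pipeline_counter>.<stage_counter>" folder
    // names and taking the numerically largest pair.
    class LatestRevisionSketch {
        static String latestRevision(List<String> revisionFolders) {
            // e.g. ["9.1", "12.1", "12.2"] -> "12.2"
            return revisionFolders.stream()
                    .max(Comparator
                            .comparingInt((String r) -> Integer.parseInt(r.split("\\.")[0]))
                            .thenComparingInt(r -> Integer.parseInt(r.split("\\.")[1])))
                    .orElseThrow(() -> new IllegalStateException("no revisions found"));
        }
    }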

OTOH, if you want to use the publish S3 plugin for just artifact storage, which won't be used for dependency management via the fetch-s3 plugin, then it might work. Thoughts?

@johncmckim
Contributor Author

In our use case, we are only interested in the publish component. Our goal is to publish deployable packages to S3 that are then distributed and installed via our internal systems. The current structure does not work well for this.

However, I can see the issue for those using it for both publish and fetch. If I were using this purely for artifact storage, I would not want to change it away from the default.

Considering this, perhaps another option is setting this on the Publish task configuration. An option to specify your own prefix (e.g. / or /application-name) would give the user the greatest control over the destination of the artifacts.
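
For example (illustrative only; the exact behaviour would be up to the implementation):

    Prefix not set            -> s3://bucket-name/pipeline/stage/job/12.1/...   (current default)
    Prefix = application-name -> s3://bucket-name/application-name/...
    Prefix = /                -> s3://bucket-name/...                           (bucket root)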

@brewkode
Contributor

Exactly. The intention was to use them in tandem. I've had (many) situations where I wanted to publish to a specific prefix in S3. The right solution for that would be to keep it as a task-level configuration (which would override the default settings). It gives the best of both worlds.

@manojlds @ashwanthkumar Thoughts?

@ashwanthkumar
Member

+1

@manojlds
Member

+1, something we should have.

@johncmckim
Contributor Author

Thanks everyone. This is something that we are going to need in the near term. However, I understand our timeline might not fit in with yours.

If you think this might be a while away from being implemented, I am happy to contribute a pull request.

@ashwanthkumar
Member

It would be great if you could send in a PR, we'll be happy to review and merge the changes.

@johncmckim
Contributor Author

Awesome. I'll put something together soon. Looks like a simple change.

@kitplummer
Contributor

Trying to make sure I understand the context here, before I submit another GH issue. Is the objective here to be able to create a 'static' location/bucket in S3 as the destination...to avoid the 'build_number' bucket? This is what I need...so I can deploy artifacts to package repositories (e.g. Yum et al).

@johncmckim
Contributor Author

@kitplummer that's the idea. This pull request adds a prefix field that can be used to replace the default prefix, i.e. normally artifacts are uploaded to pipeline/stage/job/pipelineCounter.stageCounter, but you can change that to something different (it supports parameters and environment variables). I've been using my fork for ages now and it's working well for me.
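
The substitution itself is just environment-variable expansion. Very roughly, something like the sketch below; the ${VAR} syntax and names here are illustrative, so check the pull request for the exact behaviour:

    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Rough sketch of expanding ${VAR} references in a user-supplied prefix.
    class PrefixExpansionSketch {
        private static final Pattern VAR = Pattern.compile("\\$\\{([A-Za-z0-9_]+)\\}");

        static String expand(String prefix, Map<String, String> environment) {
            Matcher matcher = VAR.matcher(prefix);
            StringBuffer result = new StringBuffer();
            while (matcher.find()) {
                String value = environment.getOrDefault(matcher.group(1), "");
                matcher.appendReplacement(result, Matcher.quoteReplacement(value));
            }
            matcher.appendTail(result);
            return result.toString();
        }
    }

    // e.g. expand("my-app/${GO_PIPELINE_LABEL}", env) -> "my-app/15"
    //      when the agent environment has GO_PIPELINE_LABEL=15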

@kitplummer
Contributor

@johncmckim - thanks for the clarification. @ashwanthkumar @manojlds - any idea when the PR will/could be merged into the project?

@timanrebel

@johncmckim is it possible to use the root of an S3 bucket as the destination? When I enter / I get a root folder with an empty name.

@johncmckim
Contributor Author

@timanrebel I can see that happening too. I'm guessing here, but I think the issue is that a / is appended to the destination key before being sent to Amazon. So when you input / as the destination prefix, you end up with a key that is //, causing the empty folder. It should be easy to fix. I'll set a reminder to have a look at it soon.

@johncmckim
Contributor Author

@timanrebel I have resolved this issue and updated the pull request. Apparently keys that start with / cause this.
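
For anyone hitting the same thing, the fix boils down to normalising the key so it never starts with a slash. A minimal sketch of the idea (not necessarily the exact change in the pull request):

    // Minimal sketch: collapse doubled slashes and strip a leading slash so a
    // prefix of "/" maps to the bucket root instead of an empty-named folder.
    class KeyNormalisationSketch {
        static String destinationKey(String prefix, String fileName) {
            String key = (prefix + "/" + fileName).replaceAll("/{2,}", "/");
            return key.startsWith("/") ? key.substring(1) : key;
        }
    }

    // destinationKey("/", "build.zip")         -> "build.zip"          (bucket root)
    // destinationKey("my-app/v1", "build.zip") -> "my-app/v1/build.zip"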

@ashwanthkumar @manojlds any idea when this could be looked at?

@ashwanthkumar
Member

Fixed via #11
