CI: Skip uploading artifacts and crash reports to paid services if API key/secret env vars aren't set #66
Conversation
@aminya please take a look. If you like this, or have any suggestions for improvement, this PR can give us green CI on the "Release Branch Build" and "Nightly Release" pipelines. Since this is one of those things that benefits forks in general, we might consider how best to format and code this for posting as a PR to upstream.
The reason I made the slightly funky "skip" option for S3 is that if upstream merges this, minus the commits that turn it off, it won't disrupt their release flow. If we can convince them to change their CI config, then we have more options for how to format this. In particular, you can't easily check whether a secret variable in Azure Pipelines is empty, as the variable tends to expand to its full name, literally a string containing the variable's own name. That's why I made this on by default in the JS, but you can disable it by setting a flag in the pipeline.
Updated. It occurs to me these need not be flags. We can simply check an environment variable. That would make the change for this PR live exclusively in the upload scripts. @aminya Would you approve of this approach? (The variable names …)
Edit: Hmm, it doesn't even try to run the upload-artifacts script on most branches. Have to try this on:

Edit 2: Testing these scenarios on my fork.
Hoping to squash the changes to something more reasonable before merging.
I made an upstream issue to try to get a discussion there, but it's only been a couple of days, so they haven't had much time to respond... Still, I think they have other things on their plates at the moment keeping them busy. +1 to move forward on this here. I might want to check for some edge cases or additional scripts I missed, but honestly I found there are a lot of hard-coded URLs to these services.
I came up with a different approach you might want to take a look at: master...DeeDeeG:WIPPPPP

Just skip if S3 credential env vars aren't set. And then we can disable all the S3 and Linux (PackageCloud.io) uploads in yml and JS. I'm unclear whether upstream wants these toggles, and they're honestly a bit tricky to get right, so we could bluntly disable them here and just revert to what upstream has some time in the future if we want S3 uploads again.
Here's something I realized we can do: a fully realized version of the WIPPPPP branch I posted above. If an unset Azure DevOps variable is passed to a step, the corresponding environment variable comes through blank on the script side. So basically we can just do a sanity check that we have credentials to authenticate to these services, and otherwise skip uploading to them.

That's a way to do this without touching the YAML, and it should be reasonable enough for upstream too. But even if upstream won't take this, it works for us/forks without having to set any new environment variables. No new documentation for a new variable needed, less mental load for forks setting up their CIs, more automatic.

At this point I am happy with how this looks. If CI passes I want to overwrite this PR with that approach, change the PR title to "Skip uploading to paid services if credential env vars not set" or something, and we can merge it.

Update: Working as intended, other than some errors because we have no existing Nightly releases posted: https://dev.azure.com/DeeDeeG/b/_build/results?buildId=267&view=logs&j=8d802004-fbbb-5f17-b73e-f23de0c1dec8&t=18812538-7d35-526e-8e7b-36b6ab8ed5eb&l=15

Particularly this bit:

> Environment variables "ATOM_RELEASES_S3_KEY" and/or "ATOM_RELEASES_S3_SECRET" are not set, skipping S3 upload.
> Environment variable "PACKAGE_CLOUD_API_KEY" is not set, skipping PackageCloud upload.

That's the Nightly pipeline; see also this run of the Release Branch pipeline, working as intended:

> Environment variables "ATOM_RELEASES_S3_KEY" and/or "ATOM_RELEASES_S3_SECRET" are not set, skipping S3 upload.

Suggested new PR title for this version of the fix: "CI: Skip uploading artifacts and crash reports to paid services if API key/secret env vars aren't set"
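The credential sanity check can be sketched as follows. This is an illustrative reconstruction, not the exact code in `script/vsts/upload-artifacts.js`; only the environment variable names and log message are taken from the CI output quoted above.

```javascript
// Illustrative sketch: if the S3 credential env vars are missing or
// blank, log a message and skip the upload instead of erroring out.
function haveS3Credentials(env) {
  return Boolean(env.ATOM_RELEASES_S3_KEY && env.ATOM_RELEASES_S3_SECRET);
}

function maybeUploadToS3(env, doUpload) {
  if (!haveS3Credentials(env)) {
    console.log(
      'Environment variables "ATOM_RELEASES_S3_KEY" and/or ' +
        '"ATOM_RELEASES_S3_SECRET" are not set, skipping S3 upload.'
    );
    return false;
  }
  doUpload();
  return true;
}
```

The upload callback only runs when both variables are present and non-empty, which is what makes the unset-variable case safe.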
Going to upload (force push) the better version, the one that doesn't require us to set any variables. Bookmarking the old commit, so we don't lose the original design/first pass attempt at this PR: 0793a75 (For historical reference, an even older tip of this PR's branch: e68ddc2)

Edit: Done. Force pushed to f748861.
These env vars are credentials to authenticate to S3. If they're unset, we can't possibly upload to S3. So we shouldn't try to.
We can't authenticate to PackageCloud without an API key, so don't try to.
I mean, I feel silly being impassioned about this, since it doesn't matter dearly, but I worked hard on my version. The scripts we are editing right now are strictly part of CI; we would never see the workaround outside of CI. I don't like having workarounds anywhere, but one place in the CI vs. another place in the CI is all the same to me. So I am left wondering why I should overwrite my code for another solution. I think it's an overboard type of solution... Hmm. But we will want to use the same thing elsewhere, too.

I'd rather not commit a quick workaround if they can get back to us soon. If they don't get back to us quickly then I'll move forward with your suggestion. By the way, I am validating that your suggestion works as intended. Once my testing happens I'll understand this workaround better.
I appreciate your effort, and certainly, that is not wasted. The code caused me to see the real problem; I just changed the solution slightly. As a general note, if you could communicate the problem directly and compactly, I may be able to help find the solution, so you don't have to spend so much time on it.
I don't want to be the bearer of bad news, and I was actually pretty sure your solution would work, but I don't think you tried what happens when a var is unset on the Azure Pipelines side of things. In my tests, this doesn't fix the problem; it just changes what kind of output there is for unset variables.
This reverts commit b5d6cb9.
Co-authored-by: Amin Yahyaabadi <[email protected]>
Hi again, so as I understand we want these qualities:
The last thing, I have no idea how to do.
For example, if we adequately pass the S3_KEY to the script, as needed by upstream... we won't get a blank var in JS. So we have to do that double check thing I coded up. I should make a minimal version of our real CI setup and see how it behaves there.
All of these work as expected. I just showed you.

Set:

Not set:

When a variable is not available, it does not matter whether it was meant to be secret or not!
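In Node terms, the claim above boils down to this (a tiny illustrative helper, not code from the PR): whether or not the variable was secret, if it never reached the step, the script sees nothing usable.

```javascript
// An env var that was never set reads as undefined in Node, while one
// mapped from an unset Azure variable may arrive as an empty string.
// A simple truthiness-style check treats both as "not available".
function credentialAvailable(value) {
  return typeof value === 'string' && value.length > 0;
}
```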
I have been trying to run some version of our real CI with this, but sadly it fails for unrelated reasons in the Windows build step. Maybe I'll make a branch where I can experiment. I can disable Windows there just to see this work properly.
For example, the mapped variable: isn't that macro syntax? (Secret is relevant, because that's why we need to pass the value in as an environment variable.)
Yes. Just try my example. #66 (comment)
Your example in screenshots is working, so I just want to see Node running a JS script in that setup. I want to get this right, because when CI stops working in a way that I can't figure out how to debug, it's worse than having no CI. I have been there before on other projects and it's not fun. So I just want to validate this in real-world-like conditions (Node running a JS script), then we should be good. Probably just needs a small test script in the pipeline; Node is installed already in the images.
I tried this and haven't gotten it working quite yet. Here are my files: https://github.com/DeeDeeG/azure-pipelines-test/blob/c0e94a7c0b/console_log_js.yml

Here is my output:

SECRET_VAR is:undefined
NONSECRET_VAR is:hello this is not secret
UNSET_VAR is:

Here are my vars:
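The kind of minimal test script being run here can be sketched like this (a hypothetical `test.js`; the variable names match the output above, and the `describeVar` helper is illustrative):

```javascript
// Print each pipeline-provided variable exactly as Node sees it.
// An unset variable prints as "undefined"; a variable mapped from an
// unset Azure variable may print as an empty string.
function describeVar(name, env = process.env) {
  return `${name} is:${env[name]}`;
}

for (const name of ['SECRET_VAR', 'NONSECRET_VAR', 'UNSET_VAR']) {
  console.log(describeVar(name));
}
```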
I am not sure what is going on with your example.

Not set:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'
# Does not work without this:
variables:
  secrectVar: $[ variables['secretVar'] ]

steps:
- pwsh: |
    node test.js # https://github.com/aminya/threadsjs-test/blob/azure-pipelines/test.js
  env:
    secrectVar_Macro: $(secrectVar)
You have two different vars: `secretVar` and `secrectVar`. Is this intended? I am trying it, but oddly it seems both need to be set. I am trying a few combinations.
That's a typo. Overwriting the old variable might not work; Azure has not taken a smart design approach for its macro processing. It is very fragile and opinionated. But I remember overwriting the variables using this method. It should be fine. If overwriting the old variable does not work, you should rename them, for example by adding a suffix.
Anyways, I don't want to rush this PR for now. Please stop putting more time into this, since it isn't worth it. Azure should fix their broken syntax. Having workarounds, even temporarily, is not a good sign, and I don't consider it a good PR for the upstream repository. We have other things to improve and maintain which directly affect the end-user.
I have this working:
# minimal-variables-test.yml
# Modified from: https://developercommunity.visualstudio.com/content/problem/1151302/inconsistent-behavior-between-variablessecuredVar-a.html
variables:
  secured_var_mapped: $[variables.SECURED_VAR]
  public_var_mapped: $[variables.PUBLIC_VAR]
  unset_var_mapped: $[variables.UNSET_VAR]

steps:
- script: |
    echo "SECURED_VAR is: $SECURED_VAR"
    test "$SECURED_VAR" && echo "truthy" || echo "falsy"
    echo
    echo "PUBLIC_VAR is: $PUBLIC_VAR"
    test "$PUBLIC_VAR" && echo "truthy" || echo "falsy"
    echo
    echo "UNSET_VAR is: $UNSET_VAR"
    test "$UNSET_VAR" && echo "truthy" || echo "falsy"
  env:
    SECURED_VAR: $(secured_var_mapped)
    PUBLIC_VAR: $(public_var_mapped)
    UNSET_VAR: $(unset_var_mapped)

yml code: https://github.com/DeeDeeG/azure-pipelines-test/blob/c9bfb19/minimal-variables-test.yml
Closing this in favor of the local clone that allows me to edit: #99

Also, this has too many comments and it's hard to keep track of things.
Issue or RFC Endorsed by Atom's Maintainers
#1 (comment)
Description of the Change
- Skip uploading artifacts to S3 if the key/secret aren't set, in script/vsts/upload-artifacts.js
- Skip uploading crash reports to S3 if the key/secret aren't set, in script/vsts/upload-crash-reports.js
- Skip uploading Linux packages (.deb, .rpm) to PackageCloud.io if API key isn't set, in script/vsts/upload-artifacts.js
This is to stop these uploads from being attempted with misconfigured/blank credentials or upload destinations. Such misconfigured uploads error out and cause our CI to fail at the last step, after all tests have passed.
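The PackageCloud side of this guard can be sketched as below. The structure is illustrative, not the exact code from the PR; the env var name and log message come from the CI output quoted earlier in the thread.

```javascript
// Skip the PackageCloud (.deb/.rpm) upload entirely when no API key
// is configured, instead of attempting an upload that would error out.
function shouldUploadLinuxPackages(env) {
  if (!env.PACKAGE_CLOUD_API_KEY) {
    console.log(
      'Environment variable "PACKAGE_CLOUD_API_KEY" is not set, skipping PackageCloud upload.'
    );
    return false;
  }
  return true;
}
```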
Alternate Designs
None.
Possible Drawbacks
None
Verification Process
CI is showing that these changes are working. See this comment for details: #66 (comment)
Release Notes
N/A