
CI: Skip uploading artifacts and crash reports to paid services if API key/secret env vars aren't set #66

Conversation

@DeeDeeG DeeDeeG commented Jul 25, 2020

Issue or RFC Endorsed by Atom's Maintainers

#1 (comment)

Description of the Change

  • Skip uploading build artifacts to Amazon S3 if no credentials are set, in script/vsts/upload-artifacts.js
    • Similarly, skip uploading crash reports to Amazon S3 if credentials aren't set, in script/vsts/upload-crash-reports.js
  • Skip uploading Linux packages (.deb, .rpm) to PackageCloud.io if API key isn't set, in script/vsts/upload-artifacts.js

This is to stop these uploads from being attempted with misconfigured/blank credentials or upload destinations. Such misconfigured uploads error out and cause our CI to fail at the last step, after all tests have passed.
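The check can be as simple as a truthiness test on the credential variables before attempting each upload. A minimal sketch (function names are illustrative, not the literal `script/vsts/upload-artifacts.js` code; the message strings match what the CI logs later in this thread show):

```javascript
// Sketch: decide whether each upload step should run at all.
// Skips (rather than errors) when a credential is missing or blank.
function shouldUploadToS3(env) {
  if (!env.ATOM_RELEASES_S3_KEY || !env.ATOM_RELEASES_S3_SECRET) {
    console.log(
      'Environment variables "ATOM_RELEASES_S3_KEY" and/or ' +
        '"ATOM_RELEASES_S3_SECRET" are not set, skipping S3 upload.'
    );
    return false;
  }
  return true;
}

function shouldUploadToPackageCloud(env) {
  if (!env.PACKAGE_CLOUD_API_KEY) {
    console.log(
      'Environment variable "PACKAGE_CLOUD_API_KEY" is not set, skipping PackageCloud upload.'
    );
    return false;
  }
  return true;
}
```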

Alternate Designs

None.

Possible Drawbacks

None

Verification Process

CI is showing that these changes are working. See this comment for details: #66 (comment)

Release Notes

N/A

DeeDeeG commented Jul 25, 2020

@aminya please take a look.

If you like this, or have any suggestions for improvement, this PR can give us green CI on the "Release Branch Build" and "Nightly Release" pipelines.

Since this is one of those things that benefits forks in general, we might consider how best to format and code this for posting as a PR to upstream.

DeeDeeG commented Jul 25, 2020

The reason I made the slightly funky "skip" option for S3 is that if upstream merges this, minus the commits that turn it off, it won't disrupt their release flow.

If we can convince them to change their CI config, then we have more options for how to format this.

Particularly, you can't easily check whether a secret variable in Azure Pipelines is empty, because the variable tends to expand to its own macro text: literally the string $(VAR_NAME) rather than a truly empty string. If we check for a secret variable (such as the S3 authentication keys) being empty, that requires us and any forks to explicitly set the var to empty, rather than being able to leave it unset.

That's why I made this on by default in the JS, but you can disable it by setting a flag in the pipeline .yml files.

@DeeDeeG DeeDeeG force-pushed the CI-skip-paid-service-release-artifact-uploads branch from fb922f2 to 5440888 Compare July 26, 2020 16:05
DeeDeeG commented Jul 26, 2020

Updated.


It occurs to me these need not be .yml flags. We can simply check an environment variable (example names: $SKIP_UPLOAD_TO_S3 and $SKIP_UPLOAD_TO_PACKAGECLOUD) and leave the .yml files alone.

That would make the change for this PR exclusively live in script/vsts/upload-artifacts.js. No need to change the .yml files.

@aminya Would you approve of this approach?

(The variable names SKIP_UPLOAD_TO... vs SHOULD_UPLOAD_TO... aren't super important, but I feel SKIP_UPLOAD_TO... better conveys that uploading is enabled by default. If we can convince upstream to make uploading to paid services disabled by default, then the variable names should be SHOULD_UPLOAD_TO....)
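If we went the env-var route, the whole change could live in the JS. A sketch of the proposed opt-out flag (variable names from the comment above; the exact semantics are an assumption):

```javascript
// Sketch: uploads stay enabled by default; a fork sets e.g.
// SKIP_UPLOAD_TO_S3=true in its pipeline to opt out, with no .yml changes.
function uploadEnabled(env, skipVarName) {
  return env[skipVarName] !== 'true';
}
```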

@aminya aminya added the CI label Jul 26, 2020
@DeeDeeG DeeDeeG force-pushed the CI-skip-paid-service-release-artifact-uploads branch from 5440888 to 47fe493 Compare July 29, 2020 17:07
@DeeDeeG DeeDeeG marked this pull request as draft July 29, 2020 19:35
@DeeDeeG DeeDeeG force-pushed the CI-skip-paid-service-release-artifact-uploads branch 2 times, most recently from d145d8e to f2b6f8d Compare July 29, 2020 22:04
DeeDeeG commented Aug 7, 2020

The UploadArtifacts job passes: https://dev.azure.com/atomcommunity/atomcommunity/_build/results?buildId=467&view=logs&j=fc7b8a14-4c68-54ee-1e90-04bf630ba360

Edit: Hmm, it doesn't even try to run the upload-artifacts script on most branches. Have to try this on:

  • A branch named xx.yy-releases, or on the Nightly pipeline, or with env var IS_RELEASE_BRANCH set to a JS "true" value, AND with the variable Atom.AutoDraftRelease set to a JS "true" value.
    • For the "Create Draft Release" script task
  • A branch named electron-xyz (or master), or with the env var IS_SIGNED_ZIP_BRANCH set to a JS "true" value.
    • For the "Upload CI Artifacts to S3" task.
  • On the "Nightly Release" pipeline
    • For the "Create Nightly Release" task

Edit 2: Testing these scenarios on my fork.

@DeeDeeG DeeDeeG left a comment


Hoping to squash the changes to something more reasonable before merging.

@DeeDeeG DeeDeeG force-pushed the CI-skip-paid-service-release-artifact-uploads branch 2 times, most recently from e68ddc2 to b05628d Compare August 7, 2020 19:51
@aminya

This comment has been minimized.

DeeDeeG commented Aug 12, 2020

I made an upstream issue to try to get a discussion there, but it's only been a couple of days, so they haven't had much time to respond... Still, I think they have other things on their plates at the moment keeping them busy.

+1 to move forward on this here.

I might want to check for some edge cases or additional scripts I missed, but honestly I found there are a lot of hard-coded URLs to the atom/atom repo, etc. I don't think I can get to all of that in one PR without it taking forever. This is a good-enough start.

DeeDeeG commented Aug 13, 2020

I came up with a different approach you might want to take a look at: master...DeeDeeG:WIPPPPP

Just skip if the S3 credential env vars aren't set. Then we can disable all the S3 and Linux (PackageCloud.io) uploads in the YAML and JS. I'm unclear whether upstream wants these toggles, and they're honestly a bit tricky to get right, so we could bluntly disable them here and just revert to what upstream has some time in the future if we want S3 uploads again.

DeeDeeG commented Aug 13, 2020

Here's something I realized we can do, a fully realized version of the WIPPPPP branch I posted above.

master...DeeDeeG:WIPPPPP2

If an unset Azure DevOps variable is passed to a step via the env: blocks of the YAML, it will show up in the environment as literally $(VAR_NAME_HERE). Since this useless value is predictable, we can check for it to determine that the variable was unset.

So basically we can just do a sanity check that we have credentials to authenticate to these services, and otherwise skip uploading to them.

That's a way to do this without touching the YAML, and it should be reasonable enough for upstream too, but even if upstream won't take this, it works for us/forks without having to set any new environment variables. No new documentation for a new variable needed, less mental load for forks setting up their CIs, more automatic.
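The sanity check described above might look like this sketch (the helper name is hypothetical; the idea is to treat both a blank value and the unexpanded macro text as "unset"):

```javascript
// Sketch: a credential counts as unset when it is blank OR when Azure left
// the macro unexpanded as the literal text "$(VAR_NAME)".
function haveCredential(varName, env) {
  const value = env[varName];
  return Boolean(value) && value !== `$(${varName})`;
}
```

This works outside Azure too, since no other environment would ever set a variable to its own macro text.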

At this point I am happy with how this looks. If CI passes I want to overwrite this PR with that approach, change the PR title to "Skip uploading to paid services if credential env vars not set" or something, and we can merge it.


Update: Working as intended, other than some errors because we have no existing Nightly releases posted: https://dev.azure.com/DeeDeeG/b/_build/results?buildId=267&view=logs&j=8d802004-fbbb-5f17-b73e-f23de0c1dec8&t=18812538-7d35-526e-8e7b-36b6ab8ed5eb&l=15

Particularly this bit:

```
Environment variables "ATOM_RELEASES_S3_KEY" and/or "ATOM_RELEASES_S3_SECRET" are not set, skipping S3 upload.

Environment variable "PACKAGE_CLOUD_API_KEY" is not set, skipping PackageCloud upload.
```

That's the Nightly pipeline, see also this run of the Release Branch pipeline, working as intended:

https://dev.azure.com/DeeDeeG/b/_build/results?buildId=266&view=logs&j=fc7b8a14-4c68-54ee-1e90-04bf630ba360&t=9ad4a9bf-2116-54e4-db45-9da1e971fb4f&l=15

```
Environment variables "ATOM_RELEASES_S3_KEY" and/or "ATOM_RELEASES_S3_SECRET" are not set, skipping S3 upload.
```

Suggested new PR title for this version of the fix: "CI: Skip uploading artifacts and crash reports to paid services if API key/secret env vars aren't set"

DeeDeeG commented Aug 14, 2020

Going to upload (force push) the better version, the one that doesn't require us to set any variables.

Bookmarking the old commit, so we don't lose the original design/first pass attempt at this PR: 0793a75

(For historical reference, an even older tip of this PR's branch: e68ddc2)

Edit: Done. Force pushed to f748861.

Commit messages from the force push below:

> These env vars are credentials to authenticate to S3. If they're unset, we can't possibly upload to S3. So we shouldn't try to.
>
> We can't authenticate to PackageCloud without an API key, so don't try to.
@DeeDeeG DeeDeeG force-pushed the CI-skip-paid-service-release-artifact-uploads branch from 0793a75 to f748861 Compare August 14, 2020 18:25
@DeeDeeG DeeDeeG changed the title CI: Make uploading artifacts to paid services configurable, and explicitly disable them for our fork CI: Skip uploading artifacts and crash reports to paid services if API key/secret env vars aren't set Aug 14, 2020
DeeDeeG commented Aug 16, 2020

I mean, I feel silly being impassioned about this, since it doesn't matter all that much, but I worked hard on my version.

The scripts we are editing right now (script/vsts/upload-*.js) are strictly part of CI. The way I had set it up was a bit more verbose (an extra line of code per var), but it would work outside of Azure. The workaround only helps us on Azure, but it doesn't hurt anywhere.

We would never see the $(VAR_NAME) value outside of Azure, and that's no problem: we are checking that the value is NOT that literal string, and that check behaves correctly outside of Azure too. Checking that the var is not that weird value only Azure would ever set && checking that the var is truthy is a valid check anywhere. It will still work outside of Azure, no problem.

I don't like having workarounds anywhere, but one place in the CI vs. another place in the CI is all the same to me. So I am left wondering why I should overwrite my code for another solution; it seems like an overboard kind of solution... Hmm. But we will want to use the same thing for script/build too, so I begin to see your point.

I'd rather not commit a quick workaround if they can get back to us soon. If they don't get back to us quickly then I'll move forward with your suggestion.

By the way, I am validating that your suggestion works as intended. Once my testing is done I'll understand this workaround better.

aminya commented Aug 16, 2020

I appreciate your effort, and certainly it was not wasted. The code helped me see the real problem; I just changed the solution slightly.

As a general note, if you could communicate the problem directly and compactly, I may be able to help find the solution, so you don't have to spend so much time on it.

DeeDeeG commented Aug 16, 2020

I don't want to be the bearer of bad news, and I was actually pretty sure your solution would work, but I don't think you tried what happens when a var is unset on the Azure Pipelines side of things.

In my tests, this doesn't fix the problem; it just changes the output for unset variables from $(VAR) to $[ variables['VAR'] ]

https://dev.azure.com/DeeDeeG/b/_build/results?buildId=337&view=logs&j=12f1170f-54f2-53f3-20dd-22fc7dff55f9&t=bd05475d-acb5-5619-3ccb-c46842dbc997

aminya commented Aug 17, 2020

The Macro syntax works fine:

No var: (screenshot)

With secret var: (screenshot)

```yaml
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

# Does not work without this:
variables:
  secrectVarCopy: $[ variables['secretVar'] ]

steps:
- pwsh: |
    echo secrectVar_Macro         $env:secrectVar_Macro
    echo secrectVar_CompileTime   $env:secrectVar_CompileTime
    echo secrectVar_Runtime       $env:secrectVar_Runtime

  env:
    secrectVar_Macro: $(secrectVarCopy)
    secrectVar_CompileTime: ${{variables.secrectVarCopy}}
    secrectVar_Runtime: $[ variables['secrectVarCopy'] ]
```

DeeDeeG commented Aug 17, 2020

Hi again,

so as I understand we want these qualities:

  • Secret vars that are SET will be passed to scripts
  • Non-secret vars that are SET will be passed to scripts
  • Non-secret vars that are UNSET will appear blank or unset to scripts
  • Vars passed as if secret, but UNSET in Azure UI will appear blank or unset to scripts

The last thing I have no idea how to do.

DeeDeeG commented Aug 17, 2020

For example, if we adequately pass the S3_KEY to the script, as upstream needs... we won't get a blank var in JS. So we have to do that double-check thing I coded up.

I should make a minimal version of our real CI setup and see if upload-artifacts.js skips, because if it does, then all this theory doesn't matter.

aminya commented Aug 17, 2020

All of these work as expected. I just showed you.

Set:

  • Secret vars passed as is
  • Non-secret vars passed as is

Not set:

  • Empty string will be passed

When a variable is not available, it does not matter whether it was meant to be secret or not!

DeeDeeG commented Aug 17, 2020

I have been trying to run some version of our real CI with this, but sadly it fails for unrelated reasons in the Windows build step. Maybe I'll make a branch where I can experiment. I can disable Windows there just to see this work properly.

DeeDeeG commented Aug 17, 2020

For example, the $(macro) syntax passed in an env: block never produces a blank var on the script side. It produces the literal string '$(SOME_VAR_NAME_HERE)'.

Isn't macro $(VAR) the syntax you were proposing to use? Maybe I am truly misunderstanding.

(Secret status is relevant because it's why we need to pass the var in an env: block; otherwise we break upstream's use case.)
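To illustrate the concern, here is the behavior simulated in plain Node (no Azure involved; the variable name is just an example): a bare truthiness check does not catch the unexpanded macro, which is why the extra string comparison is needed.

```javascript
// Simulate the environment a script sees when Azure leaves a macro unexpanded.
const env = { ATOM_RELEASES_S3_KEY: '$(ATOM_RELEASES_S3_KEY)' };

// A plain truthiness check wrongly concludes the credential is present,
// because the literal macro text is a non-empty (truthy) string.
const looksSet = Boolean(env.ATOM_RELEASES_S3_KEY);
```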

aminya commented Aug 17, 2020

Yes. Just try my example. #66 (comment)

DeeDeeG commented Aug 17, 2020

Your example in the screenshots is working, so I just want to see Node running a JS script and behaving the way we expect CI to. I'll see if I can code up a minimal test for that; shouldn't be hard. Will take me some time, but hopefully not long.

I want to get this right, because when CI stops working in a way that I can't figure out how to debug, it's worse than having no CI. I have been there before on other projects and it's not fun. So just want to validate this in real-world-like conditions (Node running a JS script) then we should be good.

Probably just needs `node script.js` in a `script:` step.

Node is installed already in the images.

DeeDeeG commented Aug 17, 2020

aminya commented Aug 17, 2020

I am not sure what is going on with your example:

Set: (screenshot)

Not set: (screenshot)

```yaml
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

# Does not work without this:
variables:
  secrectVar: $[ variables['secretVar'] ]

steps:
- pwsh: |
    node test.js   # https://github.com/aminya/threadsjs-test/blob/azure-pipelines/test.js

  env:
    secrectVar_Macro: $(secrectVar)
```

DeeDeeG added a commit to DeeDeeG/atom that referenced this pull request Aug 17, 2020
DeeDeeG commented Aug 17, 2020

You have two different vars. Is this intended? secretVar and secrectVar.

I am trying it, but oddly it seems both need to be set. I am trying a few combinations.

aminya commented Aug 17, 2020

> You have two different vars. Is this intended? secretVar and secrectVar.
>
> I am trying it, but oddly it seems both need to be set. I am trying a few combinations.

That's a typo. Overwriting the old variable might not work. Azure has not taken a smart design approach for its macro processing; it is very fragile and opinionated. But I remember overwriting variables using this method. It should be fine.

If overwriting the old variable does not work, you should rename the variables. For example, add _ before them to show that these are intermediate variables that should be removed once Azure fixes the bugs.

DeeDeeG commented Aug 17, 2020

| secretVar | secrectVar | output |
| --- | --- | --- |
| unset | unset | `node:` |
| set `regular` | unset | `node: regular` |
| set `regular` | set `regular2` | `node: regular` |
| unset | set `regular2` | `node:` |
| unset | set `secret2` | `node:` |
| set `regular` | set `secret2` | `node: ***` |
| set `secret` | set `secret2` | `node: ***` |
| set `secret` | set `regular2` | `node: ***` |
| set `secret` | unset | `node: ***` |

(When I say "set regular", I literally put in the value regular, not as a secret var. When I say "set secret2", I literally put in the value secret2 and made it a secret variable. "unset" is just unset.)

aminya commented Aug 17, 2020

Anyway, I don't want to rush this PR for now. Please stop putting more time into this, since it isn't worth it. Azure should fix their broken syntax. Having workarounds, even temporary ones, is not a good sign, and I don't consider it a good PR for the upstream repository.

We have other things to improve and maintain that directly affect the end user.

@DeeDeeG

This comment has been minimized.

@DeeDeeG

This comment has been minimized.

DeeDeeG commented Aug 18, 2020

I have this working:

```
SECURED_VAR is: ***
truthy

PUBLIC_VAR is: thisisquitepublic
truthy

UNSET_VAR is:
falsy
```

CI link:
https://dev.azure.com/DeeDeeG/pipelines-tests/_build/results?buildId=546&view=logs&j=12f1170f-54f2-53f3-20dd-22fc7dff55f9&t=9c939e41-62c2-5605-5e05-fc3554afc9f5


```yaml
# minimal-variables-test.yml
# Modified from: https://developercommunity.visualstudio.com/content/problem/1151302/inconsistent-behavior-between-variablessecuredVar-a.html

variables:
  secured_var_mapped: $[variables.SECURED_VAR]
  public_var_mapped: $[variables.PUBLIC_VAR]
  unset_var_mapped: $[variables.UNSET_VAR]

steps:
- script: |
    echo "SECURED_VAR is: $SECURED_VAR"
    test "$SECURED_VAR" && echo "truthy" || echo "falsy"

    echo
    echo "PUBLIC_VAR is: $PUBLIC_VAR"
    test "$PUBLIC_VAR" && echo "truthy" || echo "falsy"

    echo
    echo "UNSET_VAR is: $UNSET_VAR"
    test "$UNSET_VAR" && echo "truthy" || echo "falsy"

  env:
    SECURED_VAR: $(secured_var_mapped)
    PUBLIC_VAR: $(public_var_mapped)
    UNSET_VAR: $(unset_var_mapped)
```

yml code: https://github.com/DeeDeeG/azure-pipelines-test/blob/c9bfb19/minimal-variables-test.yml

@aminya aminya mentioned this pull request Aug 21, 2020
aminya commented Aug 21, 2020

Closing this in favor of the local clone that allows me to edit: #99

Also, this thread has too many comments and it's hard to keep track of things.
