Unauthorized cache warning #6765
If we could have the command that was run, and a diff between what happens in
Not 100% sure how to supply this, but I'll give you what I can. `turbo.json`:

Using 1.11, run:

Using 1.10, the above is not an issue.
Can you confirm that on 1.10 the remote cache gets written to? e.g.
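One way to check this independently of turbo is to probe the cache directly. This is a sketch: self-hosted caches implement the Vercel Remote Cache artifacts API, and the host, team, token, and hash below are all placeholders — substitute values from your own setup (a task hash can be found in the run logs or via `turbo run … --summarize`).

```shell
# Placeholders — replace with your cache URL, team, token, and a task hash.
TURBO_API="https://my-cache.example.com"
TURBO_TEAM="team_my"
TURBO_TOKEN="my_token"
HASH="0123456789abcdef"

URL="${TURBO_API}/v8/artifacts/${HASH}?teamId=${TURBO_TEAM}"

# 200 means the artifact exists, 404 means it was never written,
# 401/403 point at a token/team mismatch. (`|| true` keeps the probe
# from aborting a script when the host is unreachable.)
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer ${TURBO_TOKEN}" \
  "$URL" || true
```

If the status code flips from 200 on 1.10 to 404 after the upgrade, the cache is genuinely not being written; if it's 200 on both, the problem is on the read/verification side.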
@chris-olszewski I can confirm that, running the above commands with 1.10, the remote cache was written.
Getting a very similar error too after upgrading. I'm running

And if I run with
Having bumped into this issue, I can also confirm this using ducktors/turborepo-remote-cache.
Moreover, it seems that this warning may not be a remote caching issue, but a generic "artifact verification" problem, as I could confirm the same bug with local filesystem caches:
The task I tested all the above with does not have any extra outputs, only stdout. Hope that helps.
Also getting this error in our repo here - https://github.com/sovereign-nature/deep
@attila Also, can you confirm if this is still happening in v1.11.3?
The issue is really random; however, I am still able to reproduce it locally. See screenshot: On my private repo in Bitbucket, an interesting case where the slug is suffixed to the warning: On my public repo in GitHub, in the runner log I get a different error: Could this be related?
Getting
Hey all, I'm still investigating this. If anyone has any artifacts that they're willing to share to help debug, that'd be appreciated, since I'm having problems reproducing this on my end. @admmasters If you use the latest version of
Yes, it works perfectly with the
@admmasters can you provide any details about the certificate your remote cache is using? Is it self-signed, or does it roll up to a custom root certificate?
@gsoltis It's self-signed, yeah.
@admmasters Got it. OK, I believe you have a separate issue from what's reported here. This is likely a difference in the default cert verification behavior of the HTTP clients (cc @Zertsov @NicholasLYang)
Hi @admmasters, could you open a new issue? And if possible, include the following details: Are you using a proxy? Can you validate that you are getting cache hits using the
Hi all, we're continuing to look into this. If people who ran into issues could try again with
I did a quick test where in our monorepo:
Note that instead of the previously reported "artifact verification failed" warnings with 1.11.2, I now get "tcp connect error: Bad file descriptor". Token, team and the remote cache URL are all provided via env vars:

```shell
TURBO_API=https://mylambda.lambda-url.eu-west-1.on.aws
TURBO_TEAM=team_my
TURBO_TOKEN=my_token
```

I ran this on an Intel Mac, macOS 14.2.1 (23C71).
@attila Interesting. My suspicion is that the credentials are fine, but something specific to your setup differs in the TLS stack between the Rust and Go implementations. If you're comfortable with it, can you email me ([email protected]) the domain name (please no credentials or private keys) that is hosting your remote cache? I'd like to try to confirm whether we can make a TLS connection to it. Also, any details you're aware of about the TLS setup would be helpful. For instance: is the root cert for the domain signed by one of the default Certificate Authorities? Or do you have a custom root CA on your machine?
@gsoltis I just sent an email with the full base URL of the service. Regarding TLS setup, I'm unaware of any exotic setup there; the service runs on a Lambda function URL with the default hostname provided by AWS.
@attila Thanks, we'll see what we can find. AFAICT there is nothing weird with the endpoint setup (just poking at it w/
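For anyone who wants to do the same kind of poking, one way to inspect a certificate's chain of trust is with `openssl`. This is a sketch: it generates a throwaway self-signed certificate locally (hostname is a placeholder) just so there is something concrete to inspect; for a live endpoint, use the commented `s_client` form with your cache's hostname.

```shell
# Generate a throwaway self-signed cert to demonstrate the inspection
# (the CN is a placeholder, not a real endpoint).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=my-cache.example.com" 2>/dev/null

# Issuer == subject is the signature of a self-signed cert; a cert that
# rolls up to a public CA shows a different issuer here.
openssl x509 -in /tmp/demo-cert.pem -noout -issuer -subject

# For a live endpoint:
#   openssl s_client -connect my-cache.example.com:443 \
#     -servername my-cache.example.com </dev/null 2>/dev/null \
#     | openssl x509 -noout -issuer -subject
```

If the issuer on your cache's cert is not a well-known CA, that would point at the custom-root/self-signed variant of this issue rather than the credentials.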
@attila I've tried running

Let's see if we can isolate the problem. Can you try with a new monorepo:
Note that the logs will include your domain name. You should get an error writing to the cache, but hopefully the more verbose logging will give us a clue where to look.
Here's the sanitised log output from the command above
I observed that I am no longer getting errors, and there is a remote cache hit on subsequent attempts (I cleaned up …). I am still trying to understand the differences between our own and the scaffolded monorepos; so far, no luck. In the meantime, I took the liberty of running a task with verbose logging on our monorepo that shows the issue. The command I ran was a simple TypeScript "linting" command that only has the "^topo" task as a dependency:

```shell
TURBO_TEAM=my-team TURBO_TOKEN=my-token TURBO_API=https://my-api-endpoint npm exec turbo -- typecheck -vv --filter='@client-project/util-logger' --remote-only
```

Here's the sanitised log output from the monorepo where it does not work:
I'll continue the investigation as to why this Bad file descriptor error occurs, but any pointers you see from the above logs are much appreciated, thank you.
Still prevalent in 1.12 (and there's no `--go-fallback` option).
Continuing to look into this. However, we have still been unable to reproduce it, and it looks like the … I do believe that the
@ajwhitehead88 did we use ducktors/turborepo-remote-cache for our remote cache in the end?
Yes, we use the Docker container for ours.
I've set up ducktors/turborepo-remote-cache locally and am still unable to reproduce the problem. However, from looking at the code, it appears the

Can you check logs from your cache, or your storage backend, or possibly instrument your cache to see what the underlying error is?
We are running into the same issue, on version

We've configured Vercel with

What steps can we follow to debug this?
### Description

Enable the feature for using native certs, not just the ones shipped with `turbo`. See [this readme](https://github.com/rustls/rustls-native-certs?tab=readme-ov-file#should-i-use-this-or-webpki-roots) for a comparison between these features. If you compare the Go implementations ([linux](https://go.dev/src/crypto/x509/root_linux.go), [macos](https://go.dev/src/crypto/x509/root_darwin.go), [windows](https://go.dev/src/crypto/x509/root_windows.go)), this gets us closer to that behavior. Both `webpki-roots` and `rustls-native-certs` can be used at the same time, and [both sources](https://docs.rs/reqwest/latest/src/reqwest/async_impl/client.rs.html#465) will be added to the [client when built](https://docs.rs/reqwest/latest/src/reqwest/async_impl/client.rs.html#482). I believe this should address #7317 and some reports in #6765.

### Testing Instructions

Verified that the new build still works with Vercel Remote Cache. Given that this feature is additive, I don't expect us to lose any functionality.

Closes TURBO-2333
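One practical consequence worth noting: `rustls-native-certs` honors the OpenSSL-style `SSL_CERT_FILE`/`SSL_CERT_DIR` overrides when loading the native store, so a turbo build with this feature should let you point at a custom root CA bundle without modifying system trust stores. A sketch, with a placeholder path (whether your turbo build passes the env through is an assumption — verify against your version):

```shell
# Placeholder path to your custom root CA bundle (PEM format).
export SSL_CERT_FILE="$HOME/certs/custom-root-ca.pem"

# Then run the build against the self-hosted cache, e.g.:
#   npx turbo run build --remote-only
echo "CA bundle override: $SSL_CERT_FILE"
```

This would be the lightest-touch workaround for the custom-root-CA reports in this thread, if it holds up.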
Following up on #6765 (comment), I wanted to test "tcp connect error: Bad file descriptor (os error 9)" more, but I have a very limited understanding of it. Suspecting there are ignored files in my original workspace that may affect this, I cleaned up the workspace using

Not quite understanding if the daemon has anything to do with this, I tried

Any other ideas what to look into before I try to delete the entire original workspace and re-clone the repository? Is there a corrupted var or tmp folder that turbo relies on?
@attila There is a config file at the XDG_CONFIG_DIR (on macOS, it's under …). Glad to hear the clean checkout is working, though. I wouldn't spend too much time debugging beyond that. This issue has collected a few different problems, and I don't think anyone else has reported this specific variant.
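For anyone else hunting for that file, the XDG-style path should resolve roughly like this. A sketch — the `turborepo/config.json` filename is an assumption, so verify what actually exists on your machine before deleting anything:

```shell
# XDG resolution on Linux: $XDG_CONFIG_HOME if set, else ~/.config.
# On macOS the equivalent lives under ~/Library/Application Support.
# The turborepo/config.json name is an assumption — check locally.
CFG="${XDG_CONFIG_HOME:-$HOME/.config}/turborepo/config.json"
cat "$CFG" 2>/dev/null || echo "no config at $CFG"
```

If a stale token lives there, it can shadow `TURBO_TOKEN` from the environment depending on precedence, which would explain a workspace-specific failure.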
I've encountered this issue only when using ducktors/turborepo-remote-cache with S3. We actually unhooked S3 just to get around the issue temporarily.
Are there any alternatives to ducktors/turborepo-remote-cache that don't have this issue?
I'm seeing this as well; 1.10.16 works but newer versions do not. Also using https://github.com/ducktors/turborepo-remote-cache
Reporting that I'm seeing the same error on the current latest turbo, v1.13.3. With
Seeing the same behaviour with the ducktors implementation of the remote cache. I did find a workaround though, so it seems like a bug in turbo.

```json
{
  "teamid": "teamName",
  "token": "tokenHere",
  "apiurl": "https://somelambda.lambda-url.us-east-1.on.aws"
}
```

Doesn't work:
Server error:

```json
{
  "severity": "WARNING",
  "level": 40,
  "time": 1715731148245,
  "pid": 8,
  "hostname": "169.254.75.181",
  "reqId": "kthNhXIVSQGbtCCrFlxT8w-1",
  "data": null,
  "isBoom": true,
  "isServer": false,
  "output": {
    "statusCode": 400,
    "payload": {
      "statusCode": 400,
      "error": "Bad Request",
      "message": "querystring should have required property 'teamId'"
    },
    "headers": {}
  },
  "stack": "Error: querystring should have required property 'teamId'\n at Object.handler (/var/task/index.js:909873:40)\n at preHandlerCallback (/var/task/index.js:4200:42)\n at preValidationCallback (/var/task/index.js:4188:9)\n at handler2 (/var/task/index.js:4158:11)\n at handleRequest (/var/task/index.js:4121:9)\n at runPreParsing (/var/task/index.js:41437:9)\n at next (/var/task/index.js:3853:11)\n at handleResolve (/var/task/index.js:3868:9)",
  "type": "Error",
  "message": "querystring should have required property 'teamId'"
}
```

Does work:
So it seems like there's something funky around `teamid` vs `team`, as adding
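If the truncated workaround above is passing the team on the command line, the working invocation would look roughly like this. A sketch — `--api`, `--team`, and `--token` are turbo's remote-cache CLI flags, and the values are placeholders matching the config shown above:

```shell
# Passing the team explicitly on the CLI means turbo sends it to the cache
# as a query parameter (teamId or slug, depending on the value), which is
# what the server-side "querystring should have required property 'teamId'"
# validation above is complaining about.
npx turbo run build \
  --api="https://somelambda.lambda-url.us-east-1.on.aws" \
  --team="teamName" \
  --token="tokenHere"
```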
I tried
On Next.js 14.3.0-canary.63.
I wonder if the third-party cache connection code is a bit different from the Vercel one. On turbo v1.13.3 I ended up just moving everything to environment variables and removing the config.json. This makes it work for me without having to adjust every turbo CLI call. In `~/.zshrc`:

```shell
export TURBO_API=https://somelambda.lambda-url.us-east-1.on.aws
export TURBO_TEAM=name
export TURBO_TOKEN=token
...
```
I was able to fix this error by creating a new token and making sure the scope was set to the correct team. I think I'd somehow messed that up earlier. Many thanks to @chris-olszewski from the Turborepo team for his help! ❤️
This is a bug. We would appreciate reproductions if you have one to provide. 🙏
Discussed in #6740
Originally posted by dcantu96 December 7, 2023
Hello, I just updated from 1.10.16 to 1.11.0 and started to see the warning below when I ran a turbo command. Some other helpful notes: our company hosts the remote cache servers, and the TURBO_TOKEN is set in the root `.env` file. It's been working without warnings until now.