# Feature: Retries due to periodic failure of underlying docker commands (ex. `rm`)? (#538)
## Comments
Are you running your tests concurrently? I wouldn't be surprised if there are race conditions within the `Cli` client.
Ah, so if this is a known issue, then is the way to resolve it just to add some documentation recommending a switch to the experimental HTTP client method for now?
I didn't say that it is a known issue, but you can try the experimental client to narrow down which component causes the issue.
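For context, the experimental client mentioned here was the HTTP-based `clients::Http`, gated behind the `experimental` cargo feature before the revamp below. A minimal sketch of trying it, with the version number and API details taken from that pre-revamp era and therefore best treated as assumptions:

```rs
// Cargo.toml (assumed): testcontainers = { version = "0.15", features = ["experimental"] }
use testcontainers::{clients, GenericImage};

#[tokio::main]
async fn main() {
    // The experimental client drives the Docker API over HTTP (via bollard)
    // instead of shelling out to the `docker` CLI, so a passing run here helps
    // narrow down whether the flakiness lives in the CLI layer.
    let docker = clients::Http::default();
    let _container = docker.run(GenericImage::new("redis", "latest")).await;
    // ... exercise the container here ...
}
```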
Quite a large refactoring as part of the project revamp #563, and also the long-awaited refactoring #386. The API is now really simple, e.g.:

```rs
let container = GenericImage::new("redis", "latest")
    .with_exposed_port(6379)
    .with_wait_for(WaitFor::message_on_stdout("Ready to accept connections"))
    .start();
```

I find this new API much easier to use; it solves a lot of problems and seems flexible enough to be extended. It also works regardless of the tokio runtime flavor (multi-thread vs current-thread). The sync API is still available, under the `blocking` feature (just like `reqwest` does). From a maintainer's perspective this also simplifies the code: we don't have to worry about two different clients and their differences.

### Docker host resolution

The host is resolved in the following order:

1. Docker host from the `tc.host` property in the `~/.testcontainers.properties` file.
2. `DOCKER_HOST` environment variable.
3. Docker host from the `docker.host` property in the `~/.testcontainers.properties` file.
4. Otherwise, the default Docker socket is returned.

### Notes

- MSRV was bumped to `1.70` in order to use `std::sync::OnceLock`. This should NOT be a problem; tests are usually executed on more recent versions (also see [this ref](https://github.com/testcontainers/testcontainers-rs/pull/503/files#r1242651354)).
- The `Cli` client is removed; instead we provide sync (under the `blocking` feature) and async impls based on the HTTP client (bollard).
- Tested with [modules](https://github.com/testcontainers/testcontainers-rs-modules-community).

## Migration guide

- Sync API migration (`Cli` client) — see the sketch after this list
  - Add the `blocking` feature
  - Drop all usages of `clients::Cli`
  - Add `use testcontainers::runners::SyncRunner;`
  - Replace `client.run(image)` with `image.start()`
- Async API migration (`Http` client)
  - Remove the `experimental` feature
  - Drop all usages of `clients::Http`
  - Add `use testcontainers::runners::AsyncRunner;`
  - Replace `client.run(image)` with `image.start()`

## References

Closes #386 Closes #326 Closes #475 Closes #508 Closes #392 Closes #561 Closes #559 Closes #564 Closes #538 Closes #507 Closes #89 Closes #198
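A minimal sketch of what the sync migration ends up looking like, following the PR's own example above; the version number, the non-`Result` return of `start()`, and the port-lookup call are assumptions based on the API at the time:

```rs
// Cargo.toml (assumed): testcontainers = { version = "0.16", features = ["blocking"] }
use testcontainers::{core::WaitFor, runners::SyncRunner, GenericImage};

fn main() {
    // Previously: `let docker = clients::Cli::default(); docker.run(image)`.
    // Now the image itself is the entry point; `SyncRunner::start` replaces `client.run`.
    let container = GenericImage::new("redis", "latest")
        .with_exposed_port(6379)
        .with_wait_for(WaitFor::message_on_stdout("Ready to accept connections"))
        .start();

    // Port mapping lookups still work on the returned container handle.
    println!("redis mapped to host port {}", container.get_host_port_ipv4(6379));
}
```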
As always, thanks for the awesome library; it's been incredibly useful for testing.

I've been doing some stress-testing on my test suite (i.e. running the tests continuously until one fails) lately and found that sometimes the `Cli` client actually fails to perform some lower-level `docker` CLI commands.

The first failure I encountered was a failure to create a container, but unfortunately I didn't have `--nocapture` on, so I couldn't get the output. After repeating the process I found that I got a failure. I've anonymized the details of the project and test suite, but it should be clear that the failure was inside (but not the fault of) `testcontainers`.

Looking at the output of my `docker` systemd service, I see a failure to write stderr (emphasis via spacing added below). After this error came up I restarted the test suite and it worked just fine; the lower-level failure seems transient.

Does it make sense to add error detection and/or a dumb retry policy at this level in the underlying client? I'm not sure if there's a better way to handle this, and unfortunately I didn't increase the log level on `docker`, so it wasn't more specific about why it failed (like it has been for others).
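To make the "dumb retry policy" concrete, here is a minimal sketch of what such a wrapper could look like. Everything in it (the helper name, the fixed backoff, the attempt count, the `docker rm` call site) is hypothetical and not part of testcontainers:

```rs
use std::{thread, time::Duration};

/// Hypothetical helper: retry a flaky operation (e.g. spawning `docker rm`)
/// a few times with a fixed backoff before giving up.
fn with_retries<T, E: std::fmt::Debug>(
    attempts: u32,
    backoff: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    assert!(attempts > 0, "need at least one attempt");
    let mut last_err = None;
    for attempt in 1..=attempts {
        match op() {
            Ok(value) => return Ok(value),
            Err(err) => {
                eprintln!("attempt {attempt}/{attempts} failed: {err:?}");
                last_err = Some(err);
                thread::sleep(backoff);
            }
        }
    }
    Err(last_err.expect("attempts > 0 guarantees at least one error"))
}

fn main() {
    // Illustrative call site: wrap the shell-out that sometimes fails transiently.
    let result = with_retries(3, Duration::from_millis(250), || {
        std::process::Command::new("docker")
            .args(["rm", "-f", "my-container"])
            .status()
            .map_err(|e| format!("spawn failed: {e}"))
            .and_then(|s| if s.success() { Ok(()) } else { Err(format!("exit: {s}")) })
    });
    println!("{result:?}");
}
```

A real policy would presumably retry only on errors known to be transient (e.g. daemon I/O failures) rather than on every non-zero exit, so treat this purely as a shape for the discussion.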