Single container start for whole integration test file #707
Hi @Tockra 👋 I think the proper solution requires support for #577. For now I have one workaround in mind; not ideal, but it should work. I'll share it a bit later. You could also consider a custom test harness (or a ready-made one), e.g.: https://www.infinyon.com/blog/2021/04/rust-custom-test-harness/
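For illustration, the custom-harness approach could look roughly like this. This is a sketch, not from the thread: it assumes `harness = false` is set for the test target in `Cargo.toml`, a recent `testcontainers` version with the `AsyncRunner` API, and a hypothetical `start_keycloak_container` helper.

```rust
// tests/integration.rs
// Cargo.toml:
//   [[test]]
//   name = "integration"
//   harness = false
use testcontainers::{runners::AsyncRunner, ContainerAsync, GenericImage};

// Hypothetical helper; replace with your real Keycloak setup.
async fn start_keycloak_container() -> ContainerAsync<GenericImage> {
    GenericImage::new("quay.io/keycloak/keycloak", "latest")
        .start()
        .await
        .expect("failed to start keycloak")
}

#[tokio::main]
async fn main() {
    // With `harness = false`, this `main` replaces the default libtest
    // runner, so the container is started exactly once per test binary.
    let keycloak = start_keycloak_container().await;
    println!("keycloak container id: {}", keycloak.id());

    // Call the test functions explicitly against the shared container here.

    // `keycloak` is dropped at the end of `main`, so the container stops.
}
```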
Okay, thank you. I'm looking forward to your solution. Currently I have this solution:

```rust
use testcontainers::{ContainerAsync, GenericImage};
use tokio::sync::OnceCell;

static KEYCLOAK: OnceCell<(ContainerAsync<GenericImage>, String)> = OnceCell::const_new();

async fn start_auth_container() -> &'static (ContainerAsync<GenericImage>, String) {
    KEYCLOAK
        .get_or_init(|| async { start_keycloak_container().await })
        .await
}

#[tokio::test]
async fn test1() {
    let keycloak = start_auth_container().await;
    println!("{:?}", keycloak.1);
    //let app = start_app("some").await;
}

#[tokio::test]
async fn test2() {
    let keycloak = start_auth_container().await;
    println!("{:?}", keycloak.1);
    //let app = start_app("some").await;
}

#[tokio::test]
async fn test3() {
    let keycloak = start_auth_container().await;
    println!("{:?}", keycloak.1);
    //let app = start_app("some").await;
}
```

But the problem is that the containers do not stop after the test suite finishes, which is very annoying.
As a quick workaround, you might consider using something like this (your code was used for demonstration):

```rust
use std::sync::{Arc, OnceLock, Weak};

use testcontainers::{ContainerAsync, GenericImage};
use tokio::sync::Mutex;

static KEYCLOAK: OnceLock<Mutex<Weak<(ContainerAsync<GenericImage>, String)>>> = OnceLock::new();

async fn start_auth_container() -> Arc<(ContainerAsync<GenericImage>, String)> {
    let mut guard = KEYCLOAK
        .get_or_init(|| Mutex::new(Weak::new()))
        .lock()
        .await;

    if let Some(container) = guard.upgrade() {
        // Some test still holds a strong reference: reuse its container.
        container
    } else {
        // No live container: start a new one and keep only a weak handle,
        // so it is stopped once the last test drops its Arc.
        let container = Arc::new(start_keycloak_container().await);
        *guard = Arc::downgrade(&container);
        container
    }
}
```

The main caveat of this approach: it depends on the parallelism of the tests. When they are executed sequentially, a new container will be created each time.
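To make the sharing behaviour concrete, a hypothetical usage sketch (not from the thread): each test holds its `Arc` for the whole test body, so overlapping tests reuse one container, and the container stops once the last strong reference is dropped.

```rust
#[tokio::test]
async fn test1() {
    // Strong reference lives until the end of this test.
    let keycloak = start_auth_container().await;
    println!("{:?}", keycloak.1);
}

#[tokio::test]
async fn test2() {
    // If test1 is still running, the Weak upgrade succeeds and the
    // same container is reused instead of starting a new one.
    let keycloak = start_auth_container().await;
    println!("{:?}", keycloak.1);
}
```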
Hm, okay. For a start this looks nice. But I don't really understand what you mean by "it depends on the parallelism of the tests".
I meant that if these tests run concurrently, they will use only one instance of the container with the proposed solution, because there is an active `Arc` reference for as long as at least one test is running. But on the other hand, if you run them sequentially (e.g. with `--test-threads=1`), the last `Arc` is dropped between tests and a new container is created each time. But usually (and by default) tests are executed in parallel, so this solution should be suitable for most cases, until the resource reaper (#577) is completed.
Hello, I've run into the same issue and I can confirm that @DDtKey's solution using `Weak` works when the tests are executed in parallel. Regarding the first proposed solution using `OnceCell`: since the static itself is never dropped, the container keeps running.

If you need serial test execution, you can force-remove the container in a `dtor` destructor (from the `ctor` crate):

```rust
use testcontainers::{ContainerAsync, GenericImage};
use tokio::sync::OnceCell;

static CONTAINER: OnceCell<ContainerAsync<GenericImage>> = OnceCell::const_new();

// get_or_init of CONTAINER as you have in your snippet

#[dtor]
fn on_shutdown() {
    let container_id = CONTAINER
        .get()
        .map(|c| c.id())
        .expect("failed to get container id");
    // Async cleanup isn't possible in a destructor, so remove the
    // container through the Docker CLI instead.
    std::process::Command::new("docker")
        .args(["container", "rm", "-f", container_id])
        .output()
        .expect("failed to stop testcontainer");
}
```
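For completeness, the omitted `get_or_init` part could look like this (a sketch based on the earlier snippets; `start_keycloak_container` stands in for the user's own helper):

```rust
async fn container() -> &'static ContainerAsync<GenericImage> {
    CONTAINER
        .get_or_init(|| async { start_keycloak_container().await })
        .await
}
```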
Thank you, rosvit. Your solution is perfect for my use case. Because we don't use an async call here, everything works fine.
Another workaround is to make the cleanup synchronous, without manual calls to `docker`:

```toml
tokio = { version = "1.39", features = ["rt-multi-thread", "macros", "sync"] }
```

```rust
fn channel<T>() -> Channel<T> {
    let (tx, rx) = mpsc::channel(32);
    Channel { tx, rx: Mutex::new(rx) }
}

fn clean_up() {
    // Tell the task that owns the container to stop it...
    SRV_INPUT.tx.blocking_send(ContainerCommands::Stop).unwrap();
    // ...and block until it confirms the container is gone.
    SRC_STOP.rx.blocking_lock().blocking_recv().unwrap();
}

#[ctor::dtor]
fn on_destroy() {
    clean_up();
}
```
What are `SRV_INPUT` and `SRC_STOP`?
They are statics of the same `Channel` type built by `channel()` above: `SRV_INPUT` carries commands to the task that owns the container, and `SRC_STOP` signals back that it has stopped. In my case `RUN_FINISHED` uses the same channel type; I didn't check the possibility of using blocking calls with `tokio::sync::Notify`.
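Filling in the pieces the fragments leave out, the whole pattern might look roughly like this. This is a sketch under stated assumptions: the statics are `LazyLock`s of the `Channel` type from the snippet, a dedicated thread owns a Tokio runtime plus the container, and `start_keycloak_container` is a hypothetical helper.

```rust
use std::sync::LazyLock;

use testcontainers::{runners::AsyncRunner, ContainerAsync, GenericImage};
use tokio::sync::{mpsc, Mutex};

enum ContainerCommands {
    Stop,
}

struct Channel<T> {
    tx: mpsc::Sender<T>,
    rx: Mutex<mpsc::Receiver<T>>,
}

fn channel<T>() -> Channel<T> {
    let (tx, rx) = mpsc::channel(32);
    Channel { tx, rx: Mutex::new(rx) }
}

// Commands for the thread that owns the container.
static SRV_INPUT: LazyLock<Channel<ContainerCommands>> = LazyLock::new(channel);
// Confirmation that the container has been dropped.
static SRC_STOP: LazyLock<Channel<()>> = LazyLock::new(channel);

// Hypothetical helper; replace with the real Keycloak setup.
async fn start_keycloak_container() -> ContainerAsync<GenericImage> {
    GenericImage::new("quay.io/keycloak/keycloak", "latest")
        .start()
        .await
        .expect("failed to start keycloak")
}

// Called once from test setup: a dedicated thread owns the runtime and
// the container, so neither lives in a static that is never dropped.
fn spawn_container_owner() {
    std::thread::spawn(|| {
        let rt = tokio::runtime::Runtime::new().unwrap();
        rt.block_on(async {
            let mut container = Some(start_keycloak_container().await);
            let mut rx = SRV_INPUT.rx.lock().await;
            while let Some(cmd) = rx.recv().await {
                match cmd {
                    ContainerCommands::Stop => {
                        // Dropping the container stops and removes it.
                        drop(container.take());
                        SRC_STOP.tx.send(()).await.unwrap();
                        break;
                    }
                }
            }
        });
    });
}
```

With this in place, the `clean_up()` from the earlier snippet can safely block on `SRC_STOP` inside the `dtor`, because the destructor runs outside any Tokio runtime.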
Based on observation, localstack containers weren't being shut down properly. It turns out static bindings don't get shut down, so we need testcontainers/testcontainers-rs#707 (comment) until testcontainers/testcontainers-rs#577. Signed-off-by: lloydmeta <[email protected]>
Thanks @symbx, I built on your hints and what @Tockra had in the original post, and it works like a charm. A complete working example is at https://github.com/lloydmeta/miniaturs/blob/d244760f5039a15450f5d4566ffe52d19d427771/server/src/test_utils/mod.rs#L12-L113
Hello,
I'm writing integration tests in Rust, where I want to test the HTTP endpoints of my application. For this purpose, I need to mock the Keycloak login server, as I did not build a test mode into my application. To achieve this, I decided to start a real Keycloak server for my integration tests to ensure everything works as expected. I did the same for the database.

To do this, I need to start a Docker container with a specific image and run some setup scripts. However, starting a Docker container for each test is time-consuming; the Keycloak auth container alone needs about 10 seconds before it accepts connections. So I want to start a single auth container for all tests in a file: 10 test methods then wait only once for Keycloak instead of 10 times (10 × 10 seconds = 100 seconds of test execution).
Previously, I found a solution that worked for a long time but now does not:
This code is placed in the `db_container` module. When I use `mod db_container` in my integration test files, it sets up and starts the container for all tests in the current file. Using `get_mongodb_connection_string()`, I can get the connection string to feed into my application.

However, I now receive an error on `dtor`. The problem appears to be the `clean_up()` function, which causes this error even when its body is empty.

I'm reaching out to see if anyone using the `testcontainers` crate has a smart solution for this issue. Any insights or alternative approaches would be greatly appreciated! Thank you!