
Single container start for whole integration test file #707

Open · Tockra opened this issue Jul 24, 2024 · 11 comments

@Tockra commented Jul 24, 2024

Hello,

I'm writing integration tests in Rust, where I want to test the HTTP endpoints of my application. For this purpose, I need to mock the Keycloak login server, as I did not build a test mode into my application. To achieve this, I decided to start a real Keycloak server for my integration tests to ensure everything works as expected. I did the same for the database.

To do this, I need to start a Docker container with a specific image and run some setup scripts. However, starting a Docker container for each test is time-consuming; the Keycloak auth container alone needs about 10 seconds before it accepts connections, so I want to start a single auth container for all tests in a file. That way, 10 test methods only wait once for Keycloak instead of 10 times (10 × 10 seconds = 100 seconds of test execution).

Previously, I had a solution that worked for a long time but no longer does:

use ctor::{ctor, dtor};
use lazy_static::lazy_static;
use log::debug;
use mongodb::{
    bson::{doc, oid::ObjectId},
    options::{ClientOptions, UpdateModifications},
    Client, Collection,
};
use serde::Serialize;
use std::{env, thread};
use testcontainers::runners::AsyncRunner;
use testcontainers::{core::Mount, ContainerRequest, GenericImage, ImageExt};
use tokio::sync::Notify;

use common::{channel, execute_blocking, Channel, ContainerCommands};
#[path = "../common/mod.rs"]
pub mod common;

lazy_static! {
    static ref MONGODB_IN: Channel<ContainerCommands> = channel();
    static ref MONGODB_CONNECTION_STRING: Channel<String> = channel();
    static ref RUN_FINISHED: Notify = Notify::new();
}

#[ctor]
fn on_startup() {
    thread::spawn(|| {
        execute_blocking(start_mongodb());
        // This needs to be here, otherwise the MongoDB container's drop is not run before the application stops
        RUN_FINISHED.notify_one();
    });
}

#[dtor]
fn on_shutdown() {
    execute_blocking(clean_up());
}

async fn clean_up() {
    MONGODB_IN.tx.send(ContainerCommands::Stop).unwrap();

    // Wait until Docker is successfully stopped
    RUN_FINISHED.notified().await;
    debug!("MongoDB stopped.")
}

async fn start_mongodb() {
    let mongodb = get_mongodb_image().start().await.unwrap();
    let port = mongodb.get_host_port_ipv4(27017).await.unwrap();
    debug!("MongoDB started on port {}", port);
    let mut rx = MONGODB_IN.rx.lock().await;
    while let Some(command) = rx.recv().await {
        debug!("Received container command: {:?}", command);
        match command {
            ContainerCommands::FetchConnectionString => MONGODB_CONNECTION_STRING
                .tx
                .send(format!("mongodb://localhost:{}", port))
                .unwrap(),
            ContainerCommands::Stop => {
                mongodb.stop().await.unwrap();
                rx.close();
            }
        }
    }
}

fn get_mongodb_image() -> ContainerRequest<GenericImage> {
    let mount = Mount::bind_mount(
        format!(
            "{}/../../../../tests/docker-setup/mongo-init.js",
            get_current_absolute_path()
        ),
        "/docker-entrypoint-initdb.d/mongo-init.js",
    );
    GenericImage::new("mongo", "7.0.7")
        .with_cmd(["mongod", "--replSet", "rs0", "--bind_ip", "0.0.0.0"])
        .with_mount(mount)
}

fn get_current_absolute_path() -> String {
    match env::current_exe() {
        Ok(path) => {
            let path_str = path.to_string_lossy().into_owned();
            path_str
        }
        Err(_) => "/".to_string(),
    }
}

pub async fn get_mongodb_connection_string() -> String {
    MONGODB_IN
        .tx
        .send(ContainerCommands::FetchConnectionString)
        .unwrap();
    MONGODB_CONNECTION_STRING
        .rx
        .lock()
        .await
        .recv()
        .await
        .unwrap()
}

This code is placed in the db_container module. When I use mod db_container in my integration test files, it sets up and starts the container for all tests in the current file. Using get_mongodb_connection_string(), I can get the connection string to feed into my application.
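For context, a test in such a file consumes the shared container roughly like this (a minimal sketch, not my exact test; the ping check is only illustrative and assumes the mongodb 2.x driver API imported above, plus however db_container is wired into the test crate):

mod db_container;

use mongodb::{bson::doc, Client};

#[tokio::test]
async fn can_reach_shared_mongodb() {
    // Waits until the single MongoDB container started in the #[ctor] hook is up.
    let uri = db_container::get_mongodb_connection_string().await;
    let client = Client::with_uri_str(&uri).await.unwrap();

    // A cheap way to verify the connection actually works.
    client
        .database("admin")
        .run_command(doc! { "ping": 1 }, None)
        .await
        .unwrap();
}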

However, I now receive this error in the dtor:

thread '<unnamed>' panicked at library/std/src/thread/mod.rs:741:19:
use of std::thread::current() is not possible after the thread's local data has been destroyed
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
fatal runtime error: failed to initiate panic, error 5

The problem appears to be the clean_up() function, which causes this error even when its body is empty.

I'm reaching out to see if anyone using the testcontainers crate has a smart solution for this issue. Any insights or alternative approaches would be greatly appreciated!

Thank you!

@DDtKey (Collaborator) commented Jul 24, 2024

Hi @Tockra 👋

I think a proper solution requires support for #577, which would allow defining static/OnceCell containers.

For now I have one workaround in mind; it's not ideal, but it should work. I'll share it a bit later.

You could also consider a custom test harness (or an existing one), e.g.: https://www.infinyon.com/blog/2021/04/rust-custom-test-harness/
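To illustrate the custom-harness route (a rough sketch only; the test target name, the image, and the helper functions are placeholders): with harness = false in Cargo.toml, the test file gets a plain main, so the container lives exactly as long as main runs and is dropped normally at the end.

// In Cargo.toml:
//
//   [[test]]
//   name = "integration"
//   harness = false
//
// tests/integration.rs:
use testcontainers::{runners::AsyncRunner, ContainerAsync, GenericImage};

#[tokio::main]
async fn main() {
    // Start the shared container once for the whole file.
    let mongodb = GenericImage::new("mongo", "7.0.7").start().await.unwrap();

    // Run the checks while the container is alive.
    test_one(&mongodb).await;
    test_two(&mongodb).await;

    // `mongodb` is dropped here, before the process exits.
}

async fn test_one(_c: &ContainerAsync<GenericImage>) { /* ... */ }
async fn test_two(_c: &ContainerAsync<GenericImage>) { /* ... */ }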

@Tockra (Author) commented Jul 24, 2024

Okay, thank you. I'm looking forward to your solution.

Currently I have this solution:

use testcontainers::{ContainerAsync, GenericImage};
use tokio::sync::OnceCell;

static KEYCLOAK: OnceCell<(ContainerAsync<GenericImage>, String)> = OnceCell::const_new();

async fn start_auth_container() -> &'static (ContainerAsync<GenericImage>, String) {
    KEYCLOAK
        .get_or_init(|| async { start_keycloak_container().await })
        .await
}

#[tokio::test]
async fn test1() {
    let keycloak = start_auth_container().await;

    println!("{:?}", keycloak.1);
    //let app = start_app("some").await;
}

#[tokio::test]
async fn test2() {
    let keycloak = start_auth_container().await;

    println!("{:?}", keycloak.1);
    //let app = start_app("some").await;
}

#[tokio::test]
async fn test3() {
    let keycloak = start_auth_container().await;

    println!("{:?}", keycloak.1);
    //let app = start_app("some").await;
}

But the problem is that the containers do not stop after the test suite finishes, which is very annoying.

@DDtKey (Collaborator) commented Jul 24, 2024

As a quick workaround, you might consider using something like this (your code was used for demonstration):

use std::sync::{Arc, OnceLock, Weak};

use testcontainers::{ContainerAsync, GenericImage};
use tokio::sync::Mutex;

static KEYCLOAK: OnceLock<Mutex<Weak<(ContainerAsync<GenericImage>, String)>>> = OnceLock::new();

async fn start_auth_container() -> Arc<(ContainerAsync<GenericImage>, String)> {
    let mut guard = KEYCLOAK
        .get_or_init(|| Mutex::new(Weak::new()))
        .lock()
        .await;

    if let Some(container) = guard.upgrade() {
        container
    } else {
        let container = Arc::new(start_keycloak_container().await);
        *guard = Arc::downgrade(&container);

        container
    }
}

The main points:

  • it uses Weak so that Drop is not prevented from being called
  • we initialize a new instance only if there isn't one already in use

The issue with this approach: it depends on the parallelism of the tests; when they are executed sequentially, a new container will be created each time.
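For illustration, a usage sketch (a hypothetical test; it assumes the tuple's second element carries something like the base URL, as in the earlier snippet): each test holds its own Arc for the duration of the test, so concurrently running tests share one container, and it is dropped once the last Arc goes away.

#[tokio::test]
async fn test_login() {
    // Upgrades the shared Weak, or starts a fresh container if none is alive.
    let keycloak = start_auth_container().await;
    let base_url = &keycloak.1;

    // ... drive the application against `base_url` ...
    assert!(!base_url.is_empty());
}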

@Tockra (Author) commented Jul 24, 2024

Hm, okay. As a start this looks nice, but I don't really understand what you mean by "it depends on the parallelism of the tests".
Currently I have 3 tests annotated with tokio::test, and each of these tests starts a new Docker container. That is far from my ideal solution, because I explicitly want only one container!?

@DDtKey (Collaborator) commented Jul 24, 2024

But I don't really understand what you mean by "it depends on the parallelism of the tests"

I meant that if these tests run concurrently, they will use only one instance of the container with the proposed solution, because there is an active Arc and the Weak can be upgraded.

But on the other hand, if you run with --test-threads=1, or the tests for some reason have long pauses between them, they will most likely start a new container each time.

But usually (and by default) tests are executed in parallel, so this solution should be suitable for most cases, until the resource reaper is completed.

@rosvit commented Aug 15, 2024

Hello, I've run into the same issue and I can confirm that @DDtKey's solution using OnceLock/Weak works for parallel tests.

Regarding the first proposed solution using OnceCell:

But the problem is, that the containers does not stop after the test suite execution, which is very annoying.

If you need serial test execution using #[serial], a workaround could be to forcibly stop and remove the Docker container in #[dtor]:

static CONTAINER: OnceCell<ContainerAsync<GenericImage>> = OnceCell::const_new();

// get_or_init of CONTAINER as you have in your snippet

#[dtor]
fn on_shutdown() {
    let container_id = CONTAINER.get().map(|c| c.id())
        .expect("failed to get container id");
    std::process::Command::new("docker")
        .args(["container", "rm", "-f", container_id])
        .output()
        .expect("failed to stop testcontainer");
}
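For completeness, the elided get_or_init piece might look like this (a sketch mirroring the earlier OnceCell snippet; start_mongodb_container() is a hypothetical helper that builds and starts the image):

async fn get_container() -> &'static ContainerAsync<GenericImage> {
    CONTAINER
        .get_or_init(|| async { start_mongodb_container().await })
        .await
}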

@Tockra (Author) commented Aug 15, 2024

Thank you, rosvit. Your solution is perfect for my use case: since no async call is involved here, everything works fine.
Earlier I ran into mmastrac/rust-ctor#304 with dtor, but without the tokio async stuff it works ;)

@symbx commented Aug 25, 2024

Another workaround is to make the cleanup synchronous, without manual calls to docker.

  • Add sync feature to tokio:
tokio = { version = "1.39", features = ["rt-multi-thread", "macros", "sync"] }
  • Replace unbounded channels with bounded:
fn channel<T>() -> Channel<T> {
    let (tx, rx) = mpsc::channel(32);
    Channel { tx, rx: Mutex::new(rx) }
}
  • Make the cleanup method sync:
fn clean_up() {
    SRV_INPUT.tx.blocking_send(ContainerCommands::Stop).unwrap();
    SRC_STOP.rx.blocking_lock().blocking_recv().unwrap();
}
  • And finally, make the destructor sync:
#[ctor::dtor]
fn on_destroy() {
    clean_up();
}
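Put together, the whole pattern might look roughly like this (an assembled sketch based on the original post plus the changes above; names like SRV_INPUT/SRV_STOP and the channel capacity of 32 are arbitrary, and the image setup is simplified):

use std::thread;

use ctor::{ctor, dtor};
use lazy_static::lazy_static;
use testcontainers::{runners::AsyncRunner, GenericImage};
use tokio::sync::{mpsc, Mutex};

#[derive(Debug)]
enum ContainerCommands {
    FetchConnectionString,
    Stop,
}

struct Channel<T> {
    tx: mpsc::Sender<T>,
    rx: Mutex<mpsc::Receiver<T>>,
}

// Bounded channels, so the blocking_* methods are available on both ends.
fn channel<T>() -> Channel<T> {
    let (tx, rx) = mpsc::channel(32);
    Channel { tx, rx: Mutex::new(rx) }
}

lazy_static! {
    static ref SRV_INPUT: Channel<ContainerCommands> = channel();
    static ref CONNECTION_STRING: Channel<String> = channel();
    static ref SRV_STOP: Channel<()> = channel();
}

#[ctor]
fn on_startup() {
    // Drive the container on a dedicated runtime in a background thread.
    thread::spawn(|| {
        tokio::runtime::Builder::new_multi_thread()
            .enable_all()
            .build()
            .unwrap()
            .block_on(run_mongodb());
    });
}

#[dtor]
fn on_shutdown() {
    // Fully synchronous: no tokio context is needed inside the destructor.
    SRV_INPUT.tx.blocking_send(ContainerCommands::Stop).unwrap();
    SRV_STOP.rx.blocking_lock().blocking_recv().unwrap();
}

async fn run_mongodb() {
    let mongodb = GenericImage::new("mongo", "7.0.7").start().await.unwrap();
    let port = mongodb.get_host_port_ipv4(27017).await.unwrap();

    let mut rx = SRV_INPUT.rx.lock().await;
    while let Some(command) = rx.recv().await {
        match command {
            ContainerCommands::FetchConnectionString => CONNECTION_STRING
                .tx
                .send(format!("mongodb://localhost:{}", port))
                .await
                .unwrap(),
            ContainerCommands::Stop => {
                mongodb.stop().await.unwrap();
                rx.close();
                // Tell the destructor that the container is really gone.
                SRV_STOP.tx.send(()).await.unwrap();
            }
        }
    }
}

// Tests still fetch the connection string asynchronously.
pub async fn get_mongodb_connection_string() -> String {
    SRV_INPUT
        .tx
        .send(ContainerCommands::FetchConnectionString)
        .await
        .unwrap();
    CONNECTION_STRING.rx.lock().await.recv().await.unwrap()
}

The key point is that the cleanup no longer touches the async runtime from the destructor, which avoids the thread-local panic from the original post.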

@Tockra (Author) commented Aug 26, 2024

fn clean_up() {
    SRV_INPUT.tx.blocking_send(ContainerCommands::Stop).unwrap();
    SRC_STOP.rx.blocking_lock().blocking_recv().unwrap();
}

What are SRV_INPUT and SRC_STOP?

@symbx commented Aug 26, 2024

It's the same as
- MONGODB_IN
- RUN_FINISHED

In my case RUN_FINISHED uses the same channel type; I didn't check whether blocking can be used with tokio::sync::Notify.

lloydmeta added a commit to lloydmeta/miniaturs that referenced this issue Oct 31, 2024
Based on observation, localstack containers weren't being shut down
properly.

It turns out static bindings don't get shut down, so we need
testcontainers/testcontainers-rs#707 (comment)
until testcontainers/testcontainers-rs#577.

Signed-off-by: lloydmeta <[email protected]>
@lloydmeta (Contributor) commented

Thanks @symbx, I built on your hints and on what @Tockra had in the original post, and it works like a charm.

A complete working example is at https://github.com/lloydmeta/miniaturs/blob/d244760f5039a15450f5d4566ffe52d19d427771/server/src/test_utils/mod.rs#L12-L113
