Singleton instance of a container #354

Closed
soulseekeer24 opened this issue Apr 10, 2022 · 16 comments · Fixed by #384

Comments

@soulseekeer24

Currently I am running multiple e2e tests that make use of some docker images. The problem is that each test starts its own containers, so I'm asking if there is a way in the current API to share the containers.

@soulseekeer24 changed the title from "Singleto instace of a container" to "Singleton instace of a container" on Apr 10, 2022
@thomaseizinger
Collaborator

The whole idea of testcontainers is to achieve test isolation, so no, there is no explicit way of sharing a container. You can achieve it via lazy_static if you really want to!

@yatesco

yatesco commented Jun 8, 2022

hi @thomaseizinger - I tried to do this (as an MSSQL container takes a few seconds to start and is memory expensive) but I ran into issues because the docker client itself isn't Sync, so the following doesn't compile:

```rust
lazy_static! {
    static ref DOCKER: clients::Cli = clients::Cli::default();
    static ref NODE: Container<'static, Mssql> = DOCKER.run(Mssql::default());
    static ref POOL: Pool<TiberiusConnectionManager> = {
        tokio::runtime::Runtime::new().unwrap().block_on(async {
            dbpool::spin_up_database_with_node(&NODE)
                .await
                .expect("cannot create MSSQL database")
        })
    };
}
```

gives:

```
error[E0277]: `(dyn core::container::Docker + 'static)` cannot be shared between threads safely
   --> integration/src/integration/mod.rs:104:5
    |
104 | /     lazy_static! {
105 | |         static ref DOCKER: clients::Cli = clients::Cli::default();
106 | |         static ref NODE: Container<'static, Mssql> = DOCKER.run(Mssql::default());
107 | |         static ref POOL: Pool<TiberiusConnectionManager> = {
...   |
113 | |         };
114 | |     }
    | |_____^ `(dyn core::container::Docker + 'static)` cannot be shared between threads safely
    |
    = help: the trait `Sync` is not implemented for `(dyn core::container::Docker + 'static)`
    = note: required because of the requirements on the impl of `Sync` for `Unique<(dyn core::container::Docker + 'static)>`
    = note: required because it appears within the type `Box<(dyn core::container::Docker + 'static)>`
    = note: required because it appears within the type `Container<'static, Mssql>`
note: required by a bound in `lazy_static::lazy::Lazy`
   --> /Users/coliny/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/inline_lazy.rs:19:20
    |
19  | pub struct Lazy<T: Sync>(Cell<Option<T>>, Once);
    |                    ^^^^ required by this bound in `lazy_static::lazy::Lazy`
    = note: this error originates in the macro `__lazy_static_create` (in Nightly builds, run with -Z macro-backtrace for more info)
```

To be clear - each test is run in isolation thanks to https://docs.rs/serial_test/latest/serial_test/

Do you have any suggestions?

@thomaseizinger
Collaborator

The issue itself can probably be fixed (need to add the Sync bound to the trait object) but what you are trying to do is a bit odd.

Testcontainers is designed to achieve test isolation.

If you want to have a single container for all tests, why not write yourself a shell script that starts the container, exposes a few env vars for connecting, runs all tests, and shuts it down again? It seems unnecessary to use a tool like testcontainers here.
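(For illustration, that wrapper could also be written in Rust rather than shell, driving the docker CLI through std::process::Command; the image, port, and env var names below are placeholders rather than anything this thread prescribes.)

```rust
// A sketch of the "start container, run tests, tear down" wrapper, assuming the
// docker CLI is on PATH. Image, ports, and env vars are placeholders.
use std::process::Command;

fn main() {
    // Start the database container once and capture its ID.
    let run = Command::new("docker")
        .args([
            "run", "-d",
            "-p", "1433:1433",
            "-e", "ACCEPT_EULA=Y",
            "-e", "SA_PASSWORD=Example_Passw0rd",
            "mcr.microsoft.com/mssql/server:2019-latest",
        ])
        .output()
        .expect("failed to run `docker run`");
    let container_id = String::from_utf8_lossy(&run.stdout).trim().to_string();

    // Run the whole test suite against it, passing connection details via env vars.
    let status = Command::new("cargo")
        .arg("test")
        .env("TEST_DB_HOST", "127.0.0.1")
        .env("TEST_DB_PORT", "1433")
        .status()
        .expect("failed to run `cargo test`");

    // Shut the container down again, whatever the test outcome was.
    let _ = Command::new("docker")
        .args(["rm", "-f", &container_id])
        .status();

    std::process::exit(status.code().unwrap_or(1));
}
```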

Containers can be expensive, but then I'd recommend writing fewer tests that depend on containers and not breaking test isolation by reusing a container for multiple tests.

This is a use case we are unlikely to support well.

@yatesco

yatesco commented Jun 8, 2022

thanks @thomaseizinger. You make good points. A persistent container with tests isolating by individual schemas might scale better here then.

(I'll have hundreds of these tests you see, and serialising them is killing performance, but so is spinning up a new MSSQL container).

@thomaseizinger
Collaborator

> (I'll have hundreds of these tests you see, and serialising them is killing performance, but so is spinning up a new MSSQL container).

Having hundreds of container-dependent tests seems excessive. What is your architecture/design if I may ask?

Without knowing much about the problem you are solving, this sounds like there is too much coupling of "logic worth testing" and the integration with MSSQL.

@yatesco

yatesco commented Jun 8, 2022

sure - it's for a large, data-intensive analytical tool with a custom query engine. So the "logic" is all about the database ;-)

I certainly could relax my testing strategy from "one fact per test" and have larger tests, that would also reduce the required number of schemas.

@yatesco

yatesco commented Jun 8, 2022

As a concrete example: I have a set of utilities, one of which is is_table_exists. That right there has two distinct tests with two distinct containers: one where the table exists and one where the table doesn't exist. They can easily and validly be rolled into a single test, even if it will keep me awake at night a little ;-)

@thomaseizinger
Collaborator

But the custom query engine produces SQL under the hood?

Perhaps a good enough testing strategy would be to make sure that the SQL generated for is_table_exists doesn't change?

In other words, you could look at the problem as a "compile custom queries to SQL" which can be tested in a more composable way without running the final queries.
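(As a rough illustration of that approach: a snapshot-style test that pins the generated SQL; generate_is_table_exists_sql is a hypothetical stand-in for whatever the query engine actually exposes.)

```rust
// Hypothetical codegen function standing in for the real query engine's API.
fn generate_is_table_exists_sql(table: &str) -> String {
    format!("SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '{table}'")
}

#[test]
fn is_table_exists_sql_does_not_change() {
    // Pin the generated SQL so any refactor that changes it fails loudly --
    // no container and no database connection required.
    assert_eq!(
        generate_is_table_exists_sql("users"),
        "SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'users'"
    );
}
```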

@yatesco

yatesco commented Jun 8, 2022

I've been down that road before, and there is a place for it, but you have the risk of "coupling your tests against your implementation".

But this is just one example. We also have our own "versioning" components (similar to Flyway) so our software intelligently updates itself, an in-house MSSQL-backed event queue, reporting, and so on, not to mention the actual component itself. In comparison there are 100,000s of (actual) unit tests; this is a pretty big beasty.

@thomaseizinger
Collaborator

Tricky!

Well, the actual issue you posted (the missing Sync bound) can be fixed I think. Just a matter of adding + Sync to the trait object. Let me know if you are interested in PR-ing this!
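(Conceptually, the change amounts to something like the sketch below; the trait and struct names are simplified stand-ins, not the real testcontainers internals.)

```rust
// lazy_static requires the stored value to be Sync, and a Box<dyn Trait> is only
// Send/Sync if the trait object itself is declared with those bounds.
trait Docker {}

// Roughly the current shape: the trait object carries no auto-trait bounds,
// so the container cannot live behind a `static`.
struct ContainerBefore {
    docker: Box<dyn Docker>,
}

// With the bounds added (what making `Docker` Send + Sync amounts to),
// the container can be stored in a lazy_static.
struct ContainerAfter {
    docker: Box<dyn Docker + Send + Sync>,
}

fn assert_sync<T: Sync>() {}

fn main() {
    assert_sync::<ContainerAfter>();
    // assert_sync::<ContainerBefore>(); // would not compile
}
```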

@ufoscout
Contributor

ufoscout commented Jul 6, 2022

@thomaseizinger
Same problem here: we spin up a single database instance with testcontainers-rs and all integration tests are executed against it. We don't need to run those tests in isolation as we do for our unit tests. In addition, starting a new container for each test is too slow.
Indeed, we could use an external script to start the DB before the tests, but having everything done in Rust is very practical.
We are currently blocked on testcontainers-rs 0.12, which is the last release without the "Send" issue.
I attempted a fix to the issue in PR #384.
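(With such a fix in place, the singleton pattern described here looks roughly like the sketch below, reusing the lazy_static approach and the user-defined Mssql image from the earlier snippet; the exact testcontainers API may differ between versions.)

```rust
// A sketch of a single shared container for the whole test binary, assuming the
// `Docker` trait object is Send + Sync (the change proposed in #384) and a
// user-defined `Mssql` image like the one in the snippet above.
use lazy_static::lazy_static;
use testcontainers::{clients, Container};

lazy_static! {
    static ref DOCKER: clients::Cli = clients::Cli::default();
    // Started on first access, then shared by every test in the binary.
    static ref MSSQL: Container<'static, Mssql> = DOCKER.run(Mssql::default());
}

#[test]
fn first_test() {
    // Starts the container on first use.
    let _node = &*MSSQL;
    // ... connect to the database via the container's mapped port ...
}

#[test]
fn second_test() {
    // Reuses the already-running container.
    let _node = &*MSSQL;
}
```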

bors bot added a commit that referenced this issue Jul 19, 2022
384: Make Docker Send and Sync r=thomaseizinger a=ufoscout

This fixes #354 allowing singleton instances of a container.
Fixes  #381 also

Co-authored-by: Francesco_Cina <[email protected]>
Co-authored-by: Francesco Cinà <[email protected]>
@sillen102

sillen102 commented Oct 7, 2022

@ufoscout How can I use the feature?

@sillen102

Ok so what I had to do was specify in cargo to use the GitHub repo instead of a specific version (perhaps it's time for a 0.15 or 0.2 since it's a new feature?).

However, the container isn't killed after the tests. What would be even better is if I could specify that I would like to reuse the container if it's already running. I've used Testcontainers in Java for years, and in the Java implementation there is a flag you can set called "reuse" so that the container is reused between runs. That's a nice way to set up tests that depend on a running database or Elasticsearch (which takes forever to start up).

I believe the way it's implemented is by generating a hash at start of a container with all the settings which is then used on the next run where testcontainers can check if a container matching the hash is already running. In that case it will just reuse it instead of spinning up a new one.
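(Purely as an illustration of that idea, and not an existing testcontainers-rs API: the container configuration could be hashed into a deterministic key, attached to the container as a label, and looked up on the next run.)

```rust
// A rough sketch of the hash-based reuse idea: derive a key from the container's
// configuration. A real implementation would want a hash that is stable across
// Rust versions and would match the key against running containers via labels.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Hash)]
struct ContainerConfig {
    image: String,
    tag: String,
    env: Vec<(String, String)>,
    exposed_ports: Vec<u16>,
}

fn reuse_key(config: &ContainerConfig) -> String {
    let mut hasher = DefaultHasher::new();
    config.hash(&mut hasher);
    format!("{:x}", hasher.finish())
}

fn main() {
    let config = ContainerConfig {
        image: "mcr.microsoft.com/mssql/server".into(),
        tag: "2019-latest".into(),
        env: vec![("ACCEPT_EULA".into(), "Y".into())],
        exposed_ports: vec![1433],
    };
    // If a container carrying this key is already running, reuse it;
    // otherwise start a new one and tag it with the key.
    println!("reuse key = {}", reuse_key(&config));
}
```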

I'm currently using a database which is pretty quick to start so for me it would be enough to just kill the container after all tests are done but it would have been a nice feature to be able to reuse containers between test runs.

So how can I make sure the container is killed after the tests are done?

@thomaseizinger
Collaborator

> Ok so what I had to do was specify in cargo to use the GitHub repo instead of a specific version (perhaps it's time for a 0.15 or 0.2 since it's a new feature?).

The next release would be 0.15, yes. I am not sure what you mean by 0.2; 0.2 predates 0.14 as a semver number.

> However, the container isn't killed after the tests. What would be even better is if I could specify that I would like to reuse the container if it's already running. I've used Testcontainers in Java for years, and in the Java implementation there is a flag you can set called "reuse" so that the container is reused between runs. That's a nice way to set up tests that depend on a running database or Elasticsearch (which takes forever to start up).

> I believe the way it's implemented is by generating a hash at start of a container with all the settings which is then used on the next run where testcontainers can check if a container matching the hash is already running. In that case it will just reuse it instead of spinning up a new one.

Testcontainers for Java is a lot more mature than the Rust implementation. PRs are always welcome for implementing new features though!

> I'm currently using a database which is pretty quick to start so for me it would be enough to just kill the container after all tests are done but it would have been a nice feature to be able to reuse containers between test runs.

> So how can I make sure the container is killed after the tests are done?

The container will be killed once the variable is dropped. Is the test process completely stopped?

@Tockra

Tockra commented Jul 24, 2024

> Testcontainers is designed to achieve test isolation.

Sorry, I know this issue is already closed, but you mentioned multiple times that having a single container for multiple tests is not the purpose of testcontainers.
What is your source?
If this really were such an "odd" test strategy, why is this guide on the official Testcontainers website?
https://testcontainers.com/guides/testcontainers-container-lifecycle/

I'm pretty sure starting docker containers only once per test suite is a common pattern. Only in Rust is it like rocket science to do that. In Java it's easy peasy.

> Containers can be expensive, but then I'd recommend writing fewer tests that depend on containers and not breaking test isolation by reusing a container for multiple tests.

Yes, just stop testing your application. That seems like a really good idea...
But sadly, in most cases you want to test exactly what you want to test. Maybe sometimes there are other ways to achieve your goal, but if you want to test your application against real external systems and you don't want to provide an always-running test system or split your test setup into a script part and a Rust part, you just need a singleton docker container. Otherwise you pay the expensive startup time of your system many hundreds of times.

@DDtKey
Collaborator

DDtKey commented Jul 24, 2024

In fact, this is an old discussion and the state has already changed since then (and so have the maintainers).

Java & Go use the resource reaper (testcontainers/ryuk) and we are also planning to support it. In addition, even now there are workarounds without "rocket science" (but with certain crutches).

This peculiarity mostly comes from the way tests are run with cargo. It's much easier to have custom cleanup/teardown with a custom test harness (see for example this).
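(To illustrate the custom-harness idea: with harness = false set for an integration-test target in Cargo.toml, you own main and can wrap all tests in a single setup/teardown. Everything below is a placeholder sketch; a real harness would more likely build on a crate such as libtest-mimic.)

```rust
// tests/integration.rs, with this in Cargo.toml:
//
//   [[test]]
//   name = "integration"
//   harness = false
//
// Owning `main` lets us start one shared container before any test and tear it
// down after all of them. The setup/teardown helpers are placeholders.
fn start_shared_database() { /* e.g. start the container via testcontainers */ }
fn stop_shared_database() { /* tear it down once every test has run */ }

fn test_one() { assert_eq!(1 + 1, 2); }
fn test_two() { assert_eq!(2 * 2, 4); }

fn main() {
    start_shared_database();

    // A real harness would collect, filter, and report tests; here we just run them.
    let tests: &[(&str, fn())] = &[("test_one", test_one), ("test_two", test_two)];
    for (name, test) in tests {
        print!("running {name} ... ");
        test();
        println!("ok");
    }

    stop_shared_database();
}
```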

But for now, there are two relevant issues for this problem:
