Replace Docker-in-Docker setup with multi-stage build. #86
Conversation
Force-pushed 564fd6e to 6ec2024
Docker-in-Docker was being used as a way to produce artifacts in one Docker build and use them as part of the build of another image, using different base images. This "builder pattern" was pretty common in the mid-2010s, but has since been supplanted by multi-stage builds, which remove a lot of the complexity. See https://www.docker.com/blog/multi-stage-builds/ for the announcement of that feature, and https://blog.alexellis.io/mutli-stage-docker-builds/ for some of the discussion at that time.
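To illustrate what the multi-stage pattern looks like, here is a minimal sketch of a Dockerfile; the base images, stage name and paths are hypothetical and not taken from this repository:

```dockerfile
# Build stage: use a full toolchain image to produce the artifacts.
FROM node:18 AS builder
WORKDIR /src
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: start from a slim base image and copy in only the built artifacts.
FROM nginx:alpine
COPY --from=builder /src/dist /usr/share/nginx/html
```

A single `docker build .` then produces the final image; there is no need to run Docker inside Docker just to move artifacts from one image's build into another's.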
Force-pushed 8ca07e9 to e298891
Force-pushed e298891 to 0737d37
@@ -7,7 +7,7 @@ x-logging: &log-journald

 services:
   api:
-    image: ${ORG}/montagu-api:master
+    image: ${ORG}/montagu-api:abf39a2
This is currently pointed at vimc/montagu-api#479; I'll update it once that PR is merged.
This has ended up being a mix of fixes to get the CI working again and cleanups to make it easier to run the build locally.
Great - yes, I think we might already be using multi-stage builds in some projects.
There's still a line in the README (under "Buildkite") about making a shared build env image.
When I run `./scripts/dev.sh` it fails to start, in a way which looks like it's related to container naming again:
 ✔ Container montagu-db-1               Started  0.8s
 ✔ Container montagu-static-1           Started  0.7s
 ✔ Container montagu-api-1              Started  0.8s
 ✔ Container montagu-admin-1            Started  1.0s
 ✔ Container montagu-contrib-1          Started  1.2s
 ✔ Container montagu-orderly_web_web-1  Started  1.2s
+ docker exec montagu_api_1 mkdir -p /etc/montagu/api/
Error response from daemon: No such container: montagu_api_1
I'm running with docker compose v2.31.0
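The output above shows the mismatch: Compose v2 names the containers with hyphens (`montagu-api-1`) while the script still execs the old underscore-style name (`montagu_api_1`). One way to sidestep the naming scheme entirely is to address the compose service rather than the container; this is just a sketch assuming the service is called `api`, not necessarily how the script ends up being fixed:

```sh
# Let compose resolve the container for the `api` service instead of hard-coding its name
docker compose exec api mkdir -p /etc/montagu/api/

# Or, if a plain `docker exec` is needed, look the container name up first
container=$(docker compose ps -q api)
docker exec "$container" mkdir -p /etc/montagu/api/
```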
1. `./scripts/run-integration-tests.sh`: runs the app image created in the previous step along with all dependencies and
   then runs the integration tests image created in step 2.
-1. `./dev/run-build-minimal.sh`: builds an image `montagu-reverse-proxy-minimal` that just provides login functionality.
+1. `./dev/build-minimal-image.sh`: builds an image `montagu-reverse-proxy-minimal` that just provides login functionality.
Will it? `build-minimal-image.sh` references the `.` Dockerfile, which is provided in the `dev` folder along with the script, but if you're running that from the project root folder, won't it use the Dockerfile at the root? Might be better to give the dev (minimal) dockerfile a more explicit name like `buildMinimal.dockerfile`, like it used to have.
Seems like I just naively copy-pasted this. Renamed to `dev/minimal.dockerfile`, and added the appropriate `-f` flag.
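For reference, the build invocation from the project root would then look roughly like this (a sketch; the exact flags and tag used by the script may differ):

```sh
# Build the minimal login-only image, pointing -f at the dockerfile under dev/
# while keeping the project root as the build context.
docker build -f dev/minimal.dockerfile -t montagu-reverse-proxy-minimal .
```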
README.md (Outdated)
@@ -34,8 +34,7 @@ Run unit tests with `npm run test`. Jest will pick up test in files with the `.t

 To run integration tests:

-1. First install chromedriver `sudo ./scripts/install-chromedriver.sh`
-1. Make sure you also have a compatible version of chrome (110)
+1. Make sure have chrome and its matching chromedriver installed
Why remove the instruction to run the install-chromedriver script here? It's not being used in the integration test image so presumably it's still there as a local convenience?
I had actually intended to remove that file altogether and forgot. Having a script that just randomly pops files into `/usr/bin` didn't seem great. `snap install chromium` seems to install both the browser and the driver, so I'll just add that instead.
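On Ubuntu that would just be the following; note as an aside (and as an assumption worth checking) that the snap exposes its commands under `/snap/bin`, which is relevant to the PATH discussion further down:

```sh
# Install the chromium snap, which bundles a matching chromedriver
sudo snap install chromium

# Inspect what the snap provides and confirm the browser runs
snap info chromium
chromium --version
```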
Yeah, for this to work you need to set [...]. I've actually fixed this separately in #88, since it is needed to get the nginx config to work under constellation.
Fixed. Thanks
README.md (Outdated)
1. Make sure you have Chrome (or Chromium) installed. Depending on the platform you may also need to install chromedriver.
   - On Ubuntu, `sudo snap install chromium` will install both.
This worked but I found I still had to add the installed chromedriver to my PATH for the integration tests to run - it gets installed to `/snap/chromium`. Might be worth mentioning that.
One of the selenium tests still failed, but it had passed on Buildkite so I'm not going to worry about it!
Turns out that was working for me because I had the `chromium-chromedriver` apt package, which installs the snap but also adds extra binaries in the PATH. I've switched the instructions to use that.
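As a concrete sketch (the package name is taken from the comment above; exactly what it links onto the PATH is an assumption worth verifying locally):

```sh
# On Ubuntu: installs the chromium snap and adds chromedriver to the regular PATH
sudo apt install chromium-chromedriver

# Verify the driver now resolves without any manual PATH changes
which chromedriver
chromedriver --version
```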
Also it seems that recent Selenium versions get rid of all that complexity and do it all for us: https://www.selenium.dev/blog/2022/introducing-selenium-manager/