
[ci] remove Docker volumes during Azure cleanup #6760

Merged: 1 commit into master on Dec 15, 2024
Conversation

StrikerRUS
Collaborator

I believe we don't need to preserve anonymous volumes during cleanup.

Refer to https://depot.dev/blog/docker-clear-cache#removing-everything-with-docker-system-prune and https://docs.docker.com/reference/cli/docker/system/prune/.

> By default, volumes aren't removed to prevent important data from being deleted if there is currently no container using the volume. Use the --volumes flag when running the command to prune anonymous volumes as well.
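As a concrete illustration, a cleanup step along these lines would drop anonymous volumes together with stopped containers, unused networks, and unused images. This is only a sketch: the step name and surrounding Azure Pipelines structure here are illustrative, not the exact job definition in this repository; the `--volumes` flag itself is the one documented at the link above.

```yaml
# Illustrative Azure Pipelines step (names are hypothetical, not the exact job in this repo).
# --all removes unused images, --force skips the confirmation prompt,
# --volumes extends the prune to anonymous volumes.
- script: |
    docker system prune --all --force --volumes
  displayName: Clean up Docker caches and anonymous volumes
```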

Collaborator

@jameslamb left a comment


I agree with you: given the way we use this box, those volumes can all be safely removed. And that will free up a bit of disk space!

Thanks for thinking of this.

@jameslamb merged commit 31205fc into master on Dec 15, 2024
48 checks passed
@jameslamb deleted the ci/docker-cleanup branch on December 15, 2024 at 18:24
@StrikerRUS
Collaborator Author

> Thanks for thinking of this.

Thanks to this warning, I started looking into this 😄

[screenshot: Azure DevOps high memory usage warnings for the QEMU_multiarch bdist jobs]

@jameslamb
Collaborator

Thanks for that!

For the record, I think it could be ok for almost all memory on the runner to be occupied. As long as jobs are not failing for out-of-memory issues, fully utilizing the resources of these runners that only exist for this project and only run one workload at a time is not a problem.

Notice that they're all QEMU_multiarch bdist... it would be great if we could get a small aarch64 Linux runner (or set of them) to do builds/tests and not have to use emulation. That'd make those jobs much much faster, reduce maintenance burden here, and I bet that a same-sized machine wouldn't exhibit these high-memory-usage warnings.

@shiyu1994 could Microsoft help us with an aarch64 Linux runner? It doesn't need to have GPUs or even much memory or CPU. The same disk/memory/CPU as the other Azure DevOps Linux runners we have would be fine.

@shiyu1994
Collaborator

> Thanks for that!
>
> For the record, I think it could be ok for almost all memory on the runner to be occupied. As long as jobs are not failing for out-of-memory issues, fully utilizing the resources of these runners that only exist for this project and only run one workload at a time is not a problem.
>
> Notice that they're all QEMU_multiarch bdist... it would be great if we could get a small aarch64 Linux runner (or set of them) to do builds/tests and not have to use emulation. That'd make those jobs much much faster, reduce maintenance burden here, and I bet that a same-sized machine wouldn't exhibit these high-memory-usage warnings.
>
> @shiyu1994 could Microsoft help us with an aarch64 Linux runner? It doesn't need to have GPUs or even much memory or CPU. The same disk/memory/CPU as the other Azure DevOps Linux runners we have would be fine.

I can try to allocate a new agent with aarch64. It should not be expensive.

@jameslamb
Collaborator

Thank you so much!

@StrikerRUS
Collaborator Author

@jameslamb @shiyu1994

> I can try to allocate a new agent with aarch64. It should not be expensive.

Now we can simply use free ones. But they are for GitHub Actions, not for Azure, unfortunately.

https://github.blog/changelog/2025-01-16-linux-arm64-hosted-runners-now-available-for-free-in-public-repositories-public-preview/
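For reference, a minimal sketch of what a GitHub Actions job targeting those free arm64 hosted runners could look like. This assumes the public-preview runner label is `ubuntu-24.04-arm`; the workflow, job, and step names are purely illustrative and not part of this repository's existing CI.

```yaml
# Hypothetical workflow sketch targeting the free Linux arm64 hosted runners
# (assumes the public-preview label `ubuntu-24.04-arm`; names are illustrative).
name: aarch64-smoke-test
on: [workflow_dispatch]
jobs:
  test-aarch64:
    runs-on: ubuntu-24.04-arm
    steps:
      - uses: actions/checkout@v4
      - name: Report architecture
        run: uname -m   # expected to print "aarch64" on these runners
```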

@jameslamb
Collaborator

Hey great!!! Thanks for the link... I put up #6788 to discuss this topic more.
