feat: Docker compose support #3872
Conversation
@hpohekar Shall we roll this out as a full feature, replacing the usage of the Python Docker SDK for container launch mode?
@mkundu1 Yes, sure. We need to install Podman on all 3 runners and use an env var to select Podman as the compose source for selective tests.
We should test it only on a GitHub runner; it is better to avoid any changes to the self-hosted runners.
closes #3620
Introduction
Project Name: Fluent Docker and Podman compose
Overview
Docker Compose and Podman Compose support the same compose.yaml file format, which is used to launch Fluent in container mode. This document outlines the technical design for the Fluent compose application.
Problem Statement
The current Fluent container mode supports Docker only. The compose project aims to support both Docker and Podman using the same compose file.
Goals and Objectives
Support both Docker and Podman for Fluent container launch mode.
Provide more robust control over container handling.
Ensure scalability to support multiple simultaneous instances of Fluent containers.
Compose Overview
Compose is a tool for defining and running multi-container applications. We use a compose.yaml file to describe the services and volumes required for an application. With a single command, docker compose up or podman compose up, you can start all services defined in the YAML file.
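For illustration only, below is a minimal sketch of the kind of compose.yaml payload such a command consumes, written as a Python string because PyFluent passes the YAML as text rather than writing a file. The image tag, Fluent arguments, port, and mount paths are hypothetical placeholders, not actual PyFluent defaults.

```python
# Hypothetical compose.yaml payload, kept as a Python string because no
# physical file is written. Image tag, command, port, and paths are placeholders.
FLUENT_COMPOSE_YAML = """
services:
  fluent:
    image: ghcr.io/ansys/fluent:latest                         # placeholder image tag
    command: 3ddp -gu -sifile=/mnt/pyfluent/server_info.txt    # placeholder Fluent arguments
    ports:
      - "63084:63084"                                          # placeholder gRPC port
    volumes:
      - /home/user/pyfluent:/mnt/pyfluent                      # mount source : mount target
"""
```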
Design Considerations
Assumptions: Users have a local installation of Docker or Podman.
Dependencies: Docker, Podman.
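Since either engine can satisfy the dependency, the launcher needs to know which one is available. The helper below is a rough sketch of such a check; the function name and the fallback behaviour are illustrative assumptions, not the actual PyFluent implementation, which may also honour an environment variable to force one engine.

```python
import shutil


def detect_compose_engine() -> str:
    """Return the first available container engine ("docker" or "podman").

    Illustrative helper only; the real detection logic may differ.
    """
    for engine in ("docker", "podman"):
        if shutil.which(engine):
            return engine
    raise RuntimeError("Neither Docker nor Podman was found on PATH.")
```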
Architecture Design: PyFluent Container Launch Workflow
This architecture enables PyFluent to launch an Ansys Fluent session inside a container (Docker or Podman) based on environment configuration. The workflow dynamically generates a Compose file as text, launches the container using the available engine, and connects to the running Fluent session via a server info file shared between the host and container.
Architecture Diagram (Textual)
Workflow Steps
The container engine starts the Fluent container as per the Compose spec.
The container mounts a host directory (mount source) to a target path inside the container (mount target).
PyFluent, running on the host, reads the server info file from the mount source.
PyFluent uses this info to connect to the running Fluent session and returns the session object to the user.
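A rough sketch of steps 3 and 4 follows, assuming the server info file uses Fluent's usual host:port first line. The file name, polling interval, and the decision to connect through the published port on localhost are illustrative assumptions rather than the actual PyFluent code.

```python
import time
from pathlib import Path


def wait_for_server_info(mount_source: str, timeout: float = 60.0) -> tuple[str, int]:
    """Poll the shared mount source until the Fluent server info file appears.

    Illustrative only: the file name and layout (first line "host:port") are
    assumptions based on Fluent's usual server info format.
    """
    info_file = Path(mount_source) / "server_info.txt"  # hypothetical file name
    deadline = time.time() + timeout
    while time.time() < deadline:
        if info_file.exists() and info_file.stat().st_size > 0:
            host_port = info_file.read_text().splitlines()[0]
            _, port = host_port.split(":")
            # The file records the container-internal host name; the session is
            # reached through the port published on the host instead.
            return "127.0.0.1", int(port)
        time.sleep(0.5)
    raise TimeoutError(f"Server info file not found in {mount_source!r}")
```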
Key Design Features
Dynamic Compose Input: No physical Compose file is written; the YAML is passed as text directly to the container engine (see the sketch after this list).
Engine Agnostic: PyFluent detects and uses either Docker or Podman, depending on availability.
Mount Sharing: Host and container share a directory for server info file exchange, enabling seamless connection setup.
Session Management: PyFluent connects to Fluent using details from the server info file, abstracting away containerization from the user.
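The sketch below illustrates the first two features: it pipes an in-memory YAML payload, such as the one shown under Compose Overview, to whichever engine was detected. It assumes the engine's compose command accepts a file on stdin via "-f -" (true for Docker Compose v2; Podman compose behaviour may depend on its provider), and the function and project names are illustrative, not the actual PyFluent implementation.

```python
import subprocess


def compose_up(engine: str, compose_yaml: str, project_name: str) -> None:
    """Start services from an in-memory compose definition.

    Sketch only: assumes the compose command reads the file from stdin via '-f -'.
    """
    proc = subprocess.Popen(
        [engine, "compose", "-f", "-", "-p", project_name, "up", "--detach"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    # communicate() sends the YAML text and waits; a timeout guards against hangs.
    _, err = proc.communicate(input=compose_yaml, timeout=120)
    if proc.returncode != 0:
        raise RuntimeError(f"{engine} compose up failed: {err}")
```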
API Design: docker_compose.py module
launcher = ComposeLauncher(container_dict)
launcher.check_image_exists() - Check if a Docker image exists locally.
launcher.pull_image() - Pull a Docker image if it does not exist locally.
launcher.start() - Start the services.
launcher.stop() - Stop the services.
launcher.ports - Return the ports of the launched services.
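A usage sketch of this API is shown below. The method names come from the list above; the import path and the container_dict keys are hypothetical placeholders for illustration.

```python
from docker_compose import ComposeLauncher  # hypothetical import path for the module above

# Illustrative container_dict; the actual keys PyFluent expects may differ.
container_dict = {
    "image": "ghcr.io/ansys/fluent:latest",  # placeholder image
    "mount_source": "/home/user/pyfluent",   # host directory shared with the container
    "mount_target": "/mnt/pyfluent",         # path inside the container
}

launcher = ComposeLauncher(container_dict)
if not launcher.check_image_exists():
    launcher.pull_image()

launcher.start()
try:
    print(launcher.ports)  # ports of the launched services
finally:
    launcher.stop()
```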
End-to-end tests using Pytest
Verified on both Windows and Linux; both Docker and Podman work fine.
The CI/CD pipeline is already integrated for Docker in GitHub.
Added a timeout in subprocess.communicate() and subprocess.wait().
Used a mechanism to remove the following Docker resources (see the sketch after this list):
- all stopped containers
- all networks not used by at least one container
- all volumes not used by at least one container
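A minimal sketch of that cleanup mechanism is shown below, assuming it shells out to the engine's prune commands with the subprocess timeouts mentioned above; the exact commands and function name are assumptions, and the real implementation may instead use the compose "down" subcommand.

```python
import subprocess


def prune_compose_leftovers(engine: str = "docker", timeout: float = 60.0) -> None:
    """Remove stopped containers, unused networks, and unused volumes.

    Sketch of the cleanup described above; the real mechanism may differ.
    """
    for resource in ("container", "network", "volume"):
        proc = subprocess.Popen(
            [engine, resource, "prune", "--force"],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True,
        )
        # The timeout prevents the cleanup from hanging indefinitely.
        proc.communicate(timeout=timeout)
```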
Summary
This architecture allows PyFluent to flexibly launch Fluent in a containerized environment using either Docker or Podman, with all orchestration handled programmatically and connection to the running session managed via a shared server info file. The process is fully automated and requires minimal user intervention beyond setting the appropriate environment variable and, optionally, mount paths.