For this project, a system was built to analyse ATLAS experiment data from the Large Hadron Collider.
The system consists of three components:
- Controller: sends file paths to the worker nodes, then gathers the results back from the workers and combines them into a single plot.
- Worker: carries out the actual processing of the experimental and Monte Carlo data.
- Message broker: allows the controller and worker(s) to communicate with each other (a minimal message-flow sketch follows this list).
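As a rough illustration of how the components could interact, the sketch below assumes a RabbitMQ broker (reachable at the hostname "broker") and the pika Python client; the queue names, hostname, and message format are illustrative only and are not taken from the project itself.

```python
# Minimal sketch of the controller/worker message flow.
# Assumptions: RabbitMQ broker at host "broker", pika client library,
# queues "file_paths" and "results" (all illustrative, not project code).
import json

import pika


def send_paths(paths):
    """Controller side: publish one message per input file path."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="broker"))
    channel = connection.channel()
    channel.queue_declare(queue="file_paths")
    for path in paths:
        channel.basic_publish(exchange="", routing_key="file_paths", body=path.encode())
    connection.close()


def run_worker():
    """Worker side: consume file paths, process each file, publish the result."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="broker"))
    channel = connection.channel()
    channel.queue_declare(queue="file_paths")
    channel.queue_declare(queue="results")

    def on_message(ch, method, properties, body):
        path = body.decode()
        # Placeholder result; the real worker would run the analysis on this file.
        result = {"path": path, "histogram": []}
        ch.basic_publish(exchange="", routing_key="results", body=json.dumps(result).encode())
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="file_paths", on_message_callback=on_message)
    channel.start_consuming()
```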
The system is started with:

docker compose up --scale worker=N

where N is the desired number of workers.
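For reference, a compose file along the following lines would support the --scale worker=N flag; the service names, images, and build paths are assumptions for illustration rather than the project's actual docker-compose.yml.

```yaml
# Illustrative layout only (service names and images are assumptions).
services:
  broker:
    image: rabbitmq:3
  controller:
    build: ./controller
    depends_on:
      - broker
  worker:
    build: ./worker
    depends_on:
      - broker
```

Scaling the worker service works because it fixes no container name and publishes no host port, so any number of replicas can run side by side.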
Alternatively, a Docker Swarm stack can be deployed using stack.yml.
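For example, a command along these lines (the stack name here is just a placeholder):

docker stack deploy --compose-file stack.yml atlas-analysis

This requires the Docker engine to be running in swarm mode (initialised with docker swarm init).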
This project is based on the pre-existing ATLAS Open Data HZZ analysis notebook: https://github.com/atlas-outreach-data-tools/notebooks-collection-opendata/blob/master/13-TeV-examples/uproot_python/HZZAnalysis.ipynb
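For context, that notebook reconstructs the four-lepton invariant mass with uproot. The sketch below shows the kind of per-file computation a worker might perform; the tree and branch names follow the ATLAS Open Data "mini" tree layout, and the MeV-to-GeV conversion is an assumption about that format rather than code taken from this project.

```python
# Sketch of per-file processing in the style of the linked HZZ notebook.
# Assumptions: ATLAS Open Data "mini" tree with lep_pt/lep_eta/lep_phi/lep_E
# branches in MeV (illustrative, not this project's actual worker code).
import awkward as ak
import numpy as np
import uproot


def four_lepton_mass(path, tree_name="mini"):
    """Return the four-lepton invariant mass in GeV for events with >= 4 leptons."""
    with uproot.open(path) as f:
        events = f[tree_name].arrays(
            ["lep_pt", "lep_eta", "lep_phi", "lep_E"], library="ak"
        )

    # Keep events with at least four leptons and use the four leading ones.
    events = events[ak.num(events["lep_pt"]) >= 4]
    pt, eta, phi, E = (
        events[k][:, :4] for k in ("lep_pt", "lep_eta", "lep_phi", "lep_E")
    )

    # Sum the four lepton four-vectors and take the invariant mass.
    px = ak.sum(pt * np.cos(phi), axis=1)
    py = ak.sum(pt * np.sin(phi), axis=1)
    pz = ak.sum(pt * np.sinh(eta), axis=1)
    e = ak.sum(E, axis=1)
    mass_mev = np.sqrt(np.maximum(e**2 - px**2 - py**2 - pz**2, 0.0))
    return ak.to_numpy(mass_mev) / 1000.0  # MeV -> GeV
```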