68 changes: 68 additions & 0 deletions README.md
@@ -5,6 +5,7 @@
2. [Data production](#dataprod)
1. [Skimming](#skim)
2. [Data sources](#sources)
3. [Job Submission](#job-submission)
3. [Reconstruction Chain](#org0bc224d)
1. [Cluster Size Studies](#orgc33e2a6)
4. [Event Visualization](#org44a4071)
@@ -99,6 +100,73 @@ This framework relies on photon-, electron- and pion-gun samples produced via CR

The `PU0` files above were merged and are stored under `/data_CMS/cms/alves/L1HGCAL/`, accessible to LLR users, and under `/eos/user/b/bfontana/FPGAs/new_algos/`, accessible to all lxplus and LLR users. The latter is used since it is well interfaced with CERN services. The `PU200` files were merged and stored under `/eos/user/i/iehle/data/PU200/<particle>/`.

<a id="job-submission"></a>
## Job Submission
**Collaborator** commented:

You could say explicitly that the job submission can be used for multiple purposes, and that it can be a good alternative to the local skimming procedure presented in this README just a few lines above.

On that note, I was trying to run produce.py, but I got the following error while trying to read the input file for the skimming, specified in the config.yaml file:

Error in <TFile::TFile>: file /eos/user/b/bfontana/FPGAs/new_algos/photons_0PU_bc_stc_hadd.root does not exist

Everything works if I run it locally. Do you know if we should add some additional configuration options to avoid this kind of access problem?

**Collaborator (Author)** commented:

Yes, I'll mention the skimming procedure application too!

As for running it on EOS, have you confirmed that you have access to /eos/user/b/bfontana/FPGAs/new_algos/photons_0PU_bc_stc_hadd.root directly from your terminal? I do remember that using files on EOS was a major headache, though. I'll look into the problem further, but for now can you see if adding /opt/exp_soft/cms/t3/eos-login -username $USER -init to your script fixes the issue? This assumes that your LLR and CERN usernames match, which mine don't, so if yours differ, replace $USER directly.
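
(For reference, the suggested line would go near the top of the worker script, before any `/eos` path is opened; a minimal sketch using the exact helper path quoted above:)

```bash
# Sketch: initialize EOS access on the LLR T3 before reading any /eos paths.
# Helper path and flags are taken verbatim from the comment above; replace
# $USER if your LLR and CERN usernames differ.
/opt/exp_soft/cms/t3/eos-login -username "$USER" -init
```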

**Collaborator** commented:

Yes, I have access to this file from my terminal. I will try adding the line you suggested directly to the script and let you know.


Job submission to HTCondor is handled by `bye_splits/production/submit_scripts/job_submit.py`, which is configured through the `job` section of `config.yaml`. The configuration should include the usual Condor variables, i.e. `user`, `proxy`, `queue`, and `local`, as well as the path to the `script` you would like to run on Condor. The `arguments` sub-section should contain key/value pairs matching the arguments that `script` accepts. The variable to iterate over is set in `iterOver`; its value should correspond to a key in the `arguments` sub-section whose value is the list of values the script should iterate over. The `job` section also contains a sub-section for each particle type, each specifying a `submit_dir`, i.e. the directory in which to read and write submission-related files, and `args_per_batch`, which can be any number between 1 and `len(arguments[<iterOver>])`. An example of the `job` configuration settings:

```yaml
job:
  user: iehle
  proxy: ~/.t3/proxy.cert
  queue: short
  local: False
  script: /grid_mnt/vol_home/llr/cms/ehle/NewRepos/bye_splits/bye_splits/production/submit_scripts/condor_cluster_size.sh
  iterOver: radius
  arguments:
    radius: [0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.007, 0.008, 0.009,
             0.01 , 0.011, 0.012, 0.013, 0.014, 0.015, 0.016, 0.017, 0.018,
             0.019, 0.02 , 0.021, 0.022, 0.023, 0.024, 0.025, 0.026, 0.027,
             0.028, 0.029, 0.03 , 0.031, 0.032, 0.033, 0.034, 0.035, 0.036,
             0.037, 0.038, 0.039, 0.04 , 0.041, 0.042, 0.043, 0.044, 0.045,
             0.046, 0.047, 0.048, 0.049, 0.05]
    particles: photons
    pileup: PU0
  photons:
    submit_dir: /data_CMS/cms/ehle/L1HGCAL/PU0/photons/
    args_per_batch: 10
  electrons:
    submit_dir: /data_CMS/cms/ehle/L1HGCAL/PU0/electrons/
    args_per_batch: 10
  pions:
    submit_dir: /data_CMS/cms/ehle/L1HGCAL/PU0/pions/
    args_per_batch: 10
```
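
With the 50 radii above and `args_per_batch: 10`, the submitter groups the values into 5 batches of 10, one Condor job each, matching the five `queue` lines in the `.sub` example below. A minimal Python sketch of this presumed batching logic (the `chunk` helper is illustrative, not a function from the repository):

```python
# Illustrative only: how args_per_batch presumably splits the iterated values
# into per-job batches (the real logic lives in job_submit.py).
def chunk(values, args_per_batch):
    """Split values into consecutive groups of at most args_per_batch."""
    return [values[i:i + args_per_batch]
            for i in range(0, len(values), args_per_batch)]

radii = [round(0.001 * n, 3) for n in range(1, 51)]  # 0.001 ... 0.05
batches = chunk(radii, 10)
print(len(batches))  # 5 -> five jobs, one per `queue` line below
```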

After setting the configuration variables, the jobs are created and launched via

```bash
python bye_splits/production/submit_scripts/job_submit.py
```

and will produce both the executable `.sh` file, which will have a form like

```bash
#!/usr/bin/env bash
workdir=/grid_mnt/vol_home/llr/cms/ehle/NewRepos/bye_splits/bye_splits/production/submit_scripts
cd $workdir
export VO_CMS_SW_DIR=/cvmfs/cms.cern.ch
export SITECONFIG_PATH=$VO_CMS_SW_DIR/SITECONF/T2_FR_GRIF_LLR/GRIF-LLR/
source $VO_CMS_SW_DIR/cmsset_default.sh
bash /grid_mnt/vol_home/llr/cms/ehle/NewRepos/bye_splits/bye_splits/production/submit_scripts/condor_cluster_size.sh --radius $1 --particles $2 --pileup $3
```

and the `.sub` file submitted to HTCondor:

```
executable = /data_CMS/cms/ehle/L1HGCAL/PU0/photons/subs/condor_cluster_size_submit_v1.sh
Universe = vanilla
Arguments = $(radius) $(particles) $(pileup)
output = /data_CMS/cms/ehle/L1HGCAL/PU0/photons/logs/condor_cluster_size_C$(Cluster)P$(Process).out
error = /data_CMS/cms/ehle/L1HGCAL/PU0/photons/logs/condor_cluster_size_C$(Cluster)P$(Process).err
log = /data_CMS/cms/ehle/L1HGCAL/PU0/photons/logs/condor_cluster_size_C$(Cluster)P$(Process).log
getenv = true
T3Queue = short
WNTag = el7
+SingularityCmd = ""
include: /opt/exp_soft/cms/t3/t3queue |
queue radius, particles, pileup from (
    [0.001;0.002;0.003;0.004;0.005;0.006;0.007;0.008;0.009;0.01], photons, PU0
    [0.011;0.012;0.013;0.014;0.015;0.016;0.017;0.018;0.019;0.02], photons, PU0
    [0.021;0.022;0.023;0.024;0.025;0.026;0.027;0.028;0.029;0.03], photons, PU0
    [0.031;0.032;0.033;0.034;0.035;0.036;0.037;0.038;0.039;0.04], photons, PU0
    [0.041;0.042;0.043;0.044;0.045;0.046;0.047;0.048;0.049;0.05], photons, PU0
)
```
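
Should you ever need to resubmit by hand, the generated `.sub` file can also be passed directly to HTCondor. The path below is hypothetical, following the naming of the executable above:

```bash
condor_submit /data_CMS/cms/ehle/L1HGCAL/PU0/photons/subs/condor_cluster_size_submit_v1.sub
```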

This describes just one example; other uses can be easily implemented. You may, for example, run the [skimming procedure](#skim) on HTCondor with a small bash script that accepts `--particles` and `--nevents` as arguments, updating the configuration variables accordingly.
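
As an illustration, such a wrapper might look like the sketch below. It is not a script in the repository; it assumes the skimming entry point is `bye_splits/production/produce.py` (mentioned in the review discussion above) and that it accepts `--particles` and `--nevents` flags:

```bash
#!/usr/bin/env bash
# Hypothetical skimming wrapper for HTCondor; mirrors condor_cluster_size.sh.
# The produce.py entry point and its flags are assumptions, not repository facts.

particles=""
nevents=""

while [[ "$#" -gt 0 ]]; do
    case "$1" in
        --particles) particles="$2"; shift 2 ;;
        --nevents)   nevents="$2";  shift 2 ;;
        *) echo "Unrecognized argument $1"; exit 1 ;;
    esac
done

cd "${HOME}/bye_splits" || exit 1
python bye_splits/production/produce.py --particles "$particles" --nevents "$nevents"
```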

<a id="org0bc224d"></a>
# Reconstruction Chain
94 changes: 0 additions & 94 deletions bye_splits/production/produce.cc

This file was deleted.

31 changes: 31 additions & 0 deletions bye_splits/production/submit_scripts/condor_cluster_size.sh
@@ -0,0 +1,31 @@
```bash
#!/usr/bin/env bash

cd "${HOME}/bye_splits/bye_splits/scripts/cluster_size/condor/" || exit 1

radius=()
particles=""
pileup=""

# Parse the command-line flags passed by the generated submission wrapper.
while [[ "$#" -gt 0 ]]; do
    case "$1" in
        --radius)
            # "$2" arrives as "[r1;r2;...]": drop the brackets, split on ";".
            # (The "${2:1:-1}" negative-length slice needs bash >= 4.2.)
            IFS=";" read -ra radius <<< "${2:1:-1}"
            shift 2
            ;;
        --particles)
            particles="$2"
            shift 2
            ;;
        --pileup)
            pileup="$2"
            shift 2
            ;;
        *)
            echo "Unrecognized argument $1"
            exit 1;;
    esac
done

# Run the clustering once per radius in this batch.
for rad in "${radius[@]}"; do
    python run_cluster.py --radius "$rad" --particles "$particles" --pileup "$pileup"
done
```
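
For reference, a local invocation mirroring what a single Condor batch runs (argument values taken from the example configuration above, assuming the repository is checked out under `$HOME`):

```bash
bash condor_cluster_size.sh --radius "[0.001;0.002;0.003]" --particles photons --pileup PU0
```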