Test the timing of the different steps (select datasets that have the LHE step; see mcm_store).
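A minimal timing sketch for comparing the steps: wrap each step in a small context manager and collect wall-clock durations. The step names and the `time.sleep` bodies are placeholders, not the real pipeline steps.

```python
import time
from contextlib import contextmanager

@contextmanager
def step_timer(name, timings):
    """Record the wall-clock duration of one processing step."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

timings = {}
with step_timer("lhe", timings):
    time.sleep(0.01)  # stand-in for the LHE step
with step_timer("mcm_store", timings):
    time.sleep(0.01)  # stand-in for building the McM store

# Slowest step first, to see where the time goes.
print(sorted(timings, key=timings.get, reverse=True))
```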
Note, for example, that the provenance information is the same for Nano and Mini datasets, but dataset_records.py builds that same information twice: once for Nano and again for Mini.
Currently the scripts expect each Mini to have a corresponding Nano. Take care of the cases where a Mini does not have a Nano. It is probably best to build a Mini-Nano correspondence map at an early stage so that it does not need to be rebuilt every time. You could store it in inputs (similar to recid_info.py).
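One possible shape for such a map, as a sketch: match Mini and Nano dataset paths (assumed here to be of the form `/<primary>/<processing>/<TIER>`) on the primary-dataset part and cache the result under inputs. The matching rule and the file name `mini-nano-map.json` are illustrative simplifications, not the real logic.

```python
import json
import os

def build_mini_nano_map(dataset_names):
    """Build a Mini -> Nano correspondence map once, up front.

    Matches on the primary-dataset segment of the path; a Mini
    with no matching Nano maps to None.
    """
    nanos = {d.split("/")[1]: d for d in dataset_names if d.endswith("NANOAODSIM")}
    return {
        d: nanos.get(d.split("/")[1])  # None when the Mini has no Nano
        for d in dataset_names
        if d.endswith("MINIAODSIM")
    }

def cache_map(mapping, path="inputs/mini-nano-map.json"):
    """Store the map under inputs/ so later steps can simply load it."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(mapping, f, indent=2)

datasets = [
    "/Foo/RunIISummer16MiniAODv2/MINIAODSIM",
    "/Foo/RunIISummer16NanoAODv7/NANOAODSIM",
    "/Bar/RunIISummer16MiniAODv2/MINIAODSIM",  # a Mini with no Nano
]
mapping = build_mini_nano_map(datasets)
```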
The McM cache is built only for Nano; it could also be built for those Mini datasets that do not have a Nano.
This could be done once only, at the very beginning of the script chain.
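Given a Mini-to-Nano map as described above, the set of datasets whose McM information needs fetching could be computed once up front: every Nano, plus any Mini without a corresponding Nano. This is a hypothetical sketch of that selection, not the existing caching code.

```python
def mcm_cache_targets(mini_nano_map):
    """Datasets to fetch McM info for: all Nanos, plus Minis with no Nano."""
    nanos = {n for n in mini_nano_map.values() if n is not None}
    orphan_minis = {m for m, n in mini_nano_map.items() if n is None}
    return sorted(nanos | orphan_minis)

targets = mcm_cache_targets({
    "/Foo/Mini/MINIAODSIM": "/Foo/Nano/NANOAODSIM",
    "/Bar/Mini/MINIAODSIM": None,  # no corresponding Nano
})
```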
The methodology (get_all_generator_text) is the time-consuming part of the dataset record building, and it is exactly the same for Mini and Nano. Check whether the related Nano dataset record has already been built and, if so, take the full methodology from there.
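A sketch of that reuse: key a cache on the Nano dataset (falling back to the dataset itself when there is no Nano), so the expensive text is built at most once per Mini/Nano pair. Here `build_text` stands in for the real `get_all_generator_text` call, and the map lookup is hypothetical.

```python
_generator_text_cache = {}

def get_methodology(dataset, mini_nano_map, build_text):
    """Return the methodology text, reusing the corresponding
    Nano's text when it has already been built."""
    # Resolve a Mini to its Nano; fall back to the dataset itself.
    key = mini_nano_map.get(dataset, dataset) or dataset
    if key not in _generator_text_cache:
        _generator_text_cache[key] = build_text(key)
    return _generator_text_cache[key]

calls = []
def fake_build(ds):
    """Stand-in for the expensive get_all_generator_text call."""
    calls.append(ds)
    return f"methodology for {ds}"

m = {"/Foo/Mini/MINIAODSIM": "/Foo/Nano/NANOAODSIM"}
nano_text = get_methodology("/Foo/Nano/NANOAODSIM", m, fake_build)
mini_text = get_methodology("/Foo/Mini/MINIAODSIM", m, fake_build)
```

With this, building the Nano record first means the Mini record gets the methodology for free.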
Update README.md to describe the configurable threading.
Create a new directory cms-2016-simulated-datasets with inputs.
Get the input files, Mini and Nano separately, with an empty placeholder file doi-sim.txt.
Code lhe_generators.py so that it is integrated with interface.py in a similar way as the other scripts (--ignore-eos-store when needed).
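One way such an integration could look, sketched with argparse: the LHE-generators step becomes a sub-command of the main interface, sharing the `--ignore-eos-store` flag. The actual interface.py may use a different CLI framework and option layout; everything here is illustrative.

```python
import argparse

def parse_cli(argv=None):
    """Hypothetical CLI wiring exposing lhe_generators as a step."""
    parser = argparse.ArgumentParser(prog="interface.py")
    parser.add_argument(
        "--ignore-eos-store",
        action="store_true",
        help="skip EOS store lookups when the store is unavailable",
    )
    sub = parser.add_subparsers(dest="step", required=True)
    sub.add_parser("lhe-generators", help="extract LHE generator parameters")
    return parser.parse_args(argv)

args = parse_cli(["--ignore-eos-store", "lhe-generators"])
```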