Home
Most of the generation workflow is streamlined, so you only have to set up each new directory you want using the `NormalSetup.csh` script, then run over the grid using the `master.sh` script. To learn more about the different files, read the descriptions below.
- `check_run_status.sh`
- `deleteLocalLogFileDirectories.csh`
- `get_files.sh`
- `master.sh`
- `NormalSetup.csh`
- `addingRoot_recursive.sh`
- `addingRoot.sh`
- `deleteEOSAnalysisRootFiles.csh`
- `tntAnalyze.sh`
## `check_run_status.sh`

This file serves two functions: checking whether jobs passed (i.e. there were no errors after the job finished) and producing the cut tables for each set of samples.

The first function simply goes through the directory corresponding to a sample and checks whether any of the `*.stderr` files have a non-zero size. If they do, the script reports that the sample has an error.

The second part looks through all of the `*.stdout` files, adds up the values in each cut table, and prints one clean combined cut table. This doesn't include weights, but it gives you a quick idea of the cut efficiency on different samples without having to open ROOT.

This file assumes condor returns an output file and an error file, so be sure that in `condor_default.cmd` both are returned and have the file tags mentioned above.
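The error check above can be sketched as a small shell loop. This is a hedged illustration, not the script itself; the one-directory-per-sample layout and the `check_samples` name are assumptions:

```shell
# Minimal sketch of the first function: flag any sample whose condor
# *.stderr files are non-empty. Assumed layout: one subdirectory per
# sample, each holding that sample's returned job files.
check_samples() {
    for dir in */; do
        for err in "$dir"*.stderr; do
            [ -e "$err" ] || continue          # no stderr files in this directory
            if [ -s "$err" ]; then             # -s: file exists and has non-zero size
                echo "ERROR: sample ${dir%/} (${err##*/} is non-empty)"
            fi
        done
    done
}
```

The `-s` test is what makes "non-zero size" the pass/fail criterion: an empty stderr file means the job produced no errors.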
## `deleteLocalLogFileDirectories.csh`

This file is prompted to run automatically when `master.sh` is started. When you are asked whether to delete the old files, this script and `deleteEOSAnalysisRootFiles.csh` are called. This file simply deletes the directories that hold all of the files returned by condor, such as the stdout, stderr, and log files.

NOTE: this script works by deleting all directories that are not necessary for the program, i.e. everything except `/defaults` and `/list_Samples`. If you create another directory and run this file, that directory will be deleted unless an extra exception is added for it.
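The whitelist behavior described in the note can be sketched as follows. This is an assumed reconstruction (the real script is csh); the `cleanup_dirs` name is hypothetical, and the kept directory names are taken from the note above:

```shell
# Hypothetical sketch of the whitelist-style cleanup: remove every
# subdirectory except the ones the framework needs. Anything not in
# the case whitelist is assumed to be a condor log directory.
cleanup_dirs() {
    for dir in */; do
        case "${dir%/}" in
            defaults|list_Samples) ;;   # keep the framework directories
            *) rm -rf "$dir" ;;         # delete everything else
        esac
    done
}
```

This is why the extra-exception warning matters: any new directory must be added to the keep list, or it falls through to the delete branch.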
## `get_files.sh`

This script is automatically run when `NormalSetup.csh` is run. It scrapes the EOS area where the BSM3G group keeps their ntuples, i.e. `/ra2tau/jan2017tuple`, and makes the list of samples in the user's local `list_Samples` directory. This means the user can quickly bring their sample lists up to date with this program. Since it is run every time `NormalSetup.csh` is run, it ensures the user always has the latest files.

As new ntuples are made for different runs, this file will change to accommodate the new area to scrape, but this will be updated for newer versions.
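The list-making step can be sketched like this. It is only an illustration: a plain `ls` over a local path stands in for the EOS listing commands the real script uses, and the `scrape_samples` name is hypothetical:

```shell
# Hypothetical sketch of the scraping step: for each dataset directory
# in the ntuple area, write one file list into list_Samples/.
scrape_samples() {
    ntuple_area=$1
    mkdir -p list_Samples
    for dataset in "$ntuple_area"/*/; do
        name=$(basename "$dataset")            # dataset name from its directory
        ls "$dataset" > "list_Samples/${name}.txt"
    done
}
```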
## `master.sh`

This is the primary file, since it sends the jobs to be run. There are two main variables that can be changed in the file:
- `limit` - the number of jobs the program will allow to be sent to the grid at once. This stops thousands of jobs from being sent all at once, especially since the LPC team doesn't want the grid to be flooded.
- `runfile` - the name of the file to be run. This allows a different run file to be used so the same framework is available. Please note, though, that this is not advised in the code's current state since it is optimized for running just the Analyzer.

The master file sends jobs to the grid up to the set limit, continually adding jobs as space opens up from finished jobs.
After the jobs are all submitted, the program waits for them to finish, which is indicated by a little `.` printed every minute. After all of the jobs have finished, and if all of them finished successfully (i.e. all stderr files are empty), then `addingRoot_recursive.sh` is called and the files are added together.
This means all of the work of running over the files and prepping them for the Plotter is taken care of. The script is divided into sections, so if some functionality is not wanted, it is easy to comment it out and run to one's own preference.
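The throttled submission loop described above can be sketched as follows. This is an assumed outline, not the script's actual code: `count_running` and `submit_one` are hypothetical stand-ins for the real condor calls (counting jobs from `condor_q` and calling `condor_submit`, respectively):

```shell
# Hypothetical sketch of the throttled submission loop: never let more
# than $limit jobs sit on the grid, and refill as space opens up.
limit=100
submit_all() {
    for job in "$@"; do
        while [ "$(count_running)" -ge "$limit" ]; do
            printf '.'      # progress marker, like the one master.sh prints
            sleep 60        # check again in a minute
        done
        submit_one "$job"
    done
}
```

The key design point is the inner `while`: submission simply blocks until the running-job count drops below the limit, so the grid is refilled continuously rather than in batches.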
## `NormalSetup.csh`

This is the first file the user should run to set up a run for sending jobs to the grid.

The file first gets all of the default files and makes them usable for this run. As on this wiki page, the files are divided into those that work before and after `NormalSetup.csh`, meaning files that work on their own and files that need information supplied by running this file. Any time a new area or new information is needed, the files marked "After `NormalSetup.csh`" cannot be run until the `NormalSetup.csh` file is run again.

The second thing done is to set up the EOS area. EOS can be very picky, so directories are made in the new EOS area for all of the possible samples to allow for a smooth run. This is done by the helper script `/default/makeEOSdirectories_default.csh`.

The last thing the program does is link the `Analyzer` to the master file. If one has different, modified `Analyzer` files, this allows setting which `Analyzer` to use. The only stipulation is that the Analyzer directory must be above the Generation scripts in the tree structure. The program also allows you to pick which set of config files for the Analyzer to use.

If any of these things change when trying to send files to the grid, make sure you re-run the `NormalSetup.csh` file.
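The linking step can be sketched with a single symlink. The relative path here is an assumption (it only reflects the stipulation that the Analyzer directory sits above the Generation scripts); the real script records whichever Analyzer you point it at:

```shell
# Hypothetical sketch of the Analyzer linking step.
analyzer_dir=../Analyzer           # assumed location, above the Generation scripts
ln -sfn "$analyzer_dir" Analyzer   # -f replaces a stale link, -n avoids descending into one
```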
## `addingRoot_recursive.sh`

This is the program that adds the EOS files together and creates the collective Sample output. It uses condor to speed up the adding of ROOT files. It works by adding the files in a tree-like way: adding a chunk together, then adding the chunks together to make new chunks, and so on.

The number of files added in each chunk is determined by the variable `magicNumber`. This number can be optimized for the fastest adding time, but the process only takes a minute, so tuning it may have very little gain overall.
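One round of the chunked merge can be sketched as below. This is an illustration under stated assumptions: `hadd` is ROOT's file-merging tool, the `merge_round` helper is hypothetical, and in the real script each chunk merge is sent to condor rather than run locally. Repeating rounds on the round's outputs produces the tree:

```shell
# Sketch of one round of the tree-style merge: combine inputs in
# chunks of $magicNumber files, producing one output per chunk.
magicNumber=4
merge_round() {
    out_prefix=$1; shift
    i=0
    while [ "$#" -gt 0 ]; do
        chunk=""
        n=0
        while [ "$n" -lt "$magicNumber" ] && [ "$#" -gt 0 ]; do
            chunk="$chunk $1"; shift       # take up to magicNumber inputs
            n=$((n + 1))
        done
        hadd "${out_prefix}_${i}.root" $chunk   # merge one chunk into one file
        i=$((i + 1))
    done
}
```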
## `addingRoot.sh`

This is the older version of the `addingRoot_recursive.sh` file. It works by simply hadd-ing all of the files in a row. Because it doesn't utilize the grid, it is MUCH slower than the recursive algorithm. It is kept in case of problems with the grid and/or the `addingRoot_recursive.sh` file.
## `deleteEOSAnalysisRootFiles.csh`

This file is similar to `deleteLocalLogFileDirectories.csh`; in fact, it is likewise prompted to run automatically at the start of `master.sh`.

This file deletes all of the ROOT files stored in the EOS area. The programs run by running the Analyzer over the different Sample nTuples and saving the output in the EOS area specified in `NormalSetup.csh`. These files are normally hadd-ed together immediately afterward, so the old Analyzer output files aren't needed anymore. But EOS will not replace files, so the old files need to be deleted, and `deleteEOSAnalysisRootFiles.csh` does this for the user.

Since the whole Analysis process only takes a few minutes, it is less advisable to save old, un-hadd-ed files, but if one wants to, these files can be put into a separate directory, or `NormalSetup.csh` can be run again to give a fresh area for the Analysis to happen.
## `tntAnalyze.sh`

This is the file used by condor to run the `Analyzer`. It simply copies the Analyzer files to the condor area, runs the `Analyzer`, and copies the output to the specified EOS area. All of the output and errors from this file are sent back as `*.stdout` and `*.stderr` files.