export SCRAM_ARCH=slc7_amd64_gcc700
cmsrel CMSSW_10_6_30
cd CMSSW_10_6_30/src/
cmsenv
For the following step you should have an SSH key associated with your GitHub account. For more information, see connecting-to-github-with-ssh-key.
git clone -b master [email protected]:Christinaw97/SUSYBSMAnalysis-HSCP.git SUSYBSMAnalysis
To compile the code, run
cd SUSYBSMAnalysis
scram b -j
Step | Description |
---|---|
Step 0 | Produce EDM-tuples from AOD |
Step 1 | Produce bare ROOT-tuples (histograms and trees) from Step 0 |
Step 2 | Estimate the background using histograms from Step 1 |
Step 3 | Make plots |
Step 4 | Compute limits |
Get the main script first
cp HSCP/test/HSCParticleProducer_cfg.py .
Have a look at HSCParticleProducer_cfg.py to see all available options.
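The options above (inputFiles, inputFiles_load, maxEvents, LUMITOPROCESS) follow FWCore's VarParsing conventions. The sketch below only illustrates that mechanism, assuming VarParsing is used; check HSCParticleProducer_cfg.py itself for the real option set.

```python
# Minimal sketch of VarParsing-based option handling in a CMSSW config
# (illustration only; HSCParticleProducer_cfg.py defines its own options).
import FWCore.ParameterSet.Config as cms
from FWCore.ParameterSet.VarParsing import VarParsing

options = VarParsing('analysis')   # provides inputFiles, inputFiles_load, maxEvents, ...
options.register('LUMITOPROCESS', '',                 # custom option: certified-lumi JSON
                 VarParsing.multiplicity.singleton,
                 VarParsing.varType.string,
                 'JSON file with the certified lumi sections')
options.parseArguments()

process = cms.Process('HSCPAnalysis')                 # placeholder process name
process.source = cms.Source('PoolSource',
    fileNames = cms.untracked.vstring(options.inputFiles))
process.maxEvents = cms.untracked.PSet(input = cms.untracked.int32(options.maxEvents))
```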
Get a proxy
voms-proxy-init --voms cms --valid 192:00
Run locally
Before running locally:
# does the file exist? Note that a site of type "TAPE" has no user access
dasgoclient -query="site file=/store/data/Run2017B/MET/AOD/09Aug2019_UL2017_rsb-v1/00000/AA1FC1E6-1E88-204D-B867-4637AEAC4BEA.root"
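If you want to script this check, here is a small sketch; it assumes dasgoclient is available in your environment (after cmsenv) and that tape endpoints carry a _Tape suffix in their site name.

```python
# Sketch: list the sites hosting a file via DAS and flag tape-only copies
# (assumption: tape endpoints are named with a "_Tape" suffix).
import subprocess

lfn = "/store/data/Run2017B/MET/AOD/09Aug2019_UL2017_rsb-v1/00000/AA1FC1E6-1E88-204D-B867-4637AEAC4BEA.root"
out = subprocess.check_output(["dasgoclient", "-query", "site file=%s" % lfn]).decode()
sites = out.split()
disk_sites = [s for s in sites if not s.endswith("_Tape")]
print("all sites :", sites)
print("disk sites:", disk_sites or "none (tape only, no user access)")
```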
How to run
cmsRun HSCParticleProducer_cfg.py LUMITOPROCESS=HSCP/test/Cert_294927-306462_13TeV_UL2017_Collisions17_GoldenJSON.txt inputFiles=root://cms-xrd-global.cern.ch//store/data/Run2017B/MET/AOD/09Aug2019_UL2017_rsb-v1/00000/AA1FC1E6-1E88-204D-B867-4637AEAC4BEA.root
You can also use inputFiles_load=input.txt, where input.txt contains a list of files.
Don't forget to copy needed files:
cp HSCP/data/CorrFact*Pix*.txt .
cp HSCP/data/template*.root .
cp HSCP/data/MuonTimeOffset.txt .
Run on the Grid using CRAB
Set up the CRAB environment
source /cvmfs/cms.cern.ch/common/crab-setup.sh
Get the CRAB configuration file to run on data:
cp HSCP/test/submitToCrab.py .
python submitToCrab.py -h #for help
Important Replace config.Site.storageSite = 'T2_FR_IPHC' with a site where you have permission to write. You can check your write permission with the following command:
crab checkwrite --site=<site-name>
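For orientation, submitToCrab.py fills a standard CRAB configuration from its command-line options. The sketch below shows where storageSite fits in; all values are placeholders, not what the script actually sets.

```python
# Sketch of a minimal CRAB configuration (placeholder values only;
# submitToCrab.py builds the real one from its command-line options).
from CRABClient.UserUtilities import config
config = config()

config.General.requestName = 'HSCP_Run2017B_MET'     # placeholder
config.General.workArea    = 'crab_projects'
config.JobType.pluginName  = 'Analysis'
config.JobType.psetName    = 'HSCParticleProducer_cfg.py'
config.Data.inputDataset   = '/MET/Run2017B-09Aug2019_UL2017_rsb-v1/AOD'   # placeholder
config.Data.splitting      = 'LumiBased'
config.Data.unitsPerJob    = 50                      # placeholder
config.Data.lumiMask       = 'HSCP/test/Cert_294927-306462_13TeV_UL2017_Collisions17_GoldenJSON.txt'
config.Site.storageSite    = 'T2_FR_IPHC'            # replace with a site you can write to
```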
You can now run on a dataset (or on input files):
python submitToCrab.py --dataset <Dataset> --name <request-name> --sample <isData> --lumiToProcess <JSON file>
The following directory will be created: crab_projects/crab_<request-name>.
- To get status:
crab status -d crab_projects/crab_<request-name>
- To resubmit (killed and failed jobs):
crab resubmit -d crab_projects/crab_<request-name>
- To retrieve the output:
crab getoutput -d crab_projects/crab_<request-name> [--jobids id1,id2]
- To get report (processed lumi json):
crab report -d crab_projects/crab_<request-name>
- To get the corresponding integrated luminosity, see the section Compute Lumi.
Important Running crab with the --dryrun option provides
- the job splitting
- estimates of the runtime and memory consumption.
To show only the splitting results, add the --skip-estimates option. You can then submit the task with crab proceed -d crab_projects/crab_<request-name>.
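These crab commands can also be driven from Python through the CRAB client API, which can be handy when monitoring many tasks. A sketch, assuming the CRAB environment from crab-setup.sh is loaded:

```python
# Sketch: query a CRAB task's status programmatically instead of via the CLI.
# Requires the CRAB environment (source /cvmfs/cms.cern.ch/common/crab-setup.sh)
# and a valid proxy.
from CRABAPI.RawCommand import crabCommand

task_dir = 'crab_projects/crab_<request-name>'   # placeholder: your task directory
res = crabCommand('status', dir=task_dir)
print(res.get('status'), res.get('jobsPerStatus'))
```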
Analysis Type | Selection |
---|---|
Type 0 | Tk only |
Type 1 | Tk+Muon |
Type 2 | Tk+TOF |
Type 3 | TOF only |
Type 4 | Q>1 |
Type 5 | Q<1 |
Main script: cp Analyzer/test/HSCParticleAnalyzer_cfg.py .
cmsRun HSCParticleAnalyzer_cfg.py inputFiles=file:HSCP.root maxEvents=100
Or, if there are many files:
ls HSCP*.root | sed 's/^/file:/' > list.txt
cmsRun HSCParticleAnalyzer_cfg.py inputFiles_load=list.txt
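If you prefer Python to the shell one-liner above, the same list.txt can be produced like this:

```python
# Build list.txt with one "file:"-prefixed entry per local HSCP ROOT file,
# equivalent to: ls HSCP*.root | sed 's/^/file:/' > list.txt
import glob

with open('list.txt', 'w') as out:
    for path in sorted(glob.glob('HSCP*.root')):
        out.write('file:' + path + '\n')
```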
Main scripts:
cp Analyzer/test/HSCParticleProducerAnalyzer_cfg.py .
cp Analyzer/test/crabConfigProdAnalyzer_Data.py .
# just as before...
cmsRun HSCParticleProducerAnalyzer_cfg.py LUMITOPROCESS=HSCP/test/Cert_294927-306462_13TeV_UL2017_Collisions17_GoldenJSON.txt inputFiles=root://cms-xrd-global.cern.ch//store/data/Run2017B/MET/AOD/09Aug2019_UL2017_rsb-v1/00000/AA1FC1E6-1E88-204D-B867-4637AEAC4BEA.root
To use crab:
python crabConfigProdAnalyzer_Data.py
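The configuration is run with python (rather than passed to crab submit -c), which suggests it submits itself through the CRAB API. A sketch of that pattern, not the actual contents of crabConfigProdAnalyzer_Data.py:

```python
# Sketch of a self-submitting CRAB script (illustration of the "python <crab config>"
# pattern; the real crabConfigProdAnalyzer_Data.py may differ).
from CRABClient.UserUtilities import config
from CRABAPI.RawCommand import crabCommand

config = config()
config.JobType.pluginName = 'Analysis'
config.JobType.psetName   = 'HSCParticleProducerAnalyzer_cfg.py'
config.Site.storageSite   = 'T2_FR_IPHC'   # replace with your site
# ... fill the remaining General/Data fields as in the earlier CRAB sketch ...

if __name__ == '__main__':
    crabCommand('submit', config=config)
```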
Copy the script:
cp Analyzer/test/compareRootFiles.py .
For more details, run python compareRootFiles.py -h.
This script takes two ROOT files (set inside the script) and compares their histograms with a Kolmogorov test. Any differences are saved in Analyzer/test/differences.txt.
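The core of such a comparison is ROOT's KolmogorovTest. A stripped-down sketch with hypothetical file names (compareRootFiles.py configures its inputs internally and may loop differently):

```python
# Sketch: compare the histograms of two ROOT files with a Kolmogorov test
# (hypothetical file names; compareRootFiles.py sets its inputs internally).
import ROOT

f_ref  = ROOT.TFile.Open('reference.root')
f_test = ROOT.TFile.Open('test.root')

for key in f_ref.GetListOfKeys():
    h_ref = key.ReadObj()
    if not h_ref.InheritsFrom('TH1'):
        continue
    h_test = f_test.Get(key.GetName())
    if not h_test:
        print('missing in test file:', key.GetName())
        continue
    prob = h_ref.KolmogorovTest(h_test)   # 1.0 means fully compatible
    if prob < 1.0:
        print('difference in %s: KS probability = %.3g' % (key.GetName(), prob))
```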
List your ROOT files in a single text file, e.g. input.txt
BackgroundPrediction -h # for help
BackgroundPrediction -f input.txt
From a new terminal
- Set up the CMS environment
- Install brilws (in your home directory) as follows:
export PATH=$HOME/.local/bin:/cvmfs/cms-bril.cern.ch/brilconda/bin:$PATH
pip install --user brilws
- Some tests:
brilcalc --version  # to check the installation
brilcalc lumi --help  # for help
- Compute lumi
brilcalc lumi -c web -i Cert_294927-306462_13TeV_UL2017_Collisions17_GoldenJSON.txt --output-style csv -u /pb > result.txt
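To extract the total from result.txt, here is a small sketch that assumes the default brilcalc csv layout (comment and summary lines start with '#', and the recorded luminosity is the last comma-separated field of each data line):

```python
# Sketch: sum the recorded luminosity from brilcalc's csv output (result.txt).
# Assumption: data lines are comma-separated with the recorded lumi (/pb here) last,
# and all comment/summary lines start with '#'.
total_recorded = 0.0
with open('result.txt') as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        total_recorded += float(line.split(',')[-1])
print('total recorded luminosity: %.2f /pb' % total_recorded)
```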