[HGCal] first version of the HGCalValidator #26099
Conversation
The code-checks are being triggered in jenkins.
+code-checks Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-26099/8670
A new Pull Request was created by @apsallid for master. It involves the following packages: DataFormats/EgammaReco
@perrotta, @andrius-k, @kmaeshima, @schneiml, @civanch, @kpedro88, @cmsbuild, @jfernan2, @fioriNTU, @slava77, @mdhildreth can you please review it and eventually sign? Thanks.
cms-bot commands are listed here
please test
The tests are being triggered in jenkins.
@@ -668,3 +675,20 @@ void HGCalImagingAlgo::computeThreshold() {
  }
}

void HGCalImagingAlgo::setDensity(std::vector<KDNode> &nd){
input can be const
@@ -195,6 +213,9 @@ hgcal::RecHitTools rhtools_;
  // The algo id
  reco::CaloCluster::AlgoId algoId_;

  // For keeping the density per hit
  std::map< DetId, float > density_v;
use typedef here
@@ -168,7 +170,8 @@ void HGCalLayerClusterProducer::produce(edm::Event& evt,

  std::unique_ptr<std::vector<reco::BasicCluster> > clusters( new std::vector<reco::BasicCluster> ),
    clusters_sharing( new std::vector<reco::BasicCluster> );

  std::unique_ptr<Density> density(new Density);
make_unique (don't use new)
#The cumulative material budget in front of each layer. To be more specific, it
#is the material budget just in front of the active material (not including it).
#This file is created using the official material budget code.
cummatbudinxo = cms.FileInPath('Validation/HGCalValidation/data/D28.cumulative.xo'),
it's going to be difficult to keep this manually in sync with all the different 2023 geometry versions...
Yes, you are right, but could we please keep this? Upon request, and for specific geometry versions, I could produce the relevant file. In previous beam tests the longitudinal depth barycentre (which uses this file) was used, and it would be nice to connect the material budget package with the validation package.
Will this code be run by default for any phase 2 workflow? What will happen if this is run on a geometry with a different material budget?
The longitudinal depth barycentre plot will be wrong. This one here.
This is the only current usage of the file: producing that plot. Specifically, a std::map<double, double> cummatbudg, mapping layer to the accumulated material budget up to that layer, is filled from that file. Then, at the core of the code, the layer number is accessed as usual:
layer = recHitTools_->getLayerWithOffset(rh_detid);
So, since this is a map, even if the key doesn't exist it shouldn't crash, right?
I'm more worried about getting wrong/nonsensical results out of the validation code. If this is acceptable to the people doing HGCal validation, we can continue in this way, but in practice it will probably mean the validation code is frequently out of sync with the latest version of the geometry.
Ok, if you think it would bring confusion, I can remove that plot, the file, the data directory that was created for that reason, and any relevant code. Otherwise, I can add to the title of the plot I sent you before something like "always check that you read the correct material budget file". What do you think?
For me, a title message like that would be fine. I would specify in the title message which material budget file was used, so an observer does not have to dig through to find the python config.
minCellsEneDensperthick = cms.double(0.),
maxCellsEneDensperthick = cms.double(100.),
nintCellsEneDensperthick = cms.int32(200)
remove unneeded blank lines
*/

#include <iostream>
#include <fstream>
#include <vector>
#include <unordered_map>
// std::map<DetId, const HGCRecHit *> * hitMap_;

//private data members
double minEta, maxEta; int nintEta; bool useFabsEta;
member variables should end with "_"
DetId rh_detid = hits_and_fractions[i].first;
float rhFraction = hits_and_fractions[i].second;
bool hitWithNoCP = false;
// if(rhFraction ==0) continue;
delete commented-out code
auto assoc = std::count_if(
    std::begin(cpsInLayerCluster[lcId]),
    std::end(cpsInLayerCluster[lcId]),
    [](const auto &obj){return obj.second < 0.2;});
magic number should be named constant
continue;
}

// thickness = (rh_detid.det() == DetId::Forward || rh_detid.det() == DetId::HGCalEE || rh_detid.det() == DetId::HGCalHSi) ? recHitTools_->getSiThickness(rh_detid) : -1;
remove commented-out code
-1
Tested at: 0bec981
You can see the results of the tests here:
I found the following errors while testing this PR
Failed tests: RelVals AddOn
When I ran the RelVals I found an error in the following workflows:
8.0 step1 runTheMatrix-results/8.0_BeamHalo+BeamHalo+DIGICOS+RECOCOS+ALCABH+HARVESTCOS/step1_BeamHalo+BeamHalo+DIGICOS+RECOCOS+ALCABH+HARVESTCOS.log
9.0 step1 runTheMatrix-results/9.0_Higgs200ChargedTaus+Higgs200ChargedTaus+DIGI+RECO+HARVEST/step1_Higgs200ChargedTaus+Higgs200ChargedTaus+DIGI+RECO+HARVEST.log
25.0 step1 runTheMatrix-results/25.0_TTbar+TTbar+DIGI+RECOAlCaCalo+HARVEST+ALCATT/step1_TTbar+TTbar+DIGI+RECOAlCaCalo+HARVEST+ALCATT.log
7.3 step1 runTheMatrix-results/7.3_CosmicsSPLoose_UP18+CosmicsSPLoose_UP18+DIGICOS_UP18+RECOCOS_UP18+ALCACOS_UP18+HARVESTCOS_UP18/step1_CosmicsSPLoose_UP18+CosmicsSPLoose_UP18+DIGICOS_UP18+RECOCOS_UP18+ALCACOS_UP18+HARVESTCOS_UP18.log
158.0 step1 runTheMatrix-results/158.0_HydjetQ_B12_5020GeV_2018_ppReco+HydjetQ_B12_5020GeV_2018_ppReco+DIGIHI2018PPRECO+RECOHI2018PPRECO+ALCARECOHI2018PPRECO+HARVESTHI2018PPRECO/step1_HydjetQ_B12_5020GeV_2018_ppReco+HydjetQ_B12_5020GeV_2018_ppReco+DIGIHI2018PPRECO+RECOHI2018PPRECO+ALCARECOHI2018PPRECO+HARVESTHI2018PPRECO.log
1306.0 step1 runTheMatrix-results/1306.0_SingleMuPt1_UP15+SingleMuPt1_UP15+DIGIUP15+RECOUP15+HARVESTUP15/step1_SingleMuPt1_UP15+SingleMuPt1_UP15+DIGIUP15+RECOUP15+HARVESTUP15.log
1330.0 step1 runTheMatrix-results/1330.0_ZMM_13+ZMM_13+DIGIUP15+RECOUP15_L1TMuDQM+HARVESTUP15_L1TMuDQM/step1_ZMM_13+ZMM_13+DIGIUP15+RECOUP15_L1TMuDQM+HARVESTUP15_L1TMuDQM.log
25202.0 step1 runTheMatrix-results/25202.0_TTbar_13+TTbar_13+DIGIUP15_PU25+RECOUP15_PU25+HARVESTUP15_PU25/step1_TTbar_13+TTbar_13+DIGIUP15_PU25+RECOUP15_PU25+HARVESTUP15_PU25.log
10024.0 step1 runTheMatrix-results/10024.0_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2017_GenSimFull+DigiFull_2017+RecoFull_2017+HARVESTFull_2017+ALCAFull_2017/step1_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2017_GenSimFull+DigiFull_2017+RecoFull_2017+HARVESTFull_2017+ALCAFull_2017.log
10224.0 step1 runTheMatrix-results/10224.0_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2017PU_GenSimFull+DigiFullPU_2017PU+RecoFullPU_2017PU+HARVESTFullPU_2017PU/step1_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2017PU_GenSimFull+DigiFullPU_2017PU+RecoFullPU_2017PU+HARVESTFullPU_2017PU.log
10042.0 step1 runTheMatrix-results/10042.0_ZMM_13+ZMM_13TeV_TuneCUETP8M1_2017_GenSimFull+DigiFull_2017+RecoFull_2017+HARVESTFull_2017+ALCAFull_2017/step1_ZMM_13+ZMM_13TeV_TuneCUETP8M1_2017_GenSimFull+DigiFull_2017+RecoFull_2017+HARVESTFull_2017+ALCAFull_2017.log
10824.0 step1 runTheMatrix-results/10824.0_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2018_GenSimFull+DigiFull_2018+RecoFull_2018+HARVESTFull_2018+ALCAFull_2018+NanoFull_2018/step1_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2018_GenSimFull+DigiFull_2018+RecoFull_2018+HARVESTFull_2018+ALCAFull_2018+NanoFull_2018.log
11624.0 step1 runTheMatrix-results/11624.0_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2019_GenSimFull+DigiFull_2019+RecoFull_2019+HARVESTFull_2019+ALCAFull_2019/step1_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2019_GenSimFull+DigiFull_2019+RecoFull_2019+HARVESTFull_2019+ALCAFull_2019.log
21234.0 step1 runTheMatrix-results/21234.0_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D21_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D21+RecoFullGlobal_2023D21+HARVESTFullGlobal_2023D21/step1_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D21_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D21+RecoFullGlobal_2023D21+HARVESTFullGlobal_2023D21.log
20034.0 step1 runTheMatrix-results/20034.0_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D17_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D17+RecoFullGlobal_2023D17+HARVESTFullGlobal_2023D17/step1_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D17_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D17+RecoFullGlobal_2023D17+HARVESTFullGlobal_2023D17.log
101.0 step1 runTheMatrix-results/101.0_SingleElectronE120EHCAL+SingleElectronE120EHCAL/step1_SingleElectronE120EHCAL+SingleElectronE120EHCAL.log
250202.181 step1 runTheMatrix-results/250202.181_TTbar_13UP18+TTbar_13UP18+PREMIXUP18_PU25+DIGIPRMXLOCALUP18_PU25+RECOPRMXUP18_PU25+HARVESTUP18_PU25/step1_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D35_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D35+RecoFullGlobal_2023D35+HARVESTFullGlobal_2023D35.log
27434.0 step1 runTheMatrix-results/27434.0_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D35_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D35+RecoFullGlobal_2023D35+HARVESTFullGlobal_2023D35/step1_TTbar_14TeV+TTbar_14TeV_TuneCUETP8M1_2023D35_GenSimHLBeamSpotFull14+DigiFullTrigger_2023D35+RecoFullGlobal_2023D35+HARVESTFullGlobal_2023D35.log
I found errors in the following addon tests:
cmsDriver.py TTbar_Tauola_13TeV_TuneCUETP8M1_cfi -s GEN,SIM,DIGI,L1,DIGI2RAW --mc --scenario=pp -n 10 --conditions auto:run2_mc_GRun --relval 9000,50 --datatier "GEN-SIM-RAW" --eventcontent RAWSIM --customise=HLTrigger/Configuration/CustomConfigs.L1T --era Run2_2018 --fileout file:RelVal_Raw_GRun_MC.root : FAILED - time: date Thu Mar 7 18:08:44 2019-date Thu Mar 7 18:00:20 2019 s - exit: 34304
Comparison not run due to runTheMatrix errors (RelVals and Igprof tests were also skipped)
float totalCPEnergyOnLayer = 0.f;
if(maxCPId_byEnergy >=0) {
  totalCPEnergyOnLayer = cPOnLayer[maxCPId_byEnergy][lcLayerId].energy;
  energyFractionOfCPinLC = maxEnergySharedLCandCP/totalCPEnergyOnLayer;
this line generates a warning in the static analyzer
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-26099/33460/llvm-analysis/report-b756de.html#EndPath
There are 4 more https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-26099/33460/llvm-analysis/
It seems that setting and using these variables only for couts produces these warnings. We would like to keep the information in the code so we can use it when we need it for debugging purposes, if that is not a big problem.
Comparison is ready
Comparison Summary:
+1
+upgrade
@jfernan2,@andrius-k,@schneiml,@kmaeshima |
+1
+1
This pull request is fully signed and it will be integrated in one of the next master IBs (tests are also fine). This pull request will now be reviewed by the release team before it's merged. @davidlange6, @slava77, @smuzaffar, @fabiocos (and backports should be raised in the release meeting by the corresponding L2)
Running workflow 29034.0 (scenario D41 with the latest V10 HGCal geometry), dqmMemoryStats reports an addition of ~40 MB to the DQM output (as for D35 seen above):

@fabiocos Indeed, that's correct. Keep in mind, though, that we exclusively used ConcurrentMonitorElements, so this number will not be multiplied by the number of streams in the process.
+1
PR description:
This PR includes the work done in the context of the HGCal DPG with @rovere and @felicepantaleo.
The work was presented in the HGCal DPG [1-3] and adds a package (HGCalValidator) to validate
2D layer clusters ("hgcalLayerClusters"). HGCalValidator follows the MultiTrackValidator package, which is used by the
Tracker for validation purposes, due to its excellent design and sophisticated organisation of the code.
PR validation:
The package follows the MultiTrackValidator design, with HGCalValidator as a DQMGlobalEDAnalyzer and
a histogramming class (HGVHistoProducerAlgo) that books the histograms to be stored in the DQM file.
There are also python scripts (hgcalPlots.py, makeHGCalValidationPlots.py) for generating the final set
of plots, with an option to generate HTML pages.
Furthermore, code has been developed to evaluate the goodness of the layer clustering
(not the (3D) object reconstruction) by establishing links and metrics between CaloParticles and layer clusters.
After the harvesting step, you can run the makeHGCalValidationPlots.py script, e.g.:
Detailed information in these presentations:
[1] https://indico.cern.ch/event/803066/contributions/3341958/attachments/1807341/2950233/20190306_HGCAL_DPG_ImagingAlgoModification_MR.pdf
[2] https://indico.cern.ch/event/800319/contributions/3325657/attachments/1799432/2934489/hgcal_dpg_2_2019.pdf
[3] https://indico.cern.ch/event/800319/contributions/3325660/attachments/1799333/2934293/HGCalValidation_20Feb19_HGCalDPG.pdf
@felicepantaleo @rovere @cseez @malgeri