Add warning that lists legacy modules if any are configured. #34577
Conversation
These are test modules in FWCore/MessageService. When run, their output logs are compared to reference files as part of unit tests, and those tests fail when the new warning listing configured legacy modules is added.
+code-checks Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-34577/24128
A new Pull Request was created by @wddgit (W. David Dagenhart) for master. It involves the following packages:
@makortel, @smuzaffar, @cmsbuild, @Dr15Jones can you please review it and eventually sign? Thanks. cms-bot commands are listed here.

please test

-1
Failed Tests: UnitTests
Unit Tests: I found errors in the following unit tests:
---> test RecoMTDDetLayersTestDriver had ERRORS
Comparison Summary: The workflows 140.53 have different files in step1_dasquery.log than the ones found in the baseline. You may want to check and retrigger the tests if necessary. You can check it in the "files" directory in the results of the comparisons.
Summary:
The unit test failure is
@fabiocos Can these two modules be converted to more efficient ones quickly, or shall we add this message to the reference log (which would then have to be changed again when these modules are converted)?

On the comparison differences, workflow 9.0 is showing differences across the board. I'm wondering if this could be related to #34448. I can't think of how this PR could cause those.

@cmsbuild, please test
Let's see if the 9.0 differences persist.
-1
Failed Tests: UnitTests
Unit Tests: I found errors in the following unit tests:
---> test RecoMTDDetLayersTestDriver had ERRORS
Comparison Summary:
Summary:

please test

-1
Failed Tests: UnitTests
Unit Tests: I found errors in the following unit tests:
---> test trackerMaterialAnalysisPlots had ERRORS
---> test materialBudgetTrackerPlots had ERRORS
---> test materialBudgetHGCalPlots had ERRORS
Comparison Summary:
Summary:
The unit test failures in the last test are also occurring in the IBs. The unit test failure in the initial round of testing is fixed now.

+1

The part of this PR outside of the Core involves the conversion of two test modules from legacy type to global type (plus in one of them I fixed a minor memory leak). Most of this PR is changes in the Core code.
#include <sstream>

#include "CLHEP/Random/RandFlat.h"

using namespace std;
using namespace edm;

- class MTDRecoGeometryAnalyzer : public EDAnalyzer {
+ class MTDRecoGeometryAnalyzer : public global::EDAnalyzer<> {
This module is using CLHEP::RandFlat::shoot and I do not see a HepRandomEngine:
https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideEDMRandomNumberGeneratorService
I'm not sure it's a good idea to have a global module in this case.
I thought about this, but if you look here:
https://gitlab.cern.ch/CLHEP/CLHEP/-/blob/develop/Random/src/Random.cc
You see the following line of code, which makes the engines thread local
(unless CLHEP_THREAD_LOCAL is somewhere defined as not thread local
and I think other things would be broken badly if that were the case).
static CLHEP_THREAD_LOCAL defaults* theDefaults = defaultsForAllThreads.createNewDefaults();
If the engines are thread local, there is no data race issue. It's OK for them to be used by global modules.
This test module is not using the random number service. That would
break replay and it means it's not getting seeds from the service, but that
is all independent of whether or not this should be global or one.
It could have issues with the seeds not generating independent
sequences...
That said, I could change it to "one" just to avoid thinking about this.
The test modules are used in only one single-threaded test. At some level, it doesn't matter which way we go.
To be clear, this is not supposed to enter any production workflow, it is just a standalone test. The random numbers are not supposed to compete with any other random number source as far as I can see
Yes. Given that the module is only used in a standalone unit test, it isn't worth the effort to fix it to use the random service...
If someone still prefers this to be a "one", let me know and I will change it (it wouldn't take more than a few minutes). I'm most interested in getting the PR merged quickly and don't really have a preference whether it is "one" or "global". My main concern is getting the unit test to pass.
+reconstruction
+Upgrade
The failure in the unit test is not related to this PR.

This pull request is fully signed and it will be integrated in one of the next master IBs (but tests are reportedly failing). This pull request will now be reviewed by the release team before it's merged. @silviodonato, @dpiparo, @qliphy, @perrotta (and backports should be raised in the release meeting by the corresponding L2)

+1

merge
PR description:
This PR adds a warning that lists the legacy modules configured in the current job. The warning prints through the MessageLogger at the LogSystem level; nothing prints if no legacy modules are configured.
This PR is split into 3 commits (at least as initially submitted). The 3rd commit adds the new warning message. The first two commits deal with unit test failures that are a consequence of this addition to the log output: some tests compare log output to a reference file and fail when they differ, so those tests need to be fixed if they configure legacy-type modules.
In the first commit, many modules in FWCore/MessageService are migrated from legacy type to global or one type. The behavior of the modules is not changed, though a significant amount of cleanup is mixed in. None of this should change the behavior or output of any of the test modules.
There are 3 unit tests in FWCore/Integration that start to fail. The second commit fixes these, mostly by updating the reference files.
I did not do a survey to find all legacy modules in the Framework; I only looked at those whose unit tests started to fail. Also note that we are intentionally leaving some legacy modules in the Framework so that legacy-module support keeps being exercised until it is completely dropped. I don't know how many legacy modules remain in the Framework or whether all of them are needed for this purpose.
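For context, a hedged illustration of the kind of job that would trigger the new warning (the module type "DemoLegacyAnalyzer" is hypothetical, used only to show where a legacy module appears in a configuration):

```python
import FWCore.ParameterSet.Config as cms

process = cms.Process("TEST")
process.source = cms.Source("EmptySource")
process.maxEvents = cms.untracked.PSet(input=cms.untracked.int32(1))

# "DemoLegacyAnalyzer" is a hypothetical label/type for illustration.
# Any module whose C++ class still derives from a legacy base type
# would be listed by the new LogSystem-level warning when the job runs.
process.demo = cms.EDAnalyzer("DemoLegacyAnalyzer")
process.p = cms.Path(process.demo)
```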
PR validation:
Mostly relies on existing unit tests. Some of these were updated to keep them working. Three tests were modified to check the output of the new warning by comparing a log file against a reference file containing the warning.