Update data files for DropBoxMetadata tag #40522
Conversation
+code-checks Logs: https://cmssdt.cern.ch/SDT/code-checks/cms-sw-PR-40522/33727
A new Pull Request was created by @malbouis for master. It involves the following packages:
@malbouis, @yuanchao, @cmsbuild, @saumyaphor4252, @ggovi, @tvami, @ChrisMisan, @francescobrivio can you please review it and eventually sign? Thanks. cms-bot commands are listed here |
please test |
+1 Summary: https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-b59c0c/29986/summary.html Comparison Summary |
@malbouis
the corresponding tag for prod for the strip bad components has
and it has been set like this on purpose.
? |
Hi Marco, as far as I understand, the Prep DB tag is only used for the replays in this case. All replays do the upload to the Prep DB. If the synchronisation of the tag being uploaded by a replay is pcl, it will not be possible to do the upload as the run from the replay is in the past, therefore the change in this PR. I don't see anything wrong with it as we are not touching the Prod DB. Is the Prep DB tag from the strips bad component used for anything other than the replays? If so, I would recommend moving it to Prod DB. Thanks. |
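The rejection mechanism described above (a pcl-synchronized tag refusing an IOV for a run that is already in the past) can be sketched as follows. This is an illustrative model only, not the actual CMS conditions uploader code; the function name, the synchronization labels, and the run numbers are assumptions for the sake of the example.

```python
# Illustrative sketch (NOT the real CMS conditions uploader) of why a tag
# with "pcl" synchronization rejects uploads coming from a Tier0 replay:
# the IOV "since" of a replayed (old) run starts before the first run that
# is still safe to modify, so the upload can never be accepted.

def upload_allowed(synchronization: str, iov_since: int, first_safe_run: int) -> bool:
    """Return True if an IOV starting at `iov_since` may be uploaded."""
    if synchronization == "any":
        return True  # open synchronization: no restriction on the IOV
    # "pcl"-like synchronization: the IOV must not start before the
    # first condition-safe run.
    return iov_since >= first_safe_run

# A Tier0 replay re-processes an old run, so its IOV lies in the past:
assert upload_allowed("pcl", iov_since=320000, first_safe_run=360000) is False
# A prep tag with open synchronization accepts the same upload:
assert upload_allowed("any", iov_since=320000, first_safe_run=360000) is True
```

This is the crux of the PR: in Prep DB, where only replays write, the restrictive synchronization buys nothing and only produces the rejection.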
I think the intention of the original author was to have the same setup as used in production. Is it relevant that replays re-upload the same conditions on the prep tag?
I don't think so |
We would like to always make sure the upload from Tier0 is successful. At the moment, we are getting an error from the Tier0 uploader when doing the replays and this is sub-optimal.
Then we are good, right? Since we would like to have a successful upload from the Tier0 uploader during the replays, we could use this new tag created in this PR. |
well, I am still unsure why you want to have a successful upload. The uploads were rejected without any problems in the past 10+ years of operations. |
I think it is an extra layer of checks that all is as expected in a replay. If the tag is not used for anything else, I also don't see why it cannot be used for the original intent (replays). |
the fact that the upload is rejected and that follows the rules of the |
But if the Prep DB is for the replays, I don't see how someone would like to even out Prep and Prod in the future, knowing that Prep DB is used only for the replays. Even if they do, the synchronisation of the prod tags should be kept in any case as we cannot have a tag with 'any' synch in prod. I honestly don't see how someone would try to even out the tags between prep and prod. And I still think it is more productive to have a successful upload (to eventually check the uploaded conditions in prep; it doesn't have to be at every replay, but it might be useful at some point) than just having a failure and getting used to ignoring upload failures. |
So - you plan to leave the prep tag unsynchronized? Mmh, not a very realistic setup. |
hi @mmusich
As you know, this year we had a few replays that were "successful", but when we plugged them into real operations we had issues. Those issues could have been discovered by reading the logs. This situation triggered the T0 team to take warnings/errors in the logs more seriously. We'd prefer not to have this error come up in all the replay cases... What do you think? |
OK, that's good to know.
sometimes, one does "preplays" though, right? Especially when you are not really sure if Prompt will come out right or not. I saw that happening at least a couple of times in the past. |
I think that in this specific case of the replays the ... To summarize:
I would like, if possible, to move forward with this PR. Marco, please let us know if there is any objection from your side. |
This pull request is fully signed and it will be integrated in one of the next master IBs (tests are also fine). This pull request will now be reviewed by the release team before it's merged. @perrotta, @dpiparo, @rappoccio (and backports should be raised in the release meeting by the corresponding L2) |
I wouldn't say we have converged; rather, we have established that we have different perceptions of what actually matters to test in a replay. Having said that, this PR is rather inconsequential, as the actual change in the Tier0 replay behavior hinges on a change of the Global Tag used there and not on the content of the release. |
hold
Pull request has been put on hold by @rappoccio |
Ciao @mmusich,
but the synchronization checks are not a feature of the Tier0 daemon, right?
Yes, I think that's the case 😄 As Helena mentioned already earlier in the thread, with this PR we want to make sure the replays are able to upload conditions at each replay, i.e. to check that there is no issue with the upload itself (e.g. possible authentication failures, old passwords in Tier0, etc., all cases that happened in 2021 after the LS2 break). |
Hi @rappoccio
I see this comment from Marco as no objection to getting this PR merged |
well, no.
why not? Part of the PCL concept hinges precisely on the "too late to be accepted" (see detailed explanation at #40522 (comment))
all of this can be checked as well with an upload failure - just read the logs :) |
but the point is that if the T0 team makes errors pop up based on the logs, then all replays will be unsuccessful due to a known feature |
Certainly the log parsing can be adapted to distinguish between expected failure and real failure. |
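The adaptation suggested above could look roughly like the following. This is a hypothetical sketch: the pattern strings and function names are invented for illustration and do not correspond to the real Tier0 uploader output or log-parsing code.

```python
import re

# Hypothetical sketch of how replay log parsing could separate the
# expected pcl-synchronization rejection from a genuine upload failure
# (e.g. authentication problems). The message patterns below are
# illustrative assumptions, not real Tier0 uploader log lines.
EXPECTED_PATTERNS = [
    re.compile(r"too late to be accepted", re.IGNORECASE),
    re.compile(r"pcl synchronization.*rejected", re.IGNORECASE),
]

def classify_upload_failure(log_line: str) -> str:
    """Label a failed-upload log line as 'expected' or 'real'."""
    for pattern in EXPECTED_PATTERNS:
        if pattern.search(log_line):
            return "expected"
    return "real"

assert classify_upload_failure("upload rejected: too late to be accepted") == "expected"
assert classify_upload_failure("authentication failure: old password") == "real"
```

The design trade-off being debated in the thread is precisely this: either the parser learns to whitelist the expected rejection, or the prep tags are changed so the rejection never happens, and this PR takes the second route.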
@mmusich , AlCaDB would really like to have this PR merged, as we have been dealing with many replays and this would ease the evaluation of the replays for us. |
@malbouis, as far as I can tell nothing is holding this PR. |
@rappoccio |
unhold
This pull request is fully signed and it will be integrated in one of the next master IBs (tests are also fine). This pull request will now be reviewed by the release team before it's merged. @perrotta, @dpiparo, @rappoccio (and backports should be raised in the release meeting by the corresponding L2) |
+1 |
PR description:
This is to update the json files that are used to produce the DropBoxMetadata tag. Some of the tags (SiStripBadChannel_PCL_v1_prompt, BeamSpotObjects_PCL_byRun_v1_prompt, BeamSpotObjects_PCL_byLumi_v1_prompt) had pcl synchronisation in Prep DB and this was preventing upload from Tier0 replays. This PR replaces the faulty tags and also updates a few other ones to have a different name between Prod and Prep tags, to try to avoid such a case of wrong synch in Prep DB in the future. The difference between the new DMD tag produced with the changes introduced here and the previous one can be seen in https://cern.ch/go/LC9S
The new DMD tag has been included in the 126X_dataRun3_Express_Queue and will be picked up when a new Express GT is created.
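The shape of the change can be pictured with a simplified record like the one below. This is a hypothetical illustration, not the real DropBoxMetadata JSON schema: the field names, the prep tag name, and the database connection strings are assumptions; the only point it encodes is that Prod and Prep now use different tag names, with the restrictive synchronization kept only on the Prod side.

```python
# Hypothetical, simplified view of one DropBoxMetadata-style record after
# this PR (field names and the "_prep" tag name are invented for
# illustration; consult the actual json files in the release for the
# real schema).
record = {
    "record": "SiStripBadStripRcd",
    "prod": {
        "destinationDatabase": "oracle://cms_orcon_prod/CMS_CONDITIONS",
        "tag": "SiStripBadChannel_PCL_v1_prompt",  # keeps pcl sync in Prod
    },
    "prep": {
        "destinationDatabase": "oracle://cms_orcoff_prep/CMS_CONDITIONS",
        # distinct tag name (hypothetical), with open sync so replay
        # uploads of past runs are not rejected
        "tag": "SiStripBadChannel_PCL_v1_prompt_prep",
    },
}

# The safeguard this PR introduces: prep and prod tag names differ, so a
# wrong synchronization can no longer be copied from one to the other.
assert record["prod"]["tag"] != record["prep"]["tag"]
```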
PR validation:
126X_dataRun3_Express_Queue
If this PR is a backport please specify the original PR and why you need to backport that PR. If this PR will be backported please specify to which release cycle the backport is meant for:
Not a backport.