Understand what is leading to schema mismatches for PGO #51908
For the dynamic PGO cases (with SPMI) we should have both the Tier0 and the Tier1 compilations of a method in the collection. However, there is no explicit link between the two, so it is probably simplest to find a method with a distinctive name and hope there are just a few instances. It would also be good to track down a mismatch in a live run, if possible.

For static PGO mismatches we might want to tighten up the compatibility check, though the outcome is similar: we'd end up dropping the data. We still might prefer this, as it seems possible that presenting unrelated class profiles to the devirtualization machinery could have unexpected effects.

Another option to explore is whether we could make better use of "stale" PGO data; however, this seems tricky unless we can serialize or reconstruct the flowgraph (plus possibly other breadcrumbs) from the build where the data was collected.

All the profile data is currently keyed to IL offsets, which makes it fairly fragile. We could try to find a keying scheme that might survive minor edits -- say block numbers and call site indexes / descriptors.
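To make that last idea concrete, here is a minimal sketch of what a more edit-tolerant probe key might look like. All names are illustrative, not the actual schema types (the real managed schema lives in src/coreclr/tools/Common/Pgo/PgoFormat.cs):

```csharp
using System;

// Hypothetical probe key: rather than a raw IL offset, identify a probe by
// its flowgraph block ordinal plus the ordinal of the call/branch site within
// that block. Small IL edits that shift offsets but preserve block structure
// would leave such keys intact.
readonly struct ProbeKey : IEquatable<ProbeKey>
{
    public readonly int BlockNumber; // ordinal of the basic block in the flowgraph
    public readonly int SiteIndex;   // nth interesting site (call, branch) within the block

    public ProbeKey(int blockNumber, int siteIndex)
        => (BlockNumber, SiteIndex) = (blockNumber, siteIndex);

    public bool Equals(ProbeKey other)
        => BlockNumber == other.BlockNumber && SiteIndex == other.SiteIndex;

    public override bool Equals(object? obj) => obj is ProbeKey p && Equals(p);
    public override int GetHashCode() => HashCode.Combine(BlockNumber, SiteIndex);
}
```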
I'm wondering if we could go all the way back to source code line numbers when those are available. At least the AutoFDO paper seems to suggest that this is fairly robust for C++. I'm not sure we can access the PDBs like this during JIT, though.
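Outside the JIT, the mapping itself is straightforward to compute. A minimal sketch, assuming a portable PDB is available on disk, that builds an IL-offset-to-line map from a method's sequence points:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Reflection.Metadata;

static class SourceLineKeys
{
    // Map each IL offset that has a sequence point to its starting source
    // line. Profile records could then be keyed by line number, which the
    // AutoFDO experience suggests survives edits better than raw offsets.
    public static Dictionary<int, int> IlOffsetToLine(string pdbPath, MethodDefinitionHandle method)
    {
        using FileStream stream = File.OpenRead(pdbPath);
        using MetadataReaderProvider provider = MetadataReaderProvider.FromPortablePdbStream(stream);
        MetadataReader reader = provider.GetMetadataReader();

        var map = new Dictionary<int, int>();
        MethodDebugInformation debugInfo = reader.GetMethodDebugInformation(method);
        foreach (SequencePoint sp in debugInfo.GetSequencePoints())
            if (!sp.IsHidden)
                map[sp.Offset] = sp.StartLine;
        return map;
    }
}
```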
For the schema mismatches: I modified the jit so that when it does Tier0 instrumentation, it optionally asks the runtime for the pgo data back right after allocating the schema, and then compares the submitted and returned schemas. This tests round-tripping the schema through the runtime. While that may seem trivial, the schema is stored in compressed format, so this check actually verifies the compression/decompression and lookup logic in the runtime.

With this I am able to see schema match failures running a checked jit in ASP.NET scenarios -- though only in scenarios where some R2R code is also in play. If R2R is disabled, the schemas always match. I have SPMI collections for the runs where mismatches happened, so I should be able to replay what the jit sees. Will post an update shortly.
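The actual check lives in the native JIT, but the shape of the comparison is roughly the following. This is a C# sketch with a simplified record type; the field names loosely follow the managed PgoSchemaElem in PgoFormat.cs, not the real JIT code:

```csharp
// Simplified stand-in for a schema record: what kind of probe it is, where it
// sits in the IL, and how many counters it carries.
record struct SchemaElem(int ILOffset, int InstrumentationKind, int Count);

static class SchemaRoundTrip
{
    // Compare the schema the jit submitted against the schema the runtime
    // hands back after compressing and decompressing it. Any drift would
    // indicate a bug in the runtime's schema encoding or lookup logic.
    public static bool SchemasMatch(SchemaElem[] submitted, SchemaElem[] returned)
    {
        if (submitted.Length != returned.Length)
            return false;

        for (int i = 0; i < submitted.Length; i++)
        {
            if (submitted[i].ILOffset != returned[i].ILOffset ||
                submitted[i].InstrumentationKind != returned[i].InstrumentationKind ||
                submitted[i].Count != returned[i].Count)
                return false;
        }
        return true;
    }
}
```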
After analyzing further, the schema mismatches come from schemas captured in R2R images being matched against what the jit expects to see at Tier1. Here's one problematic method; the R2R dump shows the following PGO payload:
but looking at the method IL, offsets 22 (0x16) and 28 (0x1c) are not valid IL offsets (see the IL-walking sketch at the end of this comment).
During instrumentation the jit produces the following schema:
So there are 6 edge probes and 3 class probes, while from the R2R we see 8 edge probes. Now it could be that the version of this method that was instrumented had different IL; I don't know how to check this. But the source for this method has not changed much recently: runtime/src/libraries/System.Linq/src/System/Linq/Cast.cs Lines 21 to 30 in 01b7e73
So something seems to be going wrong with this method in either the ETL->MIBC or the MIBC->R2R processing. Possibly that first class probe at IL offset 0x20 (which has two schema records) gets munged into edge records, and the other two class probes get dropped? Or perhaps we're instrumenting a 5.0 version...? I'll see if I can find the relevant data in the profile stream.
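As an aside, one way to verify which IL offsets are valid is to walk the method body and enumerate instruction starts. A minimal sketch (simplified: it assumes well-formed IL and ignores exception regions; any offset it does not yield cannot begin an instruction):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;
using System.Reflection.Emit;

static class IlWalker
{
    // Lookup from opcode value to OpCode, built by reflecting over OpCodes.
    static readonly Dictionary<short, OpCode> s_opcodes = new();

    static IlWalker()
    {
        foreach (FieldInfo f in typeof(OpCodes).GetFields(BindingFlags.Public | BindingFlags.Static))
            if (f.GetValue(null) is OpCode op)
                s_opcodes[op.Value] = op;
    }

    // Enumerate every IL offset that begins an instruction in the body.
    public static IEnumerable<int> ValidOffsets(byte[] il)
    {
        int pos = 0;
        while (pos < il.Length)
        {
            yield return pos;
            short value = il[pos];
            if (value == 0xFE) // two-byte opcode prefix
                value = unchecked((short)(0xFE00 | il[pos + 1]));
            OpCode op = s_opcodes[value];
            pos += op.Size; // opcode bytes (1 or 2)
            pos += op.OperandType switch
            {
                OperandType.InlineNone => 0,
                OperandType.ShortInlineBrTarget or
                OperandType.ShortInlineI or
                OperandType.ShortInlineVar => 1,
                OperandType.InlineVar => 2,
                OperandType.InlineI8 or OperandType.InlineR => 8,
                OperandType.InlineSwitch => 4 + 4 * BitConverter.ToInt32(il, pos),
                _ => 4, // tokens, 32-bit ints/floats, branch targets
            };
        }
    }
}
```

Running this over the method body bytes (e.g. from MethodBase.GetMethodBody().GetILAsByteArray()) and checking whether 0x16 and 0x1c appear in the output confirms whether those offsets are instruction boundaries.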
The 5.0.5 System.Linq assemblies do not appear to have this method. So that seems to suggest that the IL that was instrumented should have been the same IL that is getting optimized, and so the schemas should match.
Here's the MIBC data
From the MIBC, I wonder if some other method's data is getting merged in here? Also note that in the MIBC there are class probes at offsets 32/58/111 which match the jit schema, but that don't make it into the R2R version. I suppose this happens because the type handles refer to unknown classes? So only the offset 23 class handle table is suspicious -- both because it does not seem to match up with the jit instrumentation, and because when pulled into the R2R format it gets "converted" into a spurious edge probe. I looked at the "constituent" MIBCs (for hello world, etc.) and they all show this offset 23 entry.
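To illustrate the unknown-classes hypothesis, the MIBC->R2R conversion would behave something like the sketch below, dropping class probes whose type handles cannot be resolved in the compilation. This is purely illustrative; the names and types are hypothetical, not the actual crossgen2 code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

enum ProbeKind { EdgeCount, TypeHandleHistogram }

// Hypothetical record shape for a single profile entry read from a .mibc file.
record MibcProbe(int ILOffset, ProbeKind Kind, string? TypeName);

static class R2RFilter
{
    // Keep edge probes unconditionally; keep class probes only when the
    // referenced type resolves in the current compilation. Unresolvable class
    // probes would silently disappear from the R2R image -- consistent with
    // the missing offset 32/58/111 entries observed above.
    public static IEnumerable<MibcProbe> FilterForR2R(
        IEnumerable<MibcProbe> probes, Func<string, bool> typeResolves)
        => probes.Where(p => p.Kind != ProbeKind.TypeHandleHistogram
                             || (p.TypeName is not null && typeResolves(p.TypeName)));
}
```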
After looking at some of the cases of pgo schema mismatch (dotnet#51908), the pgo schemas still seem to have partial validity, so by default the jit will now use the schema and data even for mismatches. Add the ability to optionally assert on mismatches. Also add a mode where the jit compares the schema it provided to the runtime with the schema the runtime gives back to the jit, to try and locate possible schema compression/decompression issues that might lead to the sorts of mismatches we're seeing. This turns out not to have found any issues; all the mismatches we see are from schemas provided to the jit via static PGO. But it seems like a useful capability to retain.
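Using mismatched schema data "partially" amounts to matching records by content rather than position. A hedged sketch of that idea, reusing the SchemaElem stand-in from the earlier sketch (the real logic is in the JIT's profile reconstruction code, and may differ):

```csharp
using System.Collections.Generic;

static class PartialSchemaUse
{
    // Index the runtime-provided schema by (IL offset, kind) so that records
    // which do line up can still be consumed even when the overall schemas
    // disagree in length or order.
    public static Dictionary<(int ILOffset, int Kind), SchemaElem> Index(SchemaElem[] schema)
    {
        var map = new Dictionary<(int, int), SchemaElem>();
        foreach (SchemaElem e in schema)
            map[(e.ILOffset, e.InstrumentationKind)] = e; // last writer wins on duplicates
        return map;
    }

    public static bool TryGetProbe(
        Dictionary<(int, int), SchemaElem> index, int ilOffset, int kind, out SchemaElem elem)
        => index.TryGetValue((ilOffset, kind), out elem);
}
```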
Not sure if this is what causes the problems above, but it looks to be wrong: runtime/src/coreclr/tools/Common/Pgo/PgoFormat.cs Lines 505 to 514 in 907fc3e
I ran the optimization scenarios locally, and the individual and scenario-merged MIBC files do not show the same data corruption I'm seeing above. Here's one:
but if I dump the PGO data from the System.Linq.dll in the SDK used to run these scenarios (6.0.0-preview.5.21226.1), it is not right: there should not be edge probes at offsets 22 and 28.
So I still can't figure out where things are going wrong...
Not sure if this is relevant, but there is a "nearby" method (one method def token higher) that has data that matches the "interference" seen above.
@davidwrighton see above for some details on the odd schema mismatches that show up in SDK-resident PGO data. I don't know if these were a one-off or are a persistent feature. I'll look again in a few days.
This allows dotnet-pgo to generate .mibc files using the sample data stored in the trace that it is processing. It implements support for both last branch record (LBR) data and normal IP samples. The latter can be produced using PerfView as normal, while the former currently requires using xperf with LBR mode enabled. For posterity, to enable both logging of the required .NET events and LBR, the following commands can be used (on Windows):
```
xperf.exe -start "NT Kernel Logger" -on LOADER+PROC_THREAD+PMC_PROFILE -MinBuffers 4096 -MaxBuffers 4096 -BufferSize 4096 -pmcprofile BranchInstructionRetired -LastBranch PmcInterrupt -setProfInt BranchInstructionRetired 65537 -start clr -on e13c0d23-ccbc-4e12-931b-d9cc2eee27e4:0x40000A0018:0x5 -MinBuffers 4096 -MaxBuffers 4096 -BufferSize 4096
scenario.exe
xperf.exe -stop "NT Kernel Logger" -stop clr -d xperftrace.etl
```
SPGO does not currently do well with optimized code, as the IP<->IL mappings the JIT produces there are not sufficiently accurate. To collect data in tier-0, one can enable two environment variables before running the scenario:
```
$env:COMPlus_TC_QuickJitForLoops=1
$env:COMPlus_TC_CallCounting=0
```
When samples are used the associated counts will not typically look valid, i.e. they won't satisfy flow conservation (a sketch of this invariant follows the comparison output below). To remedy this, dotnet-pgo performs a smoothing step after assigning samples to the flow-graph of each method. The smoothing is based on [1] and the code comes from Midori.

The commit adds some new commands to dotnet-pgo. The --spgo flag can be specified to create-mibc to use samples to create the .mibc file. Even if --spgo is specified, instrumented data will still be preferred if available in the trace; if --spgo is not specified, the behavior should be the same as before. --spgo-with-block-counts and --spgo-with-edge-counts control whether dotnet-pgo outputs the smoothed block or edge counts (or both). By default block counts are output. The JIT can use both forms of counts but will be most happy if only one kind is present for each method. --spgo-min-samples controls how many samples must be in each method before smoothing is applied and the result included in the .mibc. SPGO is quite sensitive to low sample counts and the produced results are not good when the number of samples is low. By default, this value is 50.

The commit also adds a new compare-mibc command that allows comparing two .mibc files. Usage is dotnet-pgo compare-mibc --input file1.mibc --input file2.mibc.
For example, comparing a .mibc produced via instrumentation and one produced via SPGO (in tier-0) for some JIT benchmarks produces the following:
```
Comparing instrumented.mibc to spgo.mibc

Statistics for instrumented.mibc
# Methods: 3490
# Methods with any profile data: 865
# Methods with 32-bit block counts: 0
# Methods with 64-bit block counts: 865
# Methods with 32-bit edge counts: 0
# Methods with 64-bit edge counts: 0
# Methods with type handle histograms: 184
# Methods with GetLikelyClass data: 0
# Profiled methods in instrumented.mibc not in spgo.mibc: 652

Statistics for spgo.mibc
# Methods: 1107
# Methods with any profile data: 286
# Methods with 32-bit block counts: 286
# Methods with 64-bit block counts: 0
# Methods with 32-bit edge counts: 0
# Methods with 64-bit edge counts: 0
# Methods with type handle histograms: 0
# Methods with GetLikelyClass data: 0
# Profiled methods in spgo.mibc not in instrumented.mibc: 73

Comparison
# Methods with profile data in both .mibc files: 213
Of these, 213 have matching flow-graphs and the remaining 0 do not
When comparing the flow-graphs of the matching methods, their overlaps break down as follows:
100% █ (1.9%)
>95% █████████████████████████████████▌ (61.0%)
>90% ████████ (14.6%)
>85% ████▏ (7.5%)
>80% ████▋ (8.5%)
>75% █▊ (3.3%)
>70% █ (1.9%)
>65% ▎ (0.5%)
>60% ▎ (0.5%)
>55% ▏ (0.0%)
>50% ▏ (0.0%)
>45% ▏ (0.0%)
>40% ▎ (0.5%)
>35% ▏ (0.0%)
>30% ▏ (0.0%)
>25% ▏ (0.0%)
>20% ▏ (0.0%)
>15% ▏ (0.0%)
>10% ▏ (0.0%)
> 5% ▏ (0.0%)
> 0% ▏ (0.0%)
(using block counts)
```
I also made the dump command print some statistics about the .mibc that was dumped. Hopefully some of this tooling can help track down dotnet#51908.

[1] Levin R., Newman I., Haber G. (2008) Complementing Missing and Inaccurate Profiling Using a Minimum Cost Circulation Algorithm. In: Stenström P., Dubois M., Katevenis M., Gupta R., Ungerer T. (eds) High Performance Embedded Architectures and Compilers. HiPEAC 2008. Lecture Notes in Computer Science, vol 4917. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-77560-7_20
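The flow-conservation property that the smoothing step restores can be stated as a simple check: for every block other than the entry and exit, the incoming edge counts must sum to the outgoing edge counts. A minimal sketch with illustrative types (not dotnet-pgo's actual representation):

```csharp
using System.Collections.Generic;
using System.Linq;

record Edge(int From, int To, long Count);

static class FlowCheck
{
    // Raw sample counts generally break this invariant, which is what the
    // minimum-cost-circulation smoothing in dotnet-pgo repairs.
    public static bool IsFlowConsistent(IReadOnlyList<Edge> edges, int entry, int exit)
    {
        var inFlow = new Dictionary<int, long>();
        var outFlow = new Dictionary<int, long>();
        foreach (Edge e in edges)
        {
            outFlow[e.From] = outFlow.GetValueOrDefault(e.From) + e.Count;
            inFlow[e.To] = inFlow.GetValueOrDefault(e.To) + e.Count;
        }
        foreach (int block in inFlow.Keys.Union(outFlow.Keys))
        {
            if (block == entry || block == exit)
                continue;
            if (inFlow.GetValueOrDefault(block) != outFlow.GetValueOrDefault(block))
                return false;
        }
        return true;
    }
}
```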
I'm not seeing these anymore, so I will close this.
#49793 disabled an assert in EfficientEdgeCountReconstructor::Propagate() that was checking for schema mismatches. Re-enabling it leads to about 4600 mismatch reports from recent SPMI collections. We should understand what is behind these mismatches, as they cause us to throw away PGO data and miss out on optimization opportunities. Possible causes:
Initial focus should be on the dynamic case where we can be fairly certain that the IL is the same and that mismatches should generally never happen.