New unsupported features table for assess #1819

Closed
tedinski opened this issue Oct 31, 2022 · 0 comments

Using the MIR linker with assess changes what we codegen (to just the code reachable from the assessed crate), and consequently changes which "unsupported features" are hit during codegen (unreachable code is never codegen'd). This is a change from the legacy linker, and merits thinking about what we want to see in the assess report.

I think this change is probably a positive one: we're assessing the crate being built, so if there's code in dependencies that's never used, we shouldn't complain about it. But it makes it hard to reproduce our old metrics for unsupported code in crates, and it raises the question of what information the unsupported features table should report.

I think we want to see two tables:

  1. An early report on which unsupported features were found during codegen, as we do now, but with new columns. Currently we report two columns: number of crates affected, and total instances of the construct. I think we should instead report instances in the primary crate, instances potentially reachable in dependencies, and maybe also a total.
  2. Later on, after running tests, we'll also want to see the unsupported features we actually hit in the tests. I'm not sure yet whether this should be its own table, or a reproduction of the first table with a new column (and a new sort order on that column).
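As a rough illustration of what the first table could look like, here is a minimal sketch in Rust. The type and field names (`FeatureRow`, `in_primary_crate`, `reachable_in_deps`) are hypothetical, not Kani's actual types, and the sample data is made up:

```rust
// Hypothetical row of the proposed unsupported-features table.
// Names and data are illustrative only.
#[derive(Debug)]
struct FeatureRow {
    construct: String,
    in_primary_crate: usize,  // instances in the crate being assessed
    reachable_in_deps: usize, // instances potentially reachable in dependencies
}

impl FeatureRow {
    // The "maybe also total?" column from the proposal.
    fn total(&self) -> usize {
        self.in_primary_crate + self.reachable_in_deps
    }
}

fn main() {
    let mut rows = vec![
        FeatureRow { construct: "inline assembly".into(), in_primary_crate: 0, reachable_in_deps: 3 },
        FeatureRow { construct: "SIMD intrinsic".into(), in_primary_crate: 2, reachable_in_deps: 1 },
    ];
    // Sort by instances in the primary crate, descending, since those
    // matter most to the user assessing their own crate.
    rows.sort_by(|a, b| b.in_primary_crate.cmp(&a.in_primary_crate));
    println!("{:<20} {:>8} {:>8} {:>8}", "construct", "primary", "deps", "total");
    for r in &rows {
        println!("{:<20} {:>8} {:>8} {:>8}", r.construct, r.in_primary_crate, r.reachable_in_deps, r.total());
    }
}
```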

I believe we should then be able to reproduce our original metrics: "instances in primary crate" (run for every crate and then aggregated) should be enough to compute the original metric, since originally we also skipped anything that was unreachable from "pub" items.
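The aggregation step could be as simple as summing the per-crate "primary crate" counts per construct. A minimal sketch, with a hypothetical `aggregate` helper and made-up counts:

```rust
use std::collections::HashMap;

// Hypothetical aggregation: given per-crate counts of unsupported
// constructs (each map is one crate's "instances in primary crate"
// column), sum per construct across all assessed crates.
fn aggregate(per_crate: &[HashMap<String, usize>]) -> HashMap<String, usize> {
    let mut totals: HashMap<String, usize> = HashMap::new();
    for counts in per_crate {
        for (construct, n) in counts {
            *totals.entry(construct.clone()).or_insert(0) += n;
        }
    }
    totals
}

fn main() {
    let crate_a = HashMap::from([("inline assembly".to_string(), 2)]);
    let crate_b = HashMap::from([
        ("inline assembly".to_string(), 1),
        ("SIMD intrinsic".to_string(), 4),
    ]);
    let totals = aggregate(&[crate_a, crate_b]);
    println!("{:?}", totals);
}
```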
