Tracking issue: Improving Assess for analyzing many crates #2138

Open
1 of 16 tasks
tedinski opened this issue Jan 20, 2023 · 0 comments
Labels
[C] Internal Tracks some internal work. I.e.: Users should not be affected.


tedinski commented Jan 20, 2023

This issue is meant to document improvements I think would be helpful after trying out assess in the wild:

Two new tables would be helpful:

  • Assess should classify failures to build/analyze packages #2058 - Presently you need to look through the log output, which is not hard, but it takes effort for something that should be easy to summarize. Further, it would be nice if we could link known failures to open issues on our repo. :)
  • New unsupported features table for assess #1819 - The second half of that issue. We now see unsupported_construct get hit by tests, so the unsupported features table really needs columns for features actually hit by tests, sorted by that data.
  • The "successful tests" table could be aggregated by file. With a lot of results (600+ tests), ignoring the individual tests and looking at the files containing any successful tests was the most interesting view. (Although longer term, aggregating by coverage might be more interesting...)
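The per-file aggregation above could be a simple grouping pass. A minimal sketch, assuming a hypothetical flat list of (file, test, outcome) records rather than Assess's actual internal representation (the `TestResult` shape and field names here are illustrative, not Assess's real data model):

```rust
use std::collections::BTreeMap;

// Hypothetical record shape; Assess's real data model may differ.
struct TestResult {
    file: &'static str,
    test: &'static str,
    success: bool,
}

// Collapse per-test results into a per-file count of successful tests,
// so a large run (600+ tests) can be skimmed file by file.
fn successes_by_file(results: &[TestResult]) -> BTreeMap<&'static str, usize> {
    let mut by_file = BTreeMap::new();
    for r in results {
        if r.success {
            *by_file.entry(r.file).or_insert(0) += 1;
        }
    }
    by_file
}

fn main() {
    let results = [
        TestResult { file: "src/lib.rs", test: "parses_ok", success: true },
        TestResult { file: "src/lib.rs", test: "rejects_bad", success: true },
        TestResult { file: "src/io.rs", test: "roundtrip", success: false },
    ];
    let agg = successes_by_file(&results);
    assert_eq!(agg.get("src/lib.rs"), Some(&2));
    // Files with no successful tests simply don't appear.
    assert_eq!(agg.get("src/io.rs"), None);
    println!("{agg:?}");
}
```

A `BTreeMap` keeps the output sorted by file path, which makes the aggregated table stable across runs.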

Things to investigate:

  • The "Reason for failure" table contains entries that look like missing_definition and ioctl together, which is odd: I'd expect one or the other, not both. This needs investigation; it's probably a bug in how we parse CBMC property results.
  • We might want to start tracking possible targets for "default stubs" that Kani (or assess) could ship with turned on by default. Some interesting possibilities so far: clock_gettime, uname, rdtsc, getrlimit, sysconf, __libc_current_sigrtmin
  • I had to add a memory limit with ulimit -v, but I don't see crashed/killed verifier runs reported separately in the test failure table. I'm not sure what assess is doing with them (possibly skipping them?) and need to look into it.
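On the last point, distinguishing a verifier subprocess that was killed by a signal (e.g. by the OOM killer, or by blowing a ulimit) from one that exited with a failure code is mechanical on Unix. A minimal Unix-only sketch, assuming the verifier is spawned as a subprocess; the `classify` function and its labels are illustrative, not Assess's actual reporting code:

```rust
use std::process::Command;

// Sketch: bucket a subprocess exit status into success / failure /
// crashed-or-killed, so killed runs could get their own table category.
#[cfg(unix)]
fn classify(status: std::process::ExitStatus) -> String {
    use std::os::unix::process::ExitStatusExt;
    match (status.code(), status.signal()) {
        (Some(0), _) => "success".to_string(),
        (Some(c), _) => format!("failed (exit code {c})"),
        // On Unix, code() is None when the child was killed by a signal.
        (None, Some(sig)) => format!("crashed/killed (signal {sig})"),
        (None, None) => "unknown".to_string(),
    }
}

#[cfg(unix)]
fn main() {
    // A child that kills itself with SIGKILL stands in for an OOM-killed run.
    let killed = Command::new("sh").arg("-c").arg("kill -9 $$").status().unwrap();
    assert_eq!(classify(killed), "crashed/killed (signal 9)");

    let ok = Command::new("sh").arg("-c").arg("exit 0").status().unwrap();
    assert_eq!(classify(ok), "success");
}

#[cfg(not(unix))]
fn main() {}
```

If assess is currently only looking at `status.code()`, killed runs would come back as `None` and could silently fall through, which might explain why they don't show up in the failure table.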

Problems:

Nice-to-haves:

  • There are a few flags, like --only-codegen, that we pass down through scan, but the list should be expanded: --all-features in particular is needed, along with -j and --ignore-global-asm.

Standing issues:
