Testing strategy for rustc_save_analysis #34481
Comments
cc @nrc. What do you think of the testing approach? Who can I ask for feedback on the integration with the testing system?
Looks reasonable to me. Agree it should be its own set of tests.
@brson What should I do to let the build system run the tests? Do we need to mess with make files?
Agree on a separate set of tests. Beyond that I think there are a few fundamental questions to address, starting with: what output should be tested? We can test the API, which is probably the lowest level to test; we could test the internal dumpers, which is probably the maximal amount of info to test with the minimal amount of processing; or we could test the output of -Zsave-analysis, which is probably the easiest thing to test (since it needs the least automation) but is the maximal amount of processing (which I guess might also be an advantage).

We then need to think about how tests can be generated. My worry here is that we'll put a lot of effort into the infrastructure, but that without the leg-work to actually write individual tests, it will go to waste.

I think my preference is to test the -Zsave-analysis output. This should be quite easy and ergonomic: a test file would be the program to be compiled plus a JSON struct which must be present in the output (or at least there must be a super-struct of it in the output). I think that makes tests easiest to hand-write and auto-generate. It might also be nice to have some way to specify spans by inline comments, rather than having to spell them out inside the JSON, but that could be step 2.
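As a rough sketch of what such a test file could look like (the `@expect-json` marker and the JSON field names are invented for illustration, not an existing convention):

```rust
// The program to be compiled, plus a JSON fragment that must appear as a
// sub-structure of the -Zsave-analysis output. The `@expect-json` marker and
// the field names shown here are hypothetical.
// @expect-json { "defs": [{ "kind": "Function", "name": "foo" }] }

fn foo() -> u32 {
    42
}

fn main() {
    let _ = foo();
}
```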
cc @rust-lang/tools
We've had a pretty big amount of success so far with keeping the process of adding a test lightweight.

So along those lines the testing strategy sounds great to me; this'll probably just be a small addition to the existing test harness. I agree with @nrc, though, that we wouldn't want the test cases to be too onerous to write, but @nikomatsakis has drummed up a script for the ui tests where you can generate the expected output, so perhaps that could be done here as well? That is, have a script that can generate the output once, so it can be hand-verified, and then it prevents regressions?
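As a minimal sketch of that record-once-then-compare workflow (the `BLESS` environment variable, file paths, and function names here are assumptions, not an existing interface):

```rust
// Sketch of "generate once, hand-verify, then guard against regressions":
// running with BLESS=1 rewrites the expected file; otherwise the test
// compares the current output against the recorded baseline.
use std::{env, fs, path::Path, process};

fn check_or_bless(expected_path: &Path, actual: &str) {
    if env::var_os("BLESS").is_some() {
        // Record the current output as the new expected baseline.
        fs::write(expected_path, actual).expect("failed to write expected output");
        return;
    }
    let expected = fs::read_to_string(expected_path).unwrap_or_default();
    if expected != actual {
        eprintln!("save-analysis output differs from {}", expected_path.display());
        eprintln!("verify the new output by hand, then rerun with BLESS=1");
        process::exit(1);
    }
}

fn main() {
    // Hypothetical usage: `actual` would be the JSON emitted by -Zsave-analysis.
    let actual = r#"{"defs":[]}"#;
    check_or_bless(Path::new("expected/basic.json"), actual);
}
```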
I have added a WIP implementation in #34522.
Jotting down my recent thoughts on testing the save-analysis output (we still have nothing beyond the smoke tests in run-make).

Requirements:
I think an ideal way of doing this is to have a test directory with one program per file, then use comments to identify spans and the info we expect at a given span. However, I'm not sure how to specify more advanced info, such as relating a def to a ref, or that we expect some info to be the children of other info, etc.

An alternative would be to match source files with expected output files (by filename, I expect), but I think we'd need to normalise for whitespace and other JSON structure and to allow wildcards and possibly variables (e.g., say that the id field of a def is the same as a ref's without specifying the value). We'd probably want the expected output to be a minimum expected set, rather than a complete set, too.
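One way the def-to-ref relation could be expressed with inline comments is sketched below; the `@def`/`@ref` directives and label names are invented for illustration, and the harness would only check that each `@ref` resolves to the id of the matching `@def`, without spelling out the numeric id:

```rust
// Hypothetical comment-based annotations: `@def <label>` names the definition
// on this line, and `@ref <label>` asserts that the reference on this line
// points at that definition in the save-analysis output.
struct Point { x: u32 }          // @def point_struct

fn area(p: &Point) -> u32 {      // @ref point_struct
    p.x
}

fn main() {
    let p = Point { x: 3 };      // @ref point_struct
    let _ = area(&p);
}
```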
Save-analysis has been removed from the compiler. |
Context
There is currently only one test for `save_analysis`. This test ensures that the compiler doesn't crash when dumping crate information as a text file. The lack of further tests allows bugs to be (re)introduced (e.g. #33213) and makes it difficult to make modifications to the API with confidence. It is important to increase the reliability of `save_analysis`, since the future RLS will be implemented on top of it.

Requirements
Ideally, we should come up with a clear testing architecture to deal with this problem. It would need to exercise a dumper that implements the `Dump` trait and collects all available information about a program in `Vec`s (see below).
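As a minimal sketch of that idea, assuming a simplified stand-in for the real `Dump` trait (the method names and data shapes below are illustrative, not the actual rustc_save_analysis API):

```rust
// A test dumper that accumulates everything it is handed into Vecs, so a
// test can assert on the collected data directly. The trait and the data
// types here are simplified stand-ins for the real save-analysis interfaces.
#[derive(Debug)]
struct DefData { name: String, span: (usize, usize) }

#[derive(Debug)]
struct RefData { target: String, span: (usize, usize) }

trait Dump {
    fn def(&mut self, data: DefData);
    fn reference(&mut self, data: RefData);
}

#[derive(Default)]
struct TestDumper {
    defs: Vec<DefData>,
    refs: Vec<RefData>,
}

impl Dump for TestDumper {
    fn def(&mut self, data: DefData) { self.defs.push(data); }
    fn reference(&mut self, data: RefData) { self.refs.push(data); }
}

fn main() {
    // A test would run the compiler's dump pass with this dumper and then
    // inspect the collected Vecs; here we just exercise it directly.
    let mut dumper = TestDumper::default();
    dumper.def(DefData { name: "foo".into(), span: (1, 4) });
    dumper.reference(RefData { target: "foo".into(), span: (7, 13) });
    assert_eq!(dumper.defs.len(), 1);
    assert_eq!(dumper.refs.len(), 1);
}
```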
Integrating with rustc's testing system

Currently, the only test is in `src/test/run-make/save-analysis`. This makes sense for checking whether ICEs are triggered, but is probably unsuitable for the fine-grained testing approach described above.

We could follow the approach of rustdoc and the pretty printer. For instance, we could add a new directory (`src/test/save-analysis`) and put the tests there. We should probably begin with single-file tests.

A possible way to encode the constraints of the tests is through comments, as shown below:
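(A sketch of such an annotated test; the `def:`/`ref:` directive wording is an assumption, while the `^^^` markers and explicit `line.column` spans follow the notation described next.)

```rust
fn foo(x: u32) -> u32 {
// ^^^ def: function "foo"
    x + 1
}

fn main() {
    let y = foo(41);
    // 7.13-7.15 ref: function "foo" (explicit span: line 7, columns 13-15)
    let _ = y;
}
```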
Notation:
`^^^`: the span of the item.
`1.3-1.5`: a span beginning on the third column of the first line and ending on the fifth column of the first line.