
Try cargo-nextest in CI for better failure reporting #787

Closed · david-crespo opened this issue Mar 18, 2022 · 7 comments
Assignees: sunshowers
Labels: Testing & Analysis

david-crespo (Contributor) commented Mar 18, 2022
As discussed in chat, our test output in CI makes it hard to figure out which tests failed. In one typical case, the failure was buried 500 lines above the bottom of the output. Removing --verbose did not help, as there is still a lot of noise. cargo-nextest has much nicer output (see below).

Testing this may be as simple as adding a `cargo install cargo-nextest` step to the CI config and updating the test command. Downloading a prebuilt binary would be faster, but building from source isn't prohibitively slow to start with.
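
A rough sketch of the CI change (shell commands only; exact placement in the buildomat job script is a guess):

```sh
# Build cargo-nextest from source; downloading a prebuilt binary would be faster.
cargo install cargo-nextest

# Then replace the existing `cargo test` invocation with:
cargo nextest run --workspace
```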


> Clean, beautiful user interface. nextest presents its results concisely so you can see which tests passed and failed at a glance.

(from https://nexte.st/index.html)

[screenshots of nextest's test output omitted]

jclulow (Collaborator) commented Mar 19, 2022

Would it be possible to have whichever test runner we use (cargo test, cargo nextest, etc.) output more structured information about tests, both passing and failing? If, in addition to the console output, the test runner were able to create a regular file with a structured report, it would be pretty easy to surface those results in the buildomat diagnostics. Something like TAP or JUnit test reports, perhaps?

I suspect there would be at least two benefits:

  • we wouldn't have to dig through the output to find failures; they could be lifted out by the CI machinery
  • we could report on pass/failure rates, durations, etc., over time as we collect more of these report files.
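
For concreteness, a JUnit report is just XML the CI machinery can parse; a minimal one looks roughly like this (suite and test names invented for illustration):

```xml
<testsuites>
  <testsuite name="nexus" tests="2" failures="1">
    <testcase name="test_instance_create" time="1.3"/>
    <testcase name="test_instance_delete" time="2.9">
      <failure message="assertion failed">...captured output...</failure>
    </testcase>
  </testsuite>
</testsuites>
```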

sunshowers (Contributor) commented:
Nextest author here—it has JUnit support for exactly that use case :) https://nexte.st/book/junit.html

david-crespo (Contributor, Author) commented:
@sunshowers Great! Can you do that with a CLI param instead? I don't want to make a whole config file for one thing, though if I have to, that's fine.

sunshowers (Contributor) commented:
No; we restricted it to the config file because we'd like the flexibility to add more JUnit-related options in the future without bloating the CLI. Hopefully it's not too much work, just a few lines of text.
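
For reference, something along these lines in .config/nextest.toml should do it (the profile name is your choice; the JUnit page linked above has the full set of options):

```toml
# .config/nextest.toml
[profile.ci.junit]
# Relative paths land under the target directory,
# e.g. target/nextest/ci/junit.xml.
path = "junit.xml"
```

Then run `cargo nextest run --profile ci` in CI to produce the report.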

david-crespo (Contributor, Author) commented:
Yeah, it’s fine. We have like 50 flags in our command that we can put in there instead.

sunshowers (Contributor) commented:
Now that I work at Oxide :) I'd love to take this on at some point after we ship.

sunshowers self-assigned this Jan 12, 2023
jordanhendricks (Contributor) commented:
Per #3683, it seems like we can close this.
