cdash_analyze_and_report.py: summary statistics mode #580
Copied to new issue as discussed here: #578 (comment)
@arghdos, thanks for adding this issue :-)
Can you mock up what you are wanting this to look like? Note that one can write auxiliary tools as well that use the data downloaded by cdash_analyze_and_report.py.
Note that it is fine if auxiliary tools use more than just standard Python 3.x (we just don't want the core functionality of the tool itself to depend on anything beyond standard Python 3.x).
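For example, an auxiliary tool could be as simple as something like this (just a sketch; the cache file name and data shape here are assumptions, not the tool's documented format):

```python
# Hypothetical sketch of an auxiliary tool that post-processes test data
# already downloaded by cdash_analyze_and_report.py. The cache file name
# and the shape of the records are assumptions, not the documented format.
import json

def load_tests_lod(cache_file):
    """Load a tests list-of-dicts (testsLOD) from a cached JSON file."""
    with open(cache_file, "r") as f:
        return json.load(f)

if __name__ == "__main__":
    tests_lod = load_tests_lod("cdash_queries_cache/tests_cache.json")
    print(f"Loaded {len(tests_lod)} test records")
```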
We have a similar breakdown into subprojects with Trilinos packages. For Trilinos, all tests are prefixed with the package name, so it is easy to categorize them just from the test name. However, if you need access to the labels, it looks like that will require a CDash extension, as the current test query REST API does not return test labels.
(For example, see here). Adding test labels is something we can ask Kitware to put in for us (and in fact, that is already on our CDash backlog, which is a very long list).
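For the name-prefix approach, the categorization itself is trivial; a rough sketch (the `testname` field is an assumption about the shape of the test dicts):

```python
# Minimal sketch: bucket tests by package-name prefix, assuming tests are
# named like "<PackageName>_<rest_of_test_name>". The 'testname' field is
# an assumption about the shape of each test dict.
from collections import defaultdict

def group_tests_by_prefix(tests_lod, sep="_"):
    groups = defaultdict(list)
    for test in tests_lod:
        prefix = test["testname"].split(sep, 1)[0]
        groups[prefix].append(test)
    return groups
```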
That is similar to what we started for GitHub Issues with the Grover tool described in:

The initial implementation (i.e. "Minimum Viable Product") of Grover is very simple and just gives the status of the tests associated with each issue tracker on a regular (weekly) basis. With that tool, most of the heavy lifting is done with code in the module CDashQueryAnalyzeReport.py. For example, the class CDashQueryAnalyzeReport.IssueTrackerTestsStatusReporter is used in the Grover tool to create the HTML text for the status of the tests to add to a GitHub Issue comment. (There is some more work to be done with these and related tools to make this a more sustainable process.) For our use case, some of those remaining features are listed in the remaining Tasks in trilinos/Trilinos#3887 (comment).

But that project ended and the effort to maintain a larger set of customer-focused Trilinos builds and tests went away (due to lack of funding and staffing). Therefore, there has not been much development on the tool in a couple of years (but there are some internal customers using it). But there is some hope of continuing that work with some other internal customers in FY24 (hence, good timing for this interaction).
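Just to give a flavor, a Grover-like driver is roughly shaped like the following (the import is the real TriBITS module, but the reporter method names below are hypothetical placeholders, since Grover itself is not open-source):

```python
# Hypothetical sketch of a Grover-like driver. CDashQueryAnalyzeReport.py
# is the real TriBITS module, but the method names on the reporter below
# are placeholders inferred from the description above, not a documented API.
import CDashQueryAnalyzeReport as CDQAR

def build_issue_comment_html(tests_lod):
    reporter = CDQAR.IssueTrackerTestsStatusReporter()
    # Assumed entry point: summarize the status of the tests associated
    # with one issue tracker (method name is hypothetical).
    reporter.reportIssueTrackerTestsStatus(tests_lod)
    # Assumed getter for the generated HTML (also hypothetical).
    return reporter.getIssueTrackerTestsStatusHtml()
```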
Agreed -- I can take a look at this over the next week or two. I like the idea of having multiple related tools that can drive separate reports / functionality while using the same module. Point noted about the wrinkle of extra dependencies. I can perhaps use some of those tools (matplotlib, tabulate, pandas would be my list) as part of the mock-up and learn how the current table generation works; we can decide where it should live after the fact.
Ahhh, I had been experimenting with direct DB access before this, and assumed everything visible there was also visible via REST. I have been enforcing a strict naming convention, however, so I should be OK relying on a string-parsing technique similar to the one you mention for Trilinos. Will have to take a look at Grover as well :)
I would try to avoid direct DB access because Kitware does make changes to the DB schema from time to time. The REST API is supposed to be a more stable way to access the data (and they are still on 'api/v1' some 9 years after they first added that feature). It is not a big deal to have them add labels and a few other fields to the test query REST API.
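For example, hitting the test query endpoint needs only the standard library; a sketch (the exact query parameters are assumptions, so check the CDash docs for the real set):

```python
# Minimal sketch: query a CDash server's test-query REST endpoint using
# only the standard library. queryTests.php is the endpoint the TriBITS
# tooling uses; the specific query parameters shown are assumptions.
import json
import urllib.request

def query_tests(cdash_url, project, date):
    url = (f"{cdash_url}/api/v1/queryTests.php"
           f"?project={project}&date={date}")
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))
```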
Currently Grover is not open-source so you can't see the full tool. But the HTML text for the GitHub comment is completely created by the class CDashQueryAnalyzeReport.IssueTrackerTestsStatusReporter. The code that is unique to Grover just does the communication with GitHub to take that text and put it in the GitHub issue comment. The only other bit of code called by Grover reads in the tests list of dicts (testsLOD) from a file (a file written by the cdash_analyze_and_report.py tool).
For my use case, I am interested in reporting summary statistics, e.g., # of passing tests, # of failed / missing tests, and a % pass rate, broken down over time for various issue tracker types (read: JIRA instances), subprojects, and/or even different GPU architectures.
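Concretely, I'm picturing something like this as the core computation (assuming each test dict has a 'status' field with values like 'Passed' / 'Failed' / 'Not Run'; that field name is my assumption about the CDash data):

```python
# Minimal sketch: compute summary statistics from a tests list of dicts.
# The 'status' field and its values are assumptions about the CDash data.
def summarize(tests_lod):
    total = len(tests_lod)
    passed = sum(1 for t in tests_lod if t.get("status") == "Passed")
    failed = sum(1 for t in tests_lod if t.get("status") == "Failed")
    missing = total - passed - failed  # e.g. "Not Run" or absent
    pass_rate = 100.0 * passed / total if total else 0.0
    return {"passed": passed, "failed": failed,
            "missing": missing, "pass_rate_pct": round(pass_rate, 1)}
```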
Specifically, my use case tracks tickets on a number of internal and customer-facing JIRA instances, and I would like to be able to share build quality information at a glance with folks who, I strongly suspect, will never actually click through to the dashboard. Ideally, I'd like to have both the information for the current build and some sort of time average (or plot, but I noted you said "standard Python 3.x" in #577, @bartlettroscoe, which probably precludes matplotlib) of the build quality results over, say, the last 30 days.
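A 30-day average is doable with just the stdlib, e.g. (input shape is whatever we end up caching per day, so the (date, rate) pairs here are an assumption):

```python
# Minimal stdlib-only sketch: average the daily pass rates over the last
# 30 days. Input shape (list of (date, pass_rate) pairs) is an assumption.
import datetime
from statistics import mean

def rolling_pass_rate(daily_rates, days=30, today=None):
    today = today or datetime.date.today()
    cutoff = today - datetime.timedelta(days=days)
    recent = [rate for (day, rate) in daily_rates if day >= cutoff]
    return mean(recent) if recent else None
```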
In addition, I have tests in our test suite broken down via subproject / ctest label into things like "runtime" / "compiler" / "hip_and_omp_interop", etc., which form natural groupings for us internally to see how the various components we ship are doing. I would probably want a similar type of reporting to what I described for the JIRA instances above (build quality, time history).
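Combining a grouping helper with the summary above would give the per-component view, e.g. (this reuses the hypothetical `group_tests_by_prefix` and `summarize` sketches from earlier in the thread):

```python
# Sketch: per-group plain-text summary, stdlib only. Reuses the hypothetical
# group_tests_by_prefix() and summarize() helpers sketched earlier; grouping
# by ctest label instead would need the REST API change discussed above.
def report_by_group(tests_lod):
    for group, tests in sorted(group_tests_by_prefix(tests_lod).items()):
        stats = summarize(tests)
        print(f"{group:30s} pass={stats['passed']:5d} "
              f"fail={stats['failed']:5d} rate={stats['pass_rate_pct']:5.1f}%")
```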
A future goal would be JIRA integration (assuming such a thing can be done programmatically using minimal dependencies) to automatically pull in ticket status / days open / priorities / etc., to combine with the summary stats table.
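For the JIRA side, the standard REST API can be hit with just urllib; a sketch (the /rest/api/2/issue endpoint is standard JIRA, but auth handling and error checking are omitted here):

```python
# Minimal stdlib-only sketch: pull ticket status / priority / created date
# from a JIRA instance via its REST API. The /rest/api/2/issue endpoint is
# standard JIRA; auth handling and error checking are omitted.
import json
import urllib.request

def fetch_issue_summary(jira_url, issue_key, auth_header):
    url = f"{jira_url}/rest/api/2/issue/{issue_key}?fields=status,priority,created"
    req = urllib.request.Request(url, headers={"Authorization": auth_header})
    with urllib.request.urlopen(req) as resp:
        fields = json.loads(resp.read().decode("utf-8"))["fields"]
    return {"status": fields["status"]["name"],
            "priority": fields["priority"]["name"],
            "created": fields["created"]}
```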
I am happy to do the legwork on this, but I'll need some guidance on the design front I suspect.