QA effort for Cabal #8865
That's a great idea. Another example is tests that are hard to automate, or whose failures are hard to triage when automated. E.g., the stuff I mentioned here: #8788 (comment). I bet there are many other kinds of checks that don't fit in CI, including those you mention, where problems are trivial for a human to spot in normal usage.
Well, I would say that we lack the very basics of unit- and integration-testing discipline.
It would be very useful, for all the reasons stated above. I wonder what the best way to onboard QAers is (e.g., providing binaries, or having them compile cabal?). If we had a set of users running HEAD, even for their personal projects only, that would pick up a lot of things testing does not pick up now.
Yes, I'm thinking of having them use HEAD and self-compile.
This sounds like you want nightlies for cabal?
@hasufell No, not because I disagree with the concept of nightly bindists but because I try very hard not to take suggestions like these as entries in my Christmas wish list. So, you say "nightlies for cabal", I hear "another CI sub-system to maintain upon which testing will depend". Maybe I'm wrong but I think we should start with self-compilation of HEAD.
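For concreteness, self-compiling HEAD could look something like the following. This is a hedged sketch, not an official QA workflow: it assumes a recent GHC and cabal-install are already installed (e.g. via GHCup), and that building `cabal-install` from the repository root works with a plain `cabal build`.

```shell
# Sketch: building cabal-install from HEAD for manual QA.
# Assumes git, a recent GHC, and a released cabal-install are on PATH.
git clone https://github.com/haskell/cabal.git
cd cabal

# Build the cabal-install executable from HEAD.
cabal build cabal-install

# Locate the freshly built binary so it can be put on PATH
# (or aliased) for day-to-day use on personal projects.
cabal list-bin cabal-install
```

QAers could then alias the printed binary path as `cabal-head` and use it for their personal projects, reporting anything that feels off.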
Alright, the effort has started, thank you to everyone who sent feedback, here and elsewhere. We had great input from @tamara-mandziuk about how it could be improved, and this has certainly given food for thought.
From what I could observe during the past couple of releases, it seems we would get a good amount of solid feedback if we systematised the use of manual quality assurance (QA) processes to supplement the test suite for patches that have an impact on `cabal-install`.

Example: #8843
This is pretty trivial and certainly could have been caught much earlier.
Moreover, tests written by the developers of a feature tend to be more vulnerable to the author's cognitive biases. However, if we ask people to describe the higher-level behaviour of a feature so that it can be checked later by another human, we can reduce the impact of those biases.
Proposal
What I want to propose is twofold, both parts focused on `cabal-install`.
This will also allow us to catch usability regressions earlier, and it is a very good opportunity to onboard more people, since it makes them feel involved in the development process.
There are real benefits to gain from this that go beyond strictly "testing" things.
I'd like to hear comments from the community of Cabal contributors on this.