# Improve message when resolvers don't resolve the reference #5644
Seems like there's a similarly cryptic error if an instrumented test fails to compile.
Hi @dgibson, thanks for reporting this. You are right that we should find a way to add more information when the test wasn't resolved.

For the second part of your issue, when the file is executable, it is a little bit tricky and I understand your confusion. But IMO it is correct behavior: because of the badly named test methods, the file can't be resolved as avocado-instrumented. Therefore, avocado will try other resolvers, and since the file is an executable, it will be resolved as exec-test and run as an executable. In this case, unfortunately, I can't see how we can find out that this was a mistake and that the user doesn't want to run it as exec-test.
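For contrast, a sketch of the reproducer with conventionally named methods (assuming the standard `test*` naming described in the issue below), which the avocado-instrumented resolver would pick up instead of falling through to exec-test:

```python
from avocado import Test


class MyTest(Test):
    # Methods whose names start with "test" are what the
    # avocado-instrumented resolver looks for; with these names the
    # file no longer falls through to the exec-test resolver.
    def test_first(self):
        pass

    def test_second(self):
        assert False
```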
We can change the error message from:

to:

This way, we will point users to a list option which can provide more information about resolving references.
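That list option is exercised elsewhere in this thread as `avocado -V list` (see the linked patch below); a usage sketch, with `test.py` standing in for any reference:

```console
# Verbose listing gives more detail about how the reference was
# resolved, instead of silently skipping files that fail to resolve.
$ avocado -V list test.py
```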
That would be an improvement. I think it's still worth considering a stronger warning if avocado scans a Python file which has classes derived from `Test` but resolves no tests from it.

I realize that the case where you explicitly point to a file which doesn't resolve is a bit confusing, but a much more dangerous version of this problem occurs when (for example) pointing at a whole directory of (potential) tests: a syntax error, or certain errors in top-level Python code, can prevent anything from resolving in the file. If running a whole directory of tests, it may not be obvious to the user that a batch of tests is now being omitted entirely, because the file containing them isn't resolving any more. I've been caught by this several times in practice while actively developing tests.
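A sketch of that failure mode (hypothetical file; it deliberately does not parse, which is the point):

```python
from avocado import Test


class MyTest(Test):
    # Correctly named tests that used to resolve and run...
    def test_first(self):
        pass

    def test_second(self):
        assert False


# ...until a half-finished edit left a syntax error at the top level.
# The whole file now fails to parse, neither test resolves, and a run
# over the containing directory silently drops both tests.
def helper(
```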
For such cases, you should use
That's missing the point. The issue here is that it's very easy to not even realize that some of the tests are broken. Once you realize that, it's straightforward enough to figure out why using

Having encountered this, I've realized I don't actually much like the approach used by Avocado, amongst others, of auto-detecting tests, rather than having an explicit test manifest of some sort.
IMO users should use
I understand your problem here, but I am not sure how to solve this, or how we should define that

I am also not sure if we should solve such problems in
Actually, Avocado is generic enough that it can support such a manifest. It is only a matter of creating a resolver which would understand it and resolve it into test references. Right now we don't have such a resolver, but it shouldn't be a problem to create something like this. The main issue here is to decide which format such a manifest should use.
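As a rough illustration of what a manifest-driven flow could look like even without a dedicated resolver plugin (everything here — the file name, format, and helper — is hypothetical, not an existing Avocado feature):

```python
#!/usr/bin/env python3
"""Hypothetical manifest runner: nothing like this ships with Avocado;
it only sketches what an explicit test manifest could look like."""
import subprocess
import sys


def load_manifest(path):
    """Read one test reference per line; '#' starts a comment."""
    refs = []
    with open(path) as manifest:
        for line in manifest:
            line = line.strip()
            if line and not line.startswith("#"):
                refs.append(line)
    return refs


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "tests.manifest"
    references = load_manifest(path)
    # Passing explicit references means a file that stops resolving
    # shows up as an unresolved-reference error instead of vanishing.
    sys.exit(subprocess.call(["avocado", "run", *references]))
```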
Hi @dgibson, I'd like to get back to your reproducer and make some observations. Your reproducer looks like:

```python
#!/usr/bin/python3

from avocado import Test


class MyTest(Test):
    def firsttest(self):
        pass

    def secondtest(self):
        assert False
```

What it doesn't make clear is that it's also an executable file. So, strictly speaking, when you run
I think the current behavior is correct, because Avocado should not assume that the

This brings us to the topic of manifests. Avocado actually supports both modes:
If you use strict references (test names), Avocado will behave according to your expectations (IIUC). Running with a test reference that is an executable file gives you:
But running it with the (invalid) "test names" will give you:
And if you combine that with valid references, Avocado will still "error out" on the side of safety:
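The outputs were not preserved in this copy of the thread, but the invocations being contrasted above look roughly like this (the test-name references are assumptions based on the reproducer; treat the exact syntax as a sketch):

```console
# File reference to an executable file: resolves via exec-test.
$ avocado run ./test.py

# Strict test-name references: these fail to resolve, because the
# methods are not named test*.
$ avocado run test.py:MyTest.firsttest test.py:MyTest.secondtest
```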
If you are OK with not resolving all given references, you could provide the
Other possibilities to tweak how the resolver works include:
Finally, I believe the suggestion to point users to
Actually, I describe both the executable and non-executable cases in the initial comment. The non-executable case is certainly better, though it's not great.
So... I can see that it's correct in the sense that it's consistent, fits with the established model, and is difficult to improve upon without causing worse problems. I still think it's a real gotcha behaviour. The problem as originally described here isn't so bad, but as I noted in my previous comment, the real gotcha case is breaking some tests and not noticing, because entire files are silently ignored. This is pretty easy to do if there are common test helpers and you break something in there which in turn breaks some of the tests.
It's certainly closer. Is there a way to use the strict model for an entire testsuite, though, without listing every single test on the command line?
That's pretty much exactly the case I'm least OK with.
So, I have some ideas on this, more details later.
Yes, it's an improvement. Do you want to keep this issue for tracking that fix? Or should we close it, since I don't think the underlying gotcha is really fixable?
This improves the avocado error message when some reference wasn't resolved. It adds a pointer to `avocado -V list`, which provides more information about references.

Reference: avocado-framework#5644

Signed-off-by: Jan Richter <[email protected]>
**Describe the bug**
If attempting to run a Python instrumented test which has badly named test methods, the resulting errors are quite cryptic.
**Steps to reproduce**
Create `test.py`:
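The file content, as quoted in the discussion above:

```python
#!/usr/bin/python3

from avocado import Test


class MyTest(Test):
    def firsttest(self):
        pass

    def secondtest(self):
        assert False
```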
Note that the methods are called `firsttest` and `secondtest` rather than the correct `test` or `test_foo`.

**Expected behavior**
An error message suggesting that there need to be methods named `test_*`.

**Current behavior**
If the file is not executable, the rather inscrutable:

If the file is executable, then it is misleadingly executed as though it were a simple test:

We expect this to have two tests, one of them failing.
**System information (please complete the following information):**
python3-avocado-92.0-1.fc37.noarch