
Improve message when resolvers don't resolve the reference #5644

Closed
Tracked by #5656
dgibson opened this issue Apr 12, 2023 · 9 comments · Fixed by #5696
Labels: customer:Passt (Requirements/issues raised by the Passt project) · enhancement


dgibson commented Apr 12, 2023

Describe the bug

If attempting to run a Python instrumented test which has badly named test methods, the resulting errors are quite cryptic.

Steps to reproduce

Create test.py

#! /usr/bin/python3

from avocado import Test

class MyTest(Test):
    def firsttest(self):
        pass

    def secondtest(self):
        assert False

Note that the methods are called firsttest and secondtest rather than the correct test or test_foo.
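For reference, a variant that the avocado-instrumented resolver would pick up simply renames the methods with a test prefix (the names test_first and test_second here are illustrative):

#! /usr/bin/python3

from avocado import Test

class MyTest(Test):
    def test_first(self):    # name starts with "test", so it is discovered
        pass

    def test_second(self):   # discovered, and expected to fail
        assert False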

Expected behavior

An error message suggesting that there need to be methods named test_*.

Current behavior

If the file is not executable, you get the rather inscrutable:

$ avocado run test.py
Could not resolve references: test.py

If the file is executable, then it is misleadingly executed as though it were a simple test:

$ avocado run ./test.py
JOB ID     : c520a784f520c52870669e29d5b8fc88fb6cf569
JOB LOG    : /home/dwg/avocado/job-results/job-2023-04-12T14.23-c520a78/job.log
 (1/1) ./test.py: STARTED
 (1/1) ./test.py: PASS (0.23 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 3.91 s

We would expect this to run two tests, one of them failing.

System information (please complete the following information):

  • OS: Fedora 37
  • Avocado version: Avocado 92.0
  • Avocado installation method: rpm (python3-avocado-92.0-1.fc37.noarch)
@dgibson dgibson added the bug label Apr 12, 2023

dgibson commented Apr 12, 2023

Seems like there's a similarly cryptic error if an instrumented test fails to compile.
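For illustration, a file that doesn't even parse fails in the same quiet way (hypothetical file name; the message shown is assumed to follow the same pattern as above):

$ echo 'def broken(' > broken.py
$ avocado run broken.py
Could not resolve references: broken.py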

@richtja richtja self-assigned this Apr 12, 2023

richtja commented Apr 14, 2023

Hi @dgibson, thanks for reporting this. You are right that we should find a way to provide more information when a test isn't resolved, beyond Could not resolve references: test.py. Maybe when a reference is not resolved, we can report the reason from each resolver why it wasn't resolved. This might help users debug possible problems.

For the second part of your issue, when the file is executable, it is a little bit tricky and I understand your confusion. But IMO it is correct behavior: because of the badly named test methods, the file can't be resolved as avocado-instrumented. Therefore, avocado tries other resolvers, and since the file is executable it is resolved as exec-test and run as an executable. In this case, unfortunately, I can't see how we could detect that this was a mistake and that the user doesn't want to run it as an exec-test.

@richtja richtja added enhancement and removed bug labels Apr 19, 2023
@richtja richtja changed the title Cryptic message attempting to run Python tests with badly named methods Improve message when resolers don't resolve the reference Apr 19, 2023
@richtja richtja moved this from Triage to Long Term (Next Q) Backlog in Default project Apr 19, 2023

richtja commented Apr 19, 2023

We can change the error message from:

$ avocado run test.py
Could not resolve references: test.py

to

$ avocado run test.py
Could not resolve references: test.py
For more information please use avocado -V list test.py

This way, we will point users to a list option which can provide more information about resolving references.

$ avocado -V list /tmp/wrong_test.py
Type Test Tag(s)

Resolver             Reference          Info
avocado-instrumented /tmp/wrong_test.py
golang               /tmp/wrong_test.py go binary not found
python-unittest      /tmp/wrong_test.py
robot                /tmp/wrong_test.py File "/tmp/wrong_test.py" does not end with ".robot"
exec-test            /tmp/wrong_test.py File "/tmp/wrong_test.py" does not exist or is not executable
tap                  /tmp/wrong_test.py File "/tmp/wrong_test.py" does not exist or is not executable

TEST TYPES SUMMARY
==================


dgibson commented May 14, 2023

$ avocado run test.py
Could not resolve references: test.py
For more information please use avocado -V list test.py

That would be an improvement. I think it's still worth considering a stronger warning if avocado scans a Python file which has classes derived from unittest.TestCase or avocado.Test but still contains no actual testcases.

I realized that the case where you explicitly point to a file which doesn't resolve is a bit confusing, but a much more dangerous version of this problem occurs when (for example) pointing at a whole directory of (potential) tests:

A syntax error, or certain top-level content errors in the Python, can prevent anything in the file from resolving. If running a whole directory of tests, it may not be obvious to the user that a batch of tests is now being omitted entirely, because the file containing them isn't resolving any more. I've been caught by this several times in practice while actively developing tests.


richtja commented May 15, 2023

I realized that the case where you explicitly point to a file which doesn't resolve is a bit confusing, but a much more dangerous version of this problem occurs when (for example) pointing at a whole directory of (potential) tests:

A syntax error, or certain top-level content errors in the Python, can prevent anything in the file from resolving. If running a whole directory of tests, it may not be obvious to the user that a batch of tests is now being omitted entirely, because the file containing them isn't resolving any more. I've been caught by this several times in practice while actively developing tests.

For such cases, you should use the avocado list command to see how avocado resolves the files in a directory. I am not sure if we can provide detailed information about test resolution during the test run, because avocado tries to be as generic as possible about test resolution. Basically, anything can be a test if you have a resolver which understands it and can resolve it. Therefore, avocado can't decide whether the test resolution was correct or not.


dgibson commented May 15, 2023

I realized that the case where you explicitly point to a file which doesn't resolve is a bit confusing, but a much more dangerous version of this problem occurs when (for example) pointing at a whole directory of (potential) tests:
A syntax error, or certain top-level content errors in the Python, can prevent anything in the file from resolving. If running a whole directory of tests, it may not be obvious to the user that a batch of tests is now being omitted entirely, because the file containing them isn't resolving any more. I've been caught by this several times in practice while actively developing tests.

For such cases, you should use the avocado list command to see how avocado resolves the files in a directory.

That's missing the point. The issue here is that it's very easy not to even realize that some of the tests are broken. Once you realize that, it's straightforward enough to figure out why using avocado list and other tools, but by then it's too late. What I'd be after is some kind of warning if avocado encounters a file that looks loosely like it should contain instrumented tests, but in which no tests can be resolved (e.g. because of a syntax error).
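To make the idea concrete, here is a rough sketch of such a heuristic (illustrative only, not Avocado code; the check_file helper and the "imports a test framework" rule are assumptions):

#!/usr/bin/python3
# Hypothetical sketch, not part of Avocado: warn about Python files that
# import a test framework but define no method whose name starts with "test".

import ast
import sys

def check_file(path):
    """Return a warning string, or None if nothing suspicious was found."""
    with open(path, encoding="utf-8") as handle:
        source = handle.read()
    try:
        tree = ast.parse(source, filename=path)
    except SyntaxError as err:
        return f"{path}:{err.lineno}: syntax error, no tests will resolve"

    # Collect top-level module names from "import x" and "from x import y".
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(name.name.split(".")[0] for name in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])

    has_tests = any(
        isinstance(node, ast.FunctionDef) and node.name.startswith("test")
        for node in ast.walk(tree)
    )
    if {"avocado", "unittest"} & modules and not has_tests:
        return f"{path}: imports a test framework but defines no test* method"
    return None

if __name__ == "__main__":
    for name in sys.argv[1:]:
        warning = check_file(name)
        if warning:
            print(warning)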

Having encountered this, I've realized I don't actually much like the approach used by Avocado, amongst others, of auto-detecting tests rather than having an explicit test manifest of some sort.

I am not sure if we can provide detailed information about test resolution during the test run, because avocado tries to be as generic as possible about test resolution. Basically, anything can be a test if you have a resolver which understands it and can resolve it. Therefore, avocado can't decide whether the test resolution was correct or not.


richtja commented May 16, 2023

I realized that the case where you explicitly point to a file which doesn't resolve is a bit confusing, but a much more dangerous version of this problem occurs when (for example) pointing at a whole directory of (potential) tests:
A syntax error, or certain top-level content errors in the Python, can prevent anything in the file from resolving. If running a whole directory of tests, it may not be obvious to the user that a batch of tests is now being omitted entirely, because the file containing them isn't resolving any more. I've been caught by this several times in practice while actively developing tests.

For such cases, you should use the avocado list command to see how avocado resolves the files in a directory.

That's missing the point. The issue here is that it's very easy not to even realize that some of the tests are broken. Once you realize that, it's straightforward enough to figure out why using avocado list and other tools, but by then it's too late.

IMO users should use avocado list while developing tests, to check that the tests are not broken before even trying to run them.
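For example, the whole test directory can be checked in one go while developing (the directory name is illustrative):

$ avocado -V list tests/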

What I'd be after is some kind of warning if avocado encounters a file that looks loosely like it should contain instrumented tests, but in which no tests can be resolved (e.g. because of a syntax error).

I understand your problem here, but I am not sure how to solve it: how should we define that a file looks loosely like it should contain instrumented tests in order to create such a warning? Maybe such a file wasn't meant as an instrumented test but only as a python test, or even something else, and it was resolved as that. IMO, such warnings would create a lot of false positive messages and make avocado's output less readable.

I am also not sure whether we should solve such problems in avocado run when we have avocado list for dealing with them.

Having encountered this, I've realized I don't actually much like the approach used by Avocado, amongst others, of auto-detecting tests rather than having an explicit test manifest of some sort.

Actually, avocado is generic enough that it can support such a manifest. It is only a matter of creating a resolver which would understand it and resolve it into test references. Right now we don't have such a resolver, but it shouldn't be a problem to create something like this. The main issue here is deciding which format such a manifest should use.
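As a rough sketch of the idea (the manifest format and test names below are hypothetical, not an existing Avocado feature), even today a plain-text manifest of strict test references could be expanded onto the avocado run command line:

# tests.manifest -- one strict test reference per line
./test.py:MyTest.test_first
./test.py:MyTest.test_second

$ avocado run $(grep -v '^#' tests.manifest)

A dedicated resolver would do the same expansion inside avocado itself, and could then report manifest entries that fail to resolve.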

clebergnu commented

Hi @dgibson, I'd like to get back to your reproducer and make some observations. Your reproducer looks like:

#!/usr/bin/python3

from avocado import Test

class MyTest(Test):
    def firsttest(self):
        pass

    def secondtest(self):
        assert False

What it doesn't make clear is that it's also an executable file. So, strictly speaking, when you run avocado run ./test.py, it's resolving a reference correctly, but resolving it as an exec-test. Had it not been an executable file, the following would happen:

$ avocado run ./test.py
Could not resolve references: ./test.py

I think the current behavior is correct because Avocado should not assume that the avocado module is indeed Avocado's, that the Test class is indeed Avocado's, and that the user somehow failed to name the test methods correctly.

This brings us to the topic of manifests. Avocado actually supports both modes:

  1. resolving tests with more general references -- such as a path to a file
  2. resolving tests with strict references that are also test names -- such as the full test reference including path to a file, class and method name

If you use the strict references (test names), Avocado will behave according to your expectations (IIUC). Running with the test reference that is an executable file gives you:

$ avocado run ./test.py 
JOB ID     : 1816e3d0a433cf0f19616c8c320d6af035a0ca02
JOB LOG    : /home/cleber/avocado/job-results/job-2023-05-23T14.08-1816e3d/job.log
 (1/1) ./test.py: STARTED
 (1/1) ./test.py:  PASS (0.42 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML   : /home/cleber/avocado/job-results/job-2023-05-23T14.08-1816e3d/results.html
JOB TIME   : 3.09 s

But running it with the (invalid) "test names" will give you:

$ avocado run ./test.py:MyTest.firsttest ./test.py:MyTest.secondtest 
Could not resolve references: ./test.py:MyTest.firsttest,./test.py:MyTest.secondtest

And if you combine that with valid references, Avocado will still "error out" on the side of safety:

$ avocado run ./test.py:MyTest.firsttest ./test.py:MyTest.secondtest /bin/true 
Could not resolve references: ./test.py:MyTest.firsttest,./test.py:MyTest.secondtest

If you are OK with not resolving all given references, you could provide the --ignore-missing-references command line option:

$ avocado run --ignore-missing-references ./test.py:MyTest.firsttest ./test.py:MyTest.secondtest /bin/true 
JOB ID     : d8ed541a4910de2ef0a8f3cdbcb9909d986e84b6
JOB LOG    : /home/cleber/avocado/job-results/job-2023-05-23T14.12-d8ed541/job.log
 (1/1) /bin/true: STARTED
 (1/1) /bin/true:  PASS (0.01 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML   : /home/cleber/avocado/job-results/job-2023-05-23T14.12-d8ed541/results.html
JOB TIME   : 2.72 s

Other possibilities to tweak how the resolver works include:

  1. creating custom resolvers or discoverer plugins
  2. disabling some resolver/discoverer plugins (see the sketch after this list)
  3. changing the priority order of resolver/discoverer plugins
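As an example of the second option, disabling a resolver is a configuration change; a sketch assuming the plugins.disable setting in avocado.conf (the exact plugin name here is an assumption -- avocado plugins lists the real names):

[plugins]
disable = ['resolver.exec-test']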

Finally, I believe the suggestion to point users to avocado list is indeed a very good one.

@dgibson dgibson changed the title Improve message when resolers don't resolve the reference Improve message when resolvers don't resolve the reference May 24, 2023

dgibson commented May 25, 2023

Hi @dgibson, I'd like to get back to your reproducer and make some observations. Your reproducer looks like:

#!/usr/bin/python3

from avocado import Test

class MyTest(Test):
    def firsttest(self):
        pass

    def secondtest(self):
        assert False

What it doesn't make clear is that it's also an executable file. So, strictly speaking, when you run avocado run ./test.py, it's resolving a reference correctly, but resolving it as an exec-test. Had it not been an executable file, the following would happen:

Actually, I describe both the executable and non-executable cases in the initial comment. The non-executable case is certainly better, though it's not great.

$ avocado run ./test.py
Could not resolve references: ./test.py

I think the current behavior is correct because Avocado should not assume that the avocado module is indeed Avocado's, that the Test class is indeed Avocado's, and that the user somehow failed to name the test methods correctly.

So... I can see that it's correct in the sense that it's consistent, fits with the established model, and is difficult to improve upon without causing worse problems. I still think it's a real gotcha behaviour. The problem as originally described here isn't so bad, but as I note in my previous comment, the real gotcha case is breaking some tests and not noticing, because entire files are silently ignored. This is pretty easy to do if there are common test helpers and you break something in there which in turn breaks some of the tests.

This brings us to the topic of manifests. Avocado actually supports both modes:

1. resolving tests with more [general references](https://avocado-framework.readthedocs.io/en/101.0/guides/user/chapters/concepts.html#test-references) -- such as a path to a file

2. resolving test with strict references that are also [test names](https://avocado-framework.readthedocs.io/en/101.0/guides/user/chapters/concepts.html#test-name) -- such as the full test reference including path to a file, class and method name

If you use the strict references (test names), Avocado will behave according to your expectations (IIUC). Running with the test reference that is an executable file gives you:

It's certainly closer. Is there a way to use the strict model for an entire testsuite, though, without listing every single test on the command line?

If you are OK with not resolving all given references, you could provide the --ignore-missing-references command line option:

That's pretty much exactly the case I'm least OK with.

Other possibilities to tweak how the resolver works include:

1. creating custom resolvers or discoverer plugins

2. disabling some resolver/discoverer plugins

3. changing the priority order of resolver/discoverer plugins

So, I have some ideas on this, more details later.

Finally, I believe the suggestion to point users to avocado list is indeed a very good one.

Yes, it's an improvement. Do you want to keep this issue for tracking that fix? Or should we close it, since I don't think the underlying gotcha is really fixable?

richtja added a commit to richtja/avocado that referenced this issue May 30, 2023
This improves avocado's error message when a reference isn't resolved. It adds a pointer to `avocado -V list`, which provides more information about references.

Reference: avocado-framework#5644
Signed-off-by: Jan Richter <[email protected]>
@richtja richtja linked a pull request May 30, 2023 that will close this issue
@richtja richtja added the customer:Passt Requirements/issues raised by the Passt project label May 30, 2023