[Feature] Not exception but warning if some (not all) of the given test names are not found in suite files. #2897
@wenijinew What should the execution result be in the case where some tests were missing? Should TestNG say "passed" and then leave it to the user to look at the logs to find out that some tests were not run because they were missing? What if the user doesn't look at the logs, or the log levels were not set properly and so the warnings got hidden away? Also, when running in CI mode, how would a user know whether everything ran or only a few tests ran? Please share your thoughts around these.
Maybe the execution result will not display anything about those missed tests. Having a warning in the console is enough as a notification to the user. When the user finds that the execution result doesn't include those missed tests, they need to find the reason themselves. I personally never use -testnames in non-CLI/CI mode, and in production environments (mostly continuous integration/CI tests), only the command line is used. It's rare to use -testnames in any GUI environment, in my eyes. So, this feature is a best-effort attempt to run as much of the test suite as possible. It already helps the user to some degree; it doesn't need to be more perfect than that. By the way, it could be an opt-in feature that the user enables via a JVM argument or a command-line option.
Is it possible to set a test result for non-existing tests?
How would a user know that some tests were not executed because they were missing? Logs as a mechanism are fine, but TestNG uses the slf4j API to log messages, which means that we rely on the user to provide an implementation for the binding and also configure the logging such that TestNG-originated messages are visible. Otherwise the messages are not going to be visible and the user wouldn't know.

This kind of behaviour can add to user support/queries, because TestNG would be doing something that is NOT intuitive here by trying to accommodate the mistakes/misses of the user. Personally speaking, I don't see value in having this, because it's better to fail fast and tell the user that they are missing test names than to run things partially and then let the user sift through logs to find out what got executed and what didn't.

Why do you feel that a user fixing their mistake by fixing their test names is a not-so-good user experience? If there's something wrong, wouldn't you want to know about it up front? Just trying to get your perspective.
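To make the visibility concern above concrete, here is a minimal sketch, assuming a plain slf4j logger (the class name and message are illustrative, not TestNG's actual internals):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WarningVisibility {
  private static final Logger LOG = LoggerFactory.getLogger(WarningVisibility.class);

  public static void main(String[] args) {
    // With no slf4j binding on the classpath, this call is routed to a
    // no-op logger and the user never sees it. With a binding present
    // (e.g. slf4j-simple), it is shown only if the configured level for
    // this logger is WARN or lower.
    LOG.warn("The test(s) [TestB, TestD] were not found in the suite and were skipped.");
  }
}
```

This is exactly the failure mode described above: the warning is emitted, but whether anyone sees it depends entirely on the user's logging setup.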
The intent behind the questions is NOT about something being perfect or not. The intent here is to ensure that we know all the questions and their answers before we introduce something into TestNG so that it does not become a user support nightmare at a later point in time.
Sure. That goes without saying, because we do not want to alter the default behaviour of TestNG that our users are used to.
No, you cannot. At least, to the best of my knowledge.
OK, I didn't provide enough background when raising this feature request. We are working on a machine learning project which gives recommendations on which test cases to run based on product code changes. However, the ML is based on historical data, and the test cases might change over time. If some of the ML's recommended test cases were changed or removed, then a run based on the recommended test cases would fail directly without running any of the existing test cases, which is not a good user experience. Those missed test cases are not a real problem in this scenario, because they are expected to be missing due to changes over time; the existing test cases, however, are still expected to run. There will be no fix for the missed test cases. It just needs a warning message telling which test cases are no longer in the suite files. Over time, the machine learning can gradually throw away those outdated test cases and use the changed test cases in the recommended test case list.
I totally agree with and understand your concerns. That's why we can make it a configurable feature, to allow users to decide how to handle this situation.
As an automation test framework, robustness is one of the most important qualities. If it doesn't exit or crash as long as the input can be run, whether fully or partially, then it is more user-friendly. And in my eyes, that's a good user experience.
I partially agree. If it's the user's intention to hide logs, then even errors or exceptions could be hidden. However, I don't think anybody would do that. In a production environment, people want to expose unexpected behaviour as much as possible. For our case as an example, we normally set the logging level to DEBUG. Having enough logging information to troubleshoot saves a lot of time when reproducing failures after a test has failed.
@wenijinew - Just explicitly confirming: are you talking about …? So in your case, you are saying that there would be a suite file that would contain a growing list of …?
There's a difference between a user explicitly doing it and it happening without their knowledge. Not all folks are well versed with logs and logging and so I would not vote for an implementation that relies on a user's knowledge about logging to be able to decipher the failure.
There are multiple ways of looking at what constitutes a good user experience. Sweeping issues under the carpet and trying to do whatever is possible to run tests, even partially, is not, to me, something a test runner should do. A framework that consumes TestNG as a test runner can do that.
This is about the logging part. So, how does TestNG surface warnings of any kind to users?
If this is a configurable feature, then it's not "trying to do whatever is possible to run tests" but acting on the user's intention. That's totally different from doing whatever is possible without the user's knowledge.
All said and done, since you are already working on something that is a possible use case for this request, would you also be willing to raise a PR for this? We can help with getting it vetted.
For now, it's "foo" in …
Maybe "growing" or "changing". Yes, the ML model detects the correlation between product code changes and test cases that failed in the past, and then recommends test cases to run for newly changed product code.
Is there any guide to raising a PR? Thanks.
Please check if this helps: https://github.com/cbeust/testng/blob/master/.github/CONTRIBUTING.md
If "PR" means "Pull Request", then I know how.
Closes #2897

1. Add a new boolean option '-ignoreMissedTestNames' that works together with the option '-testnames'.
2. When '-testnames' is given and '-ignoreMissedTestNames true' is also given, then for any given test names not found in the suite, only a warning message is printed, and TestNG continues to run the test names that do exist in the suite.
3. Users of the new option '-ignoreMissedTestNames' should be aware that the logging level must be properly configured so that the warning message is visible in the output or console; otherwise the notification about the missed test names, if any, may itself be missed.
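For reference, a rough sketch of how this might look through TestNG's programmatic API; the `setIgnoreMissedTestNames` setter name is an assumption mirroring the CLI flag above, which is the only name this PR states:

```java
import java.util.Arrays;
import org.testng.TestNG;

public class RunRecommendedTests {
  public static void main(String[] args) {
    TestNG tng = new TestNG();
    tng.setTestSuites(Arrays.asList("testng.xml"));
    // Some of these names may no longer exist in testng.xml.
    tng.setTestNames(Arrays.asList("TestA", "TestB", "TestC"));
    // Assumed setter mirroring '-ignoreMissedTestNames': warn about
    // missing test names instead of throwing an exception.
    tng.setIgnoreMissedTestNames(true);
    tng.run();
  }
}
```

The command-line equivalent would pass `-testnames "TestA,TestB,TestC" -ignoreMissedTestNames true` alongside the suite file.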
Currently, when any of the given test names is not found in the suite files, TestNG throws an exception:
The test(s) <[TestA, TestB, TestC]> cannot be found in suite.
This is reasonable, but still not so friendly. For example, if the user provides 5 test names and 2 of them cannot be found in the suite files, the other 3 tests could still be run.
In this case, it would give the user a better experience if TestNG printed a warning message listing the tests that cannot be found and an info message listing the tests that were found and will be executed, roughly as in the sketch below.
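A minimal sketch of the behaviour proposed here (names and messages are illustrative, not actual TestNG internals):

```java
import java.util.ArrayList;
import java.util.List;

public class MissedTestNameFilter {
  // Split the requested names into those present in the suite and those
  // missing; warn about the missing ones and return only the runnable ones.
  public static List<String> keepExisting(List<String> requested, List<String> namesInSuite) {
    List<String> found = new ArrayList<>();
    List<String> missing = new ArrayList<>();
    for (String name : requested) {
      (namesInSuite.contains(name) ? found : missing).add(name);
    }
    if (!missing.isEmpty()) {
      System.err.println("WARNING: the test(s) " + missing + " cannot be found in the suite and will be skipped.");
    }
    if (!found.isEmpty()) {
      System.out.println("INFO: the test(s) " + found + " were found and will be executed.");
    }
    return found;
  }
}
```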
What do you think?