Cypress result shows skipped tests as pending #3092
I can see how this is confusing. It's not necessarily a bug, but a terminology issue. Cypress considers a pending test to be any of the following: …

Pending tests are tests you don't plan to run and explicitly mark not to run. A skipped test is one that you plan to run but that gets skipped because, for example, a `beforeEach` hook throws:

```js
beforeEach(() => {
  throw new Error('whoops')
})

// these will all end up 'skipped': while you plan to run them, the runner
// skips them because the beforeEach threw an error
it('a test', () => {})
it('another test', () => {})
```

I agree this is confusing. It looks like the reporter you're using reports pending tests as "skipped", which seems intuitively correct but is not correct as far as Cypress is concerned, because "skipped" has a different meaning. All that to say: Cypress isn't really reporting the wrong numbers; it's reporting them correctly as it understands them.

That said, it's certainly up for debate whether the current terminology is sufficient and whether it should change to be more intuitive. I imagine it would be a lot of work and perhaps quite difficult to do, however. Thoughts, @brian-mann?
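The pending/skipped split described above can be modeled with a small, self-contained sketch (plain Node, no Cypress; the `runSuite` helper and all names here are illustrative, not Cypress internals):

```javascript
// Minimal model of the distinction described above.
// "pending": explicitly marked not to run.
// "skipped": planned to run, but abandoned after an earlier failure.
function runSuite({ beforeEach, tests }) {
  const results = [];
  let hookFailed = false;
  for (const test of tests) {
    if (test.pending) {
      results.push({ title: test.title, state: 'pending' });
      continue;
    }
    if (hookFailed) {
      results.push({ title: test.title, state: 'skipped' });
      continue;
    }
    try {
      if (beforeEach) beforeEach(); // the hook runs before every test
      test.fn();
      results.push({ title: test.title, state: 'passed' });
    } catch (err) {
      results.push({ title: test.title, state: 'failed' });
      hookFailed = true; // remaining tests in the suite get skipped
    }
  }
  return results;
}

const results = runSuite({
  beforeEach: () => { throw new Error('whoops'); },
  tests: [
    { title: 'a test', fn: () => {} },
    { title: 'another test', fn: () => {} },
    { title: 'explicitly not run', pending: true },
  ],
});

for (const r of results) console.log(`${r.title}: ${r.state}`);
// → a test: failed
// → another test: skipped
// → explicitly not run: pending
```

This matches the behavior described in the thread: the test whose hook threw fails, the rest of the suite is skipped, and explicitly-marked tests stay pending.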
@chrisbreiding Incorrect. I want to skip a test and have all the other tests get a "green light", with information about how many tests from a single
Right, my bad. The first test will fail and the subsequent ones will be skipped. Those are pending tests as far as Cypress is concerned, but your reporter is calling them skipped.
+1 for this being confusing. To the untrained eye, the term "pending" implies that the test is still running or has not been run yet (as in, it is queued to run in the future), while the term "skipped" implies that it was not run and will not be run. This is especially confusing because the mocha syntax is … My initial thoughts would be to: option 1: … option 2: …
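For context on the mocha syntax being discussed, mocha treats a test as "pending" when it is created with `it.skip`, its `xit` alias, or an `it` with no callback. A tiny stand-in (stubbed so the file runs without mocha; the stubs are illustrative, not mocha's real implementation):

```javascript
// Tiny stand-in for mocha's interface, just to illustrate the forms
// that mocha reports as "pending".
const recorded = [];
const it = (title, fn) => recorded.push({ title, pending: fn === undefined });
it.skip = (title, fn) => recorded.push({ title, pending: true });
const xit = it.skip; // mocha alias for it.skip

it.skip('explicitly skipped', () => {});  // pending
xit('also explicitly skipped', () => {}); // pending
it('not yet implemented');                // no callback → pending in mocha
it('a real test', () => {});              // would actually run

console.log(recorded.filter((t) => t.pending).length); // → 3
```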
Any update on this? I really do prefer option 1 provided by @morficus; it makes much more sense. If it's just string-swapping, it's literally a two-line change.
I also vote for option 1. Please, make this happen 🙏
I've written an extensive answer in our internal chat detailing the history of how we arrived at the nomenclature we use. It's long and complex, and we're in an awkward position because of what …

The problem is that we inherit all of the mocha reporters and we use the … Deviating from this nomenclature would create a conflict with the default spec reporter, or with other custom reporters you may use. You'd see the mocha stats summary indicating numbers that don't match up to our own summary at the end. Because we don't control mocha reporters, there's not really much we can do about this. We'd either have to deviate from mocha's own APIs, such as rewriting …

In addition, mocha has no concept of accurately depicting truly "skipped" tests. That is the one thing we do ourselves: when a hook fails, we skip the remaining tests in that suite.

Sure, I agree we could probably rename this state to another word that would perhaps alleviate some confusion. But no matter what, we inherit mocha's …

One idea we've batted around would be to expand the number of test states to help describe and more accurately model the actual runtime behavior of each test. A more comprehensive list of test states could be proposed:
This would enable us to separate out the reason a test didn't run based on the cause. Unfortunately, the upgrade path here would be pretty harsh, as the current definitions look like this:
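To make the idea above concrete: a hypothetical sketch only, since the actual proposed list is not quoted in this thread; the expanded state names here are invented for illustration. The key constraint is that any finer-grained state would still have to collapse to one of today's mocha-compatible states so stock reporters keep working:

```javascript
// Today's coarse states, as inherited from mocha plus Cypress's "skipped".
const CURRENT_STATES = ['passed', 'failed', 'pending', 'skipped'];

// A finer-grained model could record *why* a test did not run, while still
// mapping onto one of today's states. These names are hypothetical.
const EXPANDED_TO_CURRENT = {
  passed: 'passed',
  failed: 'failed',
  'skipped-by-user': 'pending',   // it.skip / xit / missing callback
  'skipped-by-runner': 'skipped', // a hook failed earlier in the suite
};

const allMapped = Object.values(EXPANDED_TO_CURRENT)
  .every((s) => CURRENT_STATES.includes(s));
console.log(allMapped); // → true
```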
I think most users (me included) would have no issue dealing with the bumpiness of an upgrade in that sense, as long as it was well-communicated. Personally, your proposed idea works fine for me. I do see pretty good feedback as far as GitHub thumbs-up counts go 😄
I like the idea of expanding the number of test states available.
This looks weird in the UI as well, as referenced in issue #5366, which I closed in favor of this issue. This is the icon for a
@brian-mann Can you more thoroughly explain what you mean by …? Do you mean internal to Cypress development, or for users (i.e. a possible major-version upgrade)? Would it be possible to use a flag to opt into the new test-state style, so you could ship the update in 3.x and force the new behavior in 4.0? That would eliminate the incompatibility problem from a user perspective. Granted, it would add more work for whoever makes the update.
I actually had a bigger problem because of this issue.
Is this issue related to mochajs/mocha#1815? @Saibamen @brian-mann
Any idea how I can force a test to be marked as skipped in the suite results? Thanks!
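One caveat worth noting here: in mocha (which Cypress wraps), calling `this.skip()` inside a test marks it "pending", not "skipped"; per this thread, "skipped" is reserved for tests abandoned after a hook failure. A minimal stand-in (the `runTest` helper is a stub for illustration, not mocha's real runner):

```javascript
// Simulates how a this.skip()-style escape hatch ends up as "pending".
function runTest(title, fn) {
  const ctx = {
    skip() {
      const err = new Error('test skipped itself');
      err.pending = true; // mocha uses a similar internal "pending" marker
      throw err;
    },
  };
  try {
    fn.call(ctx); // requires function () {}, not an arrow, to bind `this`
    return { title, state: 'passed' };
  } catch (err) {
    return { title, state: err.pending ? 'pending' : 'failed' };
  }
}

const result = runTest('conditionally skipped', function () {
  const unsupported = true; // e.g. some environment check
  if (unsupported) this.skip();
});

console.log(result.state); // → pending
```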
@brian-mann I know this is an old issue, but has there been any change in how much effort this would be? There's been a major version release (actually three, haha) since this was last commented on.
Waiting
@brian-mann Just because you based your product on mocha doesn't mean you should throw your hands up about fixing issues caused by that choice. When you add a library to your project, you need to maintain that library as part of your project. If mocha doesn't do the right thing with respect to accounting for test state, then either work with mocha to fix it and contribute those changes, or, if they don't want them, hard-fork mocha and use the fork. Capturing test state accurately seems like a core responsibility of a test framework. This issue has been around for more than two years with no progress.
Hi, I'm currently running Cypress 7.7.0 and still have this issue. Do you plan to fix it?
The explanation for the current test statuses: https://docs.cypress.io/guides/core-concepts/writing-and-organizing-tests#Test-statuses
@bahmutov Thank you for the link. Could the use case "Cypress - Tagging of test cases (Smoke, Sanity, BVT)" → … be covered as well?
Does cypress-grep work with Cucumber feature files? If yes, could you please share an example? If not, do you know of any alternative modules that work with Cucumber feature files?
Any update on this issue?
Any update on this issue being fixed? If not, could you at least provide more details on when a test is considered skipped versus pending?
Open source contribution… I'm going to drop my unique perspective here.

Problem 1, which everyone has commented on: the mismatch between the code (…) and the reported term.

Problem 2, which nobody has commented on: the source of the decision. IMO all of the following terms are useless because they don't convey any information about whether the action was taken intentionally by a human or algorithmically by a computer: …

Only these two suggested terms imply a decision by the computer: …

And no term above implies a clearly human decision. I don't have a clear solution or suggestion, except that if this project is going to go through the trouble of renaming well-known terms, it should do so with maximum benefit by choosing unambiguous terms.
Cypress results (with the `run` command) show skipped tests as pending.

Current behavior:
Result file: (see `skipped="1"` in `Mocha Tests`)

Desired behavior:
Cypress results should show 0 pending and 1 skipped test.
Steps to reproduce: (app code and test code)

cypress.json: …

test: …

Note: the same issue occurs when running: …
Versions
Cypress 3.1.4, Chrome 71, Windows 10