[Question] How to get the test results like pass/fail and use it in the afterEach hook #27613
Reusing the same page instance after a test has failed is not supported out of the box: when a test fails, the worker restarts, and you get a new browser instance. Playwright is built around test isolation, so each test should be independent; that way you can ensure a following test will pass. In your case, a previous test could leave bad state behind: an unexpected page URL, local storage, session storage, cookies, etc. What is your reason for wanting to reuse this page? You can work around this per-file worker re-creation logic by using "serial" mode, see here.
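The serial mode mentioned above can be combined with a page created in `beforeAll` so that all tests in one file share a single page. A minimal sketch (the URL is a placeholder; note that in serial mode, once a test fails, the remaining tests in the group are skipped rather than run against a restarted browser):

```typescript
import { test, type Page } from '@playwright/test';

// Run all tests in this file in order, in the same worker.
test.describe.configure({ mode: 'serial' });

let page: Page;

test.beforeAll(async ({ browser }) => {
  // One page for the whole file, instead of the per-test `page` fixture.
  page = await browser.newPage();
});

test.afterAll(async () => {
  await page.close();
});

test('first test', async () => {
  await page.goto('https://example.com');
});

test('second test continues on the same page', async () => {
  // Same page instance: same session, same login, no browser restart.
  await page.reload();
});
```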
1 -> Opening and closing a new browser for every test case takes more time (it actually does) and can cause test failures. I am okay with opening a new browser instance if a failure happens. 2 -> How can I use hooks globally for every test/spec file? If I go with your suggestion, how do I avoid writing the hook code in each spec file? I would suggest you provide an option to use the `page` fixture in the `beforeAll` hook, so we can use it based on our requirements. If that is not possible, please give me a solution for requirement #2.
Why does that not work for you? Once actual test execution has started, you only have the page at the worker level (i.e. worker fixtures) or at the file level (i.e. `beforeAll`). This is because you can have multiple workers.
It is perfectly okay if the same browser window is used for an entire spec/test file (via `beforeAll`; I am also okay with the browser closing and reopening when a failure happens, which is how it currently works). But when we use it at the global level, the browser still opens and closes for each test, as usual. Ideally, all cases in a spec file should use the same browser instance, so I do not want to spend time opening and closing the browser; that way I get the test results much faster.
Alternatively, please give a solution for a common `beforeAll` and `afterAll` across all spec files, so I can write my own hook implementation for this use case.
In the default mode, all tests in the same file reuse the same browser instance without restarts, unless there are failures. See this page for information about how `beforeAll`/`afterAll` hooks behave in case of failures. If you are not satisfied with the behavior of the default `browser` fixture, you can write your own with a different browser lifetime. If you want a particular new mode for the hooks, feel free to open a new issue with a detailed description of the expected functionality.
You can define worker-scoped fixtures whose lifetime is bound to the test worker; these allow you to reuse the same browser instance across multiple tests and test files. Note that we are not planning to change the built-in behavior: the worker restarts after a failing test for isolation reasons. In general, you should not optimize for error scenarios, as they should be the exception, and restarting the browser after them should not matter performance-wise.
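A worker-scoped fixture like the one suggested above could be sketched as follows. The fixture name `workerPage` is hypothetical; the page it provides lives as long as the worker, so it is shared across tests and even spec files handled by that worker (but, per the maintainer's note, a failed test still restarts the worker):

```typescript
import { test as base, type Page } from '@playwright/test';

// Extend the base test with a worker-scoped page fixture.
// First type parameter: test-scoped fixtures (none added);
// second: worker-scoped fixtures.
export const test = base.extend<{}, { workerPage: Page }>({
  workerPage: [
    async ({ browser }, use) => {
      // Set up once per worker, before the first test that uses it.
      const page = await browser.newPage();
      await use(page);
      // Tear down once, when the worker shuts down.
      await page.close();
    },
    { scope: 'worker' },
  ],
});

test('uses the shared worker page', async ({ workerPage }) => {
  await workerPage.goto('https://example.com');
});
```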
Right now the requirement is: if a test fails, instead of closing and opening a browser, I just want to use the same browser instance and, before starting the next test, simply refresh the page (so I can keep using the same login; I know people will suggest storage state). My tests should keep using the same browser instance even when a test fails; ideally, the browser should not close.
How do I hook a `beforeAll` into all test/spec files? I want to use the same `beforeAll` hook everywhere (it should not be the global setup/teardown mechanism).
Kindly suggest a solution.
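One way to share hooks across all spec files, and also answer the original question in the issue title, is an automatic fixture in a shared module that every spec imports its `test` from. The module name `fixtures.ts`, the fixture name `forEachTest`, and the URL are assumptions for illustration; the pass/fail check uses `testInfo.status` compared against `testInfo.expectedStatus`, which is how Playwright reports a test's outcome:

```typescript
// fixtures.ts (hypothetical shared module): spec files import
// `test` and `expect` from here instead of from '@playwright/test'.
import { test as base } from '@playwright/test';

export const test = base.extend<{ forEachTest: void }>({
  forEachTest: [
    async ({ page }, use, testInfo) => {
      // Code before use() runs before each test, like a global beforeEach.
      await page.goto('https://example.com');
      await use();
      // Code after use() runs after each test, like a global afterEach.
      // testInfo.status holds the result ('passed', 'failed', etc.).
      if (testInfo.status !== testInfo.expectedStatus) {
        console.log(`Test "${testInfo.title}" did not pass as expected`);
      }
    },
    { auto: true }, // run for every test, even ones that don't request it
  ],
});

export { expect } from '@playwright/test';
```

Each spec file then starts with `import { test, expect } from './fixtures';` and gets the shared before/after logic without repeating any hook code.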