
Separate CI Job to run Pytest collection check #29923

Merged

Conversation

potiuk (Member) commented on Mar 5, 2023

Before we attempt to run tests in parallel, we quickly check once whether Pytest collection works. This avoids the cost of initializing all the parallel machines when parallel test execution makes no sense. This check used to be done in the "Wait for CI Images" step, but running it there has the undesirable side effect that it is not obvious that the collection is what fails, and it also prevents other jobs (for example static checks and docs building) from running. This means that the contributor does not get all the feedback that could be given immediately.

This PR moves the collection check into a separate job and makes only the "test" jobs depend on it - all the other jobs that need the CI image depend on the "wait for CI images" job and should continue running even if pytest collection fails.
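For illustration, here is a minimal GitHub Actions sketch of the dependency layout described above. Job names, runner labels, and commands are hypothetical placeholders, not the actual Airflow workflow definitions:

```yaml
# Hypothetical sketch of the job wiring described in the PR description.
name: Tests (sketch)
on: [push, pull_request]

jobs:
  wait-for-ci-images:
    runs-on: ubuntu-22.04
    steps:
      - name: "Wait for CI images"
        run: echo "wait for / pull the CI image built for this commit"

  test-collection:
    # Cheap, single-machine check that pytest can collect every test.
    needs: [wait-for-ci-images]
    runs-on: ubuntu-22.04
    steps:
      - name: "Verify pytest collection"
        run: pytest tests/ --collect-only -q

  tests:
    # Expensive parallel matrix - started only if collection succeeded.
    needs: [wait-for-ci-images, test-collection]
    runs-on: ubuntu-22.04
    strategy:
      matrix:
        test-type: ["Core", "Providers", "WWW"]
    steps:
      - name: "Run ${{ matrix.test-type }} tests"
        run: pytest tests/ -q

  static-checks:
    # Depends only on the image, so it keeps running if collection fails.
    needs: [wait-for-ci-images]
    runs-on: ubuntu-22.04
    steps:
      - run: pre-commit run --all-files

  docs-build:
    needs: [wait-for-ci-images]
    runs-on: ubuntu-22.04
    steps:
      - run: echo "build the docs here"
```

The key point is the `needs:` wiring: only the expensive test matrix waits for the collection check, while static checks and docs building fan out directly from the image-wait job.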



Read the Pull Request Guidelines for more information.
In case of fundamental code changes, an Airflow Improvement Proposal (AIP) is needed.
In case of a new dependency, check compliance with the ASF 3rd Party License Policy.
In case of backwards incompatible changes please leave a note in a newsfragment file, named {pr_number}.significant.rst or {issue_number}.significant.rst, in newsfragments.

potiuk (Member, Author) commented on Mar 5, 2023

This one will be very helpful when things like a new dependency release break test imports - only the real unit test jobs will be affected, and it will be clear that test collection is the culprit if that is the case :)
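For context, the quick check boils down to running pytest in collect-only mode, which imports every test module without executing any tests, so a broken import surfaces immediately instead of after spinning up the whole parallel matrix. A hypothetical step fragment (the real job wraps this in Airflow's own CI tooling):

```yaml
# Hypothetical step fragment - the real job wraps this in Airflow's CI tooling.
# `pytest --collect-only` imports every test module but runs no tests, so an
# import broken by a new dependency release makes this step fail fast with a
# non-zero exit code.
- name: "Check that pytest can collect all tests"
  run: pytest tests/ --collect-only -q
```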

CI diagrams are also updated to better reflect the optionality and parallelism of the CI jobs.
potiuk force-pushed the separate-job-for-testing-pytest-collection branch from d2e37de to 5642dc1 on March 5, 2023 08:16
potiuk (Member, Author) commented on Mar 5, 2023

I've also updated the Mermaid CI diagrams - you can take a look, @eladkal, at how a diagram-modification PR looks when it contains an embedded Mermaid diagram: when you click "display rich diff" you can compare the before and after diagrams visually.

[Screenshot 2023-03-05 at 09:18:56]

potiuk merged commit 30b2e6c into apache:main on Mar 5, 2023
potiuk deleted the separate-job-for-testing-pytest-collection branch on March 5, 2023 10:26
o-nikolas (Contributor) commented:

Love this one 🚀
