Fix bug with linter targets being skipped #10974
Merged
Conversation
Eric-Arellano approved these changes on Oct 16, 2020
Thank you Greg!
Eric-Arellano pushed a commit to Eric-Arellano/pants that referenced this pull request on Oct 16, 2020:
### Problem

We noticed an issue where, when running the `./pants lint` command on a large number of targets in a repository, some targets were being completely skipped by the flake8 process, resulting in the flake8 linter output falsely reporting all good when there were actually files in the repo with linter errors.

The problem turned out to lie in the `group_field_sets_by_constraints` method. This method takes as its input an unsorted collection of field sets corresponding to the input targets and groups them by their Python interpreter constraint. It is used as part of the pipeline for running the flake8 process on Python source files.

Internally, this method calls the Python standard library's `itertools.groupby`. It turns out that `groupby` does not work as expected with unsorted input data: it generates a new sub-iterable every time the sorting key changes (in this case, the interpreter constraint), rather than creating one sub-iterable per distinct sorting key in the input. Because we were feeding the output of this method into a dictionary comprehension, we were accidentally overwriting dictionary values in a non-deterministic way, resulting in some field sets getting skipped before the flake8 process could run on them.

### Solution

`group_field_sets_by_constraints` was rewritten to avoid using `itertools.groupby` altogether, so we no longer skip inputs, and a test was added to make sure that we handle unsorted field set inputs to this method correctly.
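The overwrite described above can be reproduced in a few lines. This is an illustrative sketch, not Pants' actual data structures: field sets are modeled as simple `(constraint, filename)` tuples, and the dict comprehension mirrors the shape of the buggy code.

```python
from itertools import groupby

# Hypothetical field sets keyed by interpreter constraint, deliberately unsorted.
field_sets = [
    (">=3.6", "a.py"),
    (">=3.7", "b.py"),
    (">=3.6", "c.py"),
]

# groupby() starts a new group every time the key changes, so this unsorted
# input yields TWO separate groups with the key ">=3.6" ...
grouped = {
    constraint: [filename for _, filename in group]
    for constraint, group in groupby(field_sets, key=lambda fs: fs[0])
}

# ... and the dict comprehension keeps only the last one, silently losing "a.py".
print(grouped)  # {'>=3.6': ['c.py'], '>=3.7': ['b.py']}
```

Any field set that lands in an overwritten group never reaches the linter, which matches the "falsely reporting all good" symptom.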
gshuflin added a commit that referenced this pull request on Oct 16, 2020:
(Same commit message as above.) Co-authored-by: gshuflin <[email protected]>
Eric-Arellano added a commit that referenced this pull request on Oct 16, 2020:
We discovered in #10974 that `itertools.groupby()` requires you to pre-sort the data to work properly. From https://docs.python.org/3/library/itertools.html#itertools.groupby:

> The operation of groupby() is similar to the uniq filter in Unix. It generates a break or new group every time the value of the key function changes (which is why it is usually necessary to have sorted the data using the same key function). That behavior differs from SQL's GROUP BY which aggregates common elements regardless of their input order.

[ci skip-rust] [ci skip-build-wheels]
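The cited docs behavior is easy to see side by side: the same data run through `groupby()` unsorted and then pre-sorted with the same key.

```python
from itertools import groupby

data = ["a", "b", "a", "b"]

# Unsorted input: a new group begins at every key change, giving four groups.
unsorted_keys = [key for key, _ in groupby(data)]
print(unsorted_keys)  # ['a', 'b', 'a', 'b']

# Pre-sorting with the same key collapses equal keys into one group each,
# which is the uniq-like behavior the Python docs describe.
sorted_keys = [key for key, _ in groupby(sorted(data))]
print(sorted_keys)  # ['a', 'b']
```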
Eric-Arellano added a commit to Eric-Arellano/pants that referenced this pull request on Oct 16, 2020:
(Same commit message as above, referencing pantsbuild#10974.)
Problem

We noticed an issue where, when running the `./pants lint` command on a large number of targets in a repository, some targets were being completely skipped by the flake8 process, resulting in the flake8 linter output falsely reporting all good when there were actually files in the repo with linter errors.

The problem turned out to lie in the `group_field_sets_by_constraints` method. This method takes as its input an unsorted collection of field sets corresponding to the input targets and groups them by their Python interpreter constraint. It is used as part of the pipeline for running the flake8 process on Python source files.

Internally, this method calls the Python standard library's `itertools.groupby`. It turns out that `groupby` does not work as expected with unsorted input data: it generates a new sub-iterable every time the sorting key changes (in this case, the interpreter constraint), rather than creating one sub-iterable per distinct sorting key in the input. Because we were feeding the output of this method into a dictionary comprehension, we were accidentally overwriting dictionary values in a non-deterministic way, resulting in some field sets getting skipped before the flake8 process could run on them.

Solution

`group_field_sets_by_constraints` was rewritten to avoid using `itertools.groupby` altogether, so we no longer skip inputs, and a test was added to make sure that we handle unsorted field set inputs to this method correctly.
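One order-insensitive way to group without `itertools.groupby` is a single accumulating pass over the input. This is a sketch of the shape of such a fix under simplified assumptions (plain tuples instead of Pants' field-set types), not the actual code merged in this PR.

```python
from collections import defaultdict

def group_by_constraint(field_sets):
    """Group (constraint, filename) pairs by constraint in one pass.

    Unlike itertools.groupby, this never splits one key into multiple
    groups, so input order does not matter and nothing is dropped.
    """
    grouped = defaultdict(list)
    for constraint, filename in field_sets:
        grouped[constraint].append(filename)
    return dict(grouped)

result = group_by_constraint(
    [(">=3.6", "a.py"), (">=3.7", "b.py"), (">=3.6", "c.py")]
)
print(result)  # {'>=3.6': ['a.py', 'c.py'], '>=3.7': ['b.py']}
```

With this approach every field set reaches its constraint's group, so the linter runs on all of them regardless of how the inputs were ordered.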