Add CI job to run on real OSS projects #1025

We've already seen in a handful of recent PRs that running on Realm Swift was beneficial in highlighting false positives.

There are a number of other open source projects that use SwiftLint that we could run new rules and bug fixes against, to help minimize the number of false positives.

If we do this, these integration tests should never fail a build, and would ideally follow Danger/Codecov's approach of posting a constantly-updating GitHub comment reporting the violations in these projects. (A `GitHubReporter`, anyone?)

Proactive contributors could also "fix" these style violations in these OSS projects! Everybody wins 😄.
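As a rough illustration of the reporter idea, here is a sketch that formats violations as a Markdown comment body. The `Violation` struct and function names are hypothetical stand-ins for illustration, not SwiftLint's actual types or API:

```swift
import Foundation

// Hypothetical stand-in for SwiftLint's violation model; the field names are
// assumptions for this sketch, not SwiftLint's real API.
struct Violation {
    let ruleID: String
    let severity: String
    let file: String
    let line: Int
    let reason: String
}

// Render violations as a Markdown table suitable for posting (and later
// updating) as a single GitHub comment, in the spirit of Danger/Codecov.
func gitHubCommentBody(project: String, violations: [Violation]) -> String {
    guard !violations.isEmpty else {
        return "✅ No violations found in \(project)."
    }
    var lines = [
        "### SwiftLint report for \(project)",
        "",
        "| Severity | Rule | Location | Reason |",
        "| --- | --- | --- | --- |",
    ]
    for v in violations {
        lines.append("| \(v.severity) | \(v.ruleID) | \(v.file):\(v.line) | \(v.reason) |")
    }
    return lines.joined(separator: "\n")
}
```

Posting one comment per project and then editing it on each run, rather than adding a new comment every time, would keep PR threads readable, which is what Danger and Codecov do.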
Comments
I really like this idea! 💯 I only wish CI were faster, so it could handle this as well as it deserves 😬
We could even generate the report with checkboxes, so we can mark whether each violation is valid or not ✅
It could have a "Regressions" section too, which would obviously be more important to address than, say, a valid violation of an unreleased rule.
We could run the projects against a stable SwiftLint version and the current one. Then we could flag the violations that only happen in the new version (and come from a rule that also exists in the stable version) as potential regressions. I say potential because we could have been improving an existing rule to flag more cases (such as …).
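As a rough sketch of what that comparison could look like, assuming the array-of-objects output of `swiftlint lint --reporter json` (the exact key names, and the `stable.json`/`branch.json` file names, are assumptions for illustration):

```swift
import Foundation

// One entry from `swiftlint lint --reporter json`. Property names mirror the
// assumed JSON keys directly, so no CodingKeys are needed.
struct ReportEntry: Decodable, Hashable {
    let rule_id: String
    let file: String
    let line: Int?
    let reason: String
}

// Load a report into a Set so two runs can be diffed cheaply. Crashing on a
// malformed report is acceptable for a CI script.
func loadReport(_ path: String) -> Set<ReportEntry> {
    let data = try! Data(contentsOf: URL(fileURLWithPath: path))
    return Set(try! JSONDecoder().decode([ReportEntry].self, from: data))
}

// Hypothetical file names: the same OSS project checkout linted with the
// stable release and with the branch under test.
let stable = loadReport("stable.json")
let branch = loadReport("branch.json")

// "Rule exists in stable" is approximated here by rules that appear anywhere
// in the stable report; a real script could ask the stable binary for its
// full rule list instead.
let stableRules = Set(stable.map { $0.rule_id })

// Violations that only appear with the branch build: if the rule also exists
// in stable, it's a *potential* regression (the rule may simply have been
// improved to catch more cases); otherwise it comes from an unreleased rule.
for v in branch.subtracting(stable) {
    let tag = stableRules.contains(v.rule_id) ? "potential regression" : "unreleased rule"
    print("[\(tag)] \(v.file):\(v.line ?? 0) \(v.rule_id): \(v.reason)")
}
```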
If I recall correctly, that's the approach that @scottrhoyt had started to take when working on performance tests (see #376 (comment)). Architecturally, I think that's slightly the wrong approach, and that it'd be preferable to compare …. But generally, we should try to reuse the same approach for building out both performance tests and OSS checks, since I think they can share some infrastructure.
My point in using the stable one is that we can mark as regressions just the unreleased rules.
This only really works if the project being linted has zero SwiftLint violations, which won't always be the case. I'd rather filter out pre-existing violations, so that PRs are really tested in isolation. Otherwise, imagine if we knowingly incur a violation in a PR. That will show up in every subsequent PR, even if the change is unrelated.
Fair enough 👍
I have an idea on how to build this efficiently-ish! I can set up an S3 bucket that Travis can write to. When Travis builds on
Then on PRs, we also generate these reports but don't push them to S3, and diff them against the most recent commit in the branch that has reports available. That way, we don't have to run SwiftLint twice on every CI job. We could leverage this same system for performance tests. |
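A sketch of that flow, assuming the AWS CLI is installed on the worker and using Travis's real `TRAVIS_PULL_REQUEST` variable; the bucket name and file names are made up, and for simplicity it uses the merge-base with `master` as the baseline commit rather than searching for the most recent commit that has a report:

```swift
import Foundation

// Minimal helper for this sketch: run a command and capture trimmed stdout.
@discardableResult
func shell(_ args: [String]) -> String {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
    process.arguments = args
    let pipe = Pipe()
    process.standardOutput = pipe
    try! process.run()
    process.waitUntilExit()
    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    return String(decoding: data, as: UTF8.self)
        .trimmingCharacters(in: .whitespacesAndNewlines)
}

let bucket = "s3://swiftlint-oss-reports" // hypothetical bucket name

if ProcessInfo.processInfo.environment["TRAVIS_PULL_REQUEST"] == "false" {
    // On pushes (simplified: a real job would also check TRAVIS_BRANCH ==
    // "master"): upload the freshly generated report, keyed by commit SHA.
    let sha = shell(["git", "rev-parse", "HEAD"])
    shell(["aws", "s3", "cp", "report.json", "\(bucket)/\(sha).json"])
} else {
    // On PRs: download the report for the merge-base with master and diff
    // against it, so SwiftLint only has to run once per CI job.
    let base = shell(["git", "merge-base", "HEAD", "origin/master"])
    shell(["aws", "s3", "cp", "\(bucket)/\(base).json", "baseline.json"])
    // ... then diff baseline.json against report.json as in the earlier sketch.
}
```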
This is nice, but to be honest I don't know if it's worth the complexity. In general, linting the files is fast compared to building and cloning the projects. It could be more interesting for performance tests though, as we'd keep a history.
That's true. And in most cases, incremental compilation should help minimize the time spent building two different versions of SwiftLint...
On second thought, compiling master could be a bit tricky because the CI machine might not have the latest `master`. How crazy would it be to store the reports in a git repo? 😳
Not sure what you mean here. The CI workers have the git repo, so you can just do a `git fetch`.
More or less equivalent to the idea of storing them in S3. Either way, you need some sort of write access to somewhere. S3 has the advantage of being files that can easily be deleted after a certain age (if we want). What's in git always sticks around in git history, which could lead to large repos.
The issue was that some CI providers use a shallow copy, so you'd need to run `git fetch --unshallow` first.
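A minimal sketch of working around a shallow clone before building `master` for comparison; the helper and its error handling are illustrative only:

```swift
import Foundation

// Minimal helper: launch a command and wait for it, logging launch failures.
func run(_ args: [String]) {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
    process.arguments = args
    do {
        try process.run()
        process.waitUntilExit()
    } catch {
        print("failed to launch \(args): \(error)")
    }
}

// On CI providers that clone shallowly, master may not be reachable, so fetch
// the full history first. `git fetch --unshallow` exits with an error if the
// clone is already complete, which is harmless here.
run(["git", "fetch", "--unshallow"])
run(["git", "fetch", "origin", "master"])
```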
First take at this done in #1056. It's by no means perfect: it'd be nice to be able to run it locally (without mangling the git environment), to improve the output in GitHub comments, better linking (it's quite fragile at the moment), more projects, etc. But we can track these as separate issues if we find that they'd be valuable in the future.