Update README.md to add Codefactor badge #81823
base: master
Conversation
Recently, I have seen comments on the internet saying that Godot's code is pretty bad. Codefactor says otherwise. This badge will show the code's rating, and upon clicking on it, it will show more details and a rating for each file in the repository. It's free, and major projects like ppy/osu have this badge.
I looked over the checks. Going through arbitrary code checks because some random people said the code is bad is a rather questionable action. Then again, this is assuming we want to do anything about it. It would be useful to know how the code is scored and whether the checks can be customized to only include things that we'd want to improve (to prevent random contributors jumping to PRs for the sake of improving some "score"). That's just my thoughts. We used to have an LGTM badge; I think adding a similar one isn't a big deal anyway.
Also, I've seen cases where I agree that a number of the checks feel arbitrary, and having a "B" score over arbitrary checks doesn't feel helpful or make us look good.
Yeah, that was poor reasoning on my part. I have just updated the OP. @KoBeWi At the end of the day this isn't a deep issue: the badge mainly serves as a reference for out-of-org developers, and if checks are implemented, it mainly serves to help developers with potential code issues. At the very least, these were my two main ideas when I decided to open a PR concerning CF.
It shouldn't catch any errors; it's just a matter of convention. It's saying that instead of `virtual get_name() override;` we should just use the equivalent `get_name() override;`. But we explicitly chose to keep both for clarity. When I'm looking for virtual methods I can override, I look at the left side. Having `virtual` there makes them easy to spot.
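For illustration, here is a minimal sketch of the two declaration styles under discussion, using hypothetical class names rather than actual Godot code:

```cpp
#include <string>

// Hypothetical classes for illustration only; not actual Godot code.
class Resource {
public:
    virtual std::string get_name() const { return "Resource"; }
    virtual ~Resource() = default;
};

// The style the check prefers: `override` alone, since it already implies `virtual`.
class ImageTexture : public Resource {
public:
    std::string get_name() const override { return "ImageTexture"; }
};

// The style Godot keeps: the redundant `virtual` on the left makes overriding
// methods easy to spot when scanning declarations top to bottom.
class Mesh : public Resource {
public:
    virtual std::string get_name() const override { return "Mesh"; }
};
```

Both forms are valid C++ and compile to the same thing; the disagreement is purely about which redundancy aids readability.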
But then the badge actually being indicative of real code quality is central, no? And getting a "B" score for things like "very complex method" or similar gives us a pretty mediocre image and isn't helpful.
If you don't want to see Codefactor implemented as a check, fine, I understand. As I stated previously, you can treat CF analysis results as suggestions or, like some projects, use it as a requirement to make sure that the code enters the repository clean. Though I feel like this type of discussion would be out of scope for this PR. I can open a feature request in the Godot feature repo about this if you would like.
I have no strong feelings about that itself, but I do about what I feel are arbitrary scores making Godot look bad over things that aren't really metrics of good code. Does that make sense?

What does "very complex method" mean? How is it a negative in itself? Would it really be better to split a method into multiple sub-methods? Would that really improve the code? Some things just can't be done in brief code, and to me at least, breaking a large method up into smaller sub-methods (unless you're doing it for repeated code) hurts readability rather than improves it. And as others have stated, the checks seem very arbitrary.

I just don't think it looks good to have a status badge scoring us over things we have no control over (like third-party code) or over rules that are arbitrary (and don't follow any prescribed standard; `virtual` + `override` isn't a violation of the standard, etc.). Note that out of our total of about 17 million lines of code, some 6 million are third-party code over which we have no control, but I'm not sure how much that affects the score.

Do my concerns make sense at all?
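To make the trade-off concrete, here is a hypothetical sketch (the function and its thresholds are invented, not taken from the Godot codebase) of the kind of method a "very complex method" check tends to flag, simply because of its branch count:

```cpp
// Hypothetical example: a complexity checker scores this method highly
// because of the number of branches, even though the branches form one
// coherent piece of logic that reads naturally top to bottom.
int classify_damage_state(int health, int armor, bool poisoned, bool burning) {
    if (health <= 0) {
        return -1; // Dead; no further scoring.
    }
    int severity = 0;
    if (health < 20) {
        severity += 2; // Critical.
    } else if (health < 50) {
        severity += 1; // Wounded.
    }
    if (armor > 0) {
        severity -= 1; // Armor mitigates.
    }
    if (poisoned) {
        severity += 1;
    }
    if (burning) {
        severity += 1;
    }
    return severity;
}
```

A checker-satisfying refactor would extract helpers (say, a severity_from_health() and a severity_from_status(), names invented here); whether that genuinely improves readability or merely scatters the logic is exactly the judgment call being debated.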
Okay, I get what you are saying now. If you click on the "issue" on the site, it will give the reasoning for why it was marked as an issue and will give refactoring suggestions right below it.
Well, the check's description says:
A related Stack Overflow question (apparently this lines up with the C++ guideline): https://stackoverflow.com/questions/39932391/should-i-use-virtual-override-or-both-keywords
Out of curiosity, GitHub acquired LGTM and integrated its features into GitHub Code Scanning (see here).
@KoBeWi I strongly suspect some sort of cyclomatic complexity counter.
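For reference, cyclomatic complexity is conventionally counted as the number of decision points in a function plus one. A minimal sketch (hypothetical code, not from Godot) of how such a counter scores a function:

```cpp
// Cyclomatic complexity = number of decision points + 1.
// A counter walking this function tallies each branch:
bool in_range(int value, int low, int high, bool inclusive) {
    if (value < low) { // Decision point 1.
        return false;
    }
    if (inclusive) { // Decision point 2.
        return value <= high;
    }
    return value < high; // 2 decision points + 1 = complexity of 3.
}
```

A "very complex method" warning typically fires once this count crosses some configurable threshold.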
As an arbitrary rating, it's too finicky to make contributors proud of achieving a high score with the tool. As an analyzer that suggests issues in the project, it gets a lot of things wrong, but at the same time can incentivize a new contributor to follow its guidance and open a PR that the project doesn't need. You are free to use any tools that can help find issues with the Godot codebase, and we often have users publish static analyzer reports as actionable issues. But adding a badge like that would make it look official and endorsed, when in fact we give no such recommendation to follow this particular tool's advice. So for these reasons I'm against adding such badges. Thanks for expressing interest in contributing nevertheless!
I personally disagree with this. The only thing I see this badge incentivizing is PRs that fix the issues detected. I actually view this as a positive, since it will clean up dirty code in the repository.
Codefactor mainly uses cpplint, a tool used and maintained by Google for style compliance. I highly doubt Google would release a faulty tool that they themselves use. So far, the only "wrong" issue mentioned in this discussion is the fact that the gcc path is incomplete (I'd argue this is correct, since GCC may not be defined in $PATH under certain environments). If you find any incorrect detections, feel free to point them out, but in my opinion, no one has mentioned anything that is outright wrong yet. Sure, once in a while something may be flagged incorrectly, but overall I do not see harm in suggestions that are usually correct.
The "arbitrary" detections issue was since resolved upon me explaining that you can click the issue, showing an explanation for the detection and refactoring solution(s). |
I think these tools can have value, but they also don't have to be in the README. I would prefer listing relevant code analysis platforms in the contributor documentation, with explanations of what to make of those results.
Not all issues it detects are things we would want fixed, much less fixed by a first-time contributor (which is what such a badge would incentivize). We've had our fair share of PRs from first-time contributors "refactoring" a core part of the codebase because a static analyzer pointed at something, and those are usually very difficult to review and ultimately end up rejected because the contributor was not familiar enough with the codebase to do things the right way. For example, I wouldn't want a first-time contributor to start refactoring
(Badge image: this is how the badge would appear on the README.)
(Side note: you can also set up Codefactor as a check, scanning PRs for potential code issues.)