Fixes ImageMetricsTestCase #39044
Conversation
This fixes the annoying test failure with `org.opentest4j.AssertionFailedError: Expected analysis_results.fields.reflection to be within range [163 +- 3%] but was 168 ==> expected: <true> but was: <false>`
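For context, the assertion in the error message is a simple percentage-tolerance check: 163 ± 3% accepts values roughly between 158.1 and 167.9, so a measured value of 168 falls just outside the upper bound. The sketch below (a hypothetical helper, not the actual `ImageMetricsTestCase` code) illustrates the kind of check involved:

```java
// Hypothetical sketch of a percentage-tolerance check, as implied by the
// assertion message; this is not the actual Quarkus test code.
public class ToleranceCheckSketch {

    // Returns true if `actual` is within `tolerancePercent` of `expected`.
    static boolean withinTolerance(double expected, double actual, double tolerancePercent) {
        double delta = expected * tolerancePercent / 100.0;
        return actual >= expected - delta && actual <= expected + delta;
    }

    public static void main(String[] args) {
        // 163 +- 3% accepts roughly [158.11, 167.89], so 168 fails the check.
        System.out.println(withinTolerance(163, 168, 3)); // false
        System.out.println(withinTolerance(163, 166, 3)); // true
    }
}
```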
cc @zakkak
Works for me.
Also, I wonder if these tests even make sense in our current situation... They would make sense if we were able to react promptly to a failure, analyze it, and propose a fix if necessary, but AFAICS everybody involved (me included) has higher priorities, so... it looks like a losing battle.
@gastaldi where did this show up? Is it in Quarkus' CI? If yes, what triggered it? If not, you have the option to (and should) disable these tests by setting the
@yrodiere can you clarify what the current situation is? Do you mean that such issues keep popping up, or something else?
I can see how this might be frustrating, but it's necessary to avoid ending up with issues like #38683 (which would probably have been caught much earlier if the tests had been in place). Some ideas to improve the current situation (as I understand it):
I was mostly talking about my own situation TBH: I have too many things on my plate already, so I can't really spare cycles investigating these failures (even though technically I'm "responsible" for these extensions). If you can afford to prioritize this yourself though, I can definitely ping you.
I think you're spot on about the downsides, and they would probably make the situation worse, unfortunately... The only zero-downside improvement I can see would be to find a way to notify "code owners" when tests of their "owned" extension start failing on non-PR builds. That's probably something the bot could handle, based on the already-configured automatic issue labeling, right @gsmet? With something like that I would at least have known of the failure earlier and could have pinged you. Though if we're being honest, I failed to ping you even after I noticed the failure, so that would still require more rigor from me.
A lot of PRs were reporting it, like #39009 (comment) and #39015 (comment), which is why I created it.
Great, I'll revert it if it's already fixed, then.
Thanks for the clarification. Unfortunately that's not going to work either. The aim of the tests is to let you (the Quarkus developers) know that your code changes are affecting the native image binaries. Even if I notice the change, in most cases I can't really tell if that's OK or not, e.g. you might introduce some new functionality which totally justifies the increase of the binary size. On the other hand you might just use some new API like in #38820 that will fetch more things. In that case I understand that it's not always trivial (for both of us) to judge whether the increase is welcome or not.
That shouldn't happen though, right? If the tests start failing without first failing in a PR then the change that triggered the failures is external to the Quarkus repository (it's most likely a change in the builder image, so it's an issue for us, the mandrel team). Do you have any examples in mind?
+1, it wouldn't hurt if the bot could ping me in such cases, as long as that doesn't mean I am always "assigned" to resolve the issue.
If we are still talking about the specific failures (and not some earlier issue), even though they got ignored in the PR that introduced them, they quickly came to our attention (the breaking PR was merged on the 20th, the tracking issue was created on the 21st thanks to @gsmet's ping) and eventually got fixed after a few days (on the 27th). IMO if we/you:
Things shouldn't break or stay broken for long periods of time.
Failures external to the Quarkus repository are the only ones I'm concerned about TBH. If I break stuff in my PR, that PR won't get merged until I find the time to debug it :)
I thought this went under your radar since @gastaldi submitted the fix, but yeah it seems the fix you submitted in parallel addressed the same issue.
Alright, well let's try that :)
I wish we had GitHub teams for that... and not just for the Mandrel team, as it's definitely a problem with other extensions too.