
Increase binary size baseline for postgresql IT with Mandrel 23.0 #38647

Merged

Conversation

zakkak (Contributor) commented Feb 7, 2024

CPU updates seem to have a small effect on the binary size (~0.6%
increase).

The new baseline is based on a build with the mandrel builder image with
tag `23.0.3.0-Final-java17-2024-02-04`

Closes #38563

@quarkus-bot quarkus-bot bot added area/hibernate-orm Hibernate ORM area/persistence OBSOLETE, DO NOT USE labels Feb 7, 2024
quarkus-bot bot commented Feb 7, 2024

/cc @gsmet (hibernate-orm), @yrodiere (hibernate-orm)

yrodiere (Member) commented Feb 7, 2024

Hey. Seems this is happening every other month... you might want to increase the tolerance?

quarkus-bot bot commented Feb 7, 2024

Status for workflow Quarkus CI

This is the status report for running Quarkus CI on commit 5253dd7.

✅ The latest workflow run for the pull request has completed successfully.

It should be safe to merge provided you have a look at the other checks in the summary.

You can consult the Develocity build scans.

zakkak (Contributor, Author) commented Feb 7, 2024

> Hey. Seems this is happening every other month... you might want to increase the tolerance?

The first changes were due to insufficient testing (this kind of testing was freshly introduced and didn't cover all corner cases). This particular one is related to a CPU release (such releases happen once every 3 months) and might indeed become a recurring thing.

Right now the threshold is set to 3%. I am OK with increasing it to 5%, but I would prefer someone more involved in performance testing to make that decision.

The larger the threshold, the later we will notice a performance degradation.

It is also worth noting that in this case we didn't need to bump the baseline because of a 3% increase from the baseline, but because of graalvm/mandrel#654, which causes tests run with the Red Hat build of Mandrel to hit the threshold. Perhaps we should use a different baseline for Red Hat builds, but that's not trivial as Quarkus currently doesn't distinguish them.

cc @franz1981 @rsvoboda
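For readers unfamiliar with how such a relative-threshold check works, here is a minimal sketch. The class name, baseline values, and helper method are hypothetical illustrations, not the actual `ImageMetricsITCase` code; only the 3% threshold and the ~0.6% increase come from the discussion above.

```java
// Hypothetical sketch of a relative binary-size threshold check.
// Names and byte counts are illustrative only.
public class BinarySizeCheck {

    // Returns true when the measured size stays within `tolerancePct`
    // percent of the recorded baseline (in either direction).
    static boolean withinThreshold(long baselineBytes, long measuredBytes, double tolerancePct) {
        double deltaPct = 100.0 * Math.abs(measuredBytes - baselineBytes) / baselineBytes;
        return deltaPct <= tolerancePct;
    }

    public static void main(String[] args) {
        long baseline = 50_000_000L;   // hypothetical recorded baseline
        long measured = 50_300_000L;   // ~0.6% larger, as in this PR

        // A 0.6% increase passes the 3% threshold.
        System.out.println(withinThreshold(baseline, measured, 3.0));
        // A 4% increase would fail it and require a baseline bump.
        System.out.println(withinThreshold(baseline, (long) (baseline * 1.04), 3.0));
    }
}
```

This also illustrates the trade-off mentioned above: raising `tolerancePct` to 5% would absorb CPU-release bumps, at the cost of noticing real size regressions later.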

jedla97 (Contributor) commented Feb 7, 2024

From a QE perspective, we test this regularly with the Red Hat build of Mandrel, so we are probably the only/first ones hitting this.

> Perhaps we should use a different baseline for Red Hat builds, but that's not trivial as Quarkus currently doesn't distinguish them.

It would probably be possible to distinguish them when `-Dquarkus.native.builder-image=registry.access.redhat.com/quarkus/mandrel-*` is used (we use it while testing), or more generally whenever the `quarkus.native.builder-image` property points at a RH Mandrel image.

Another proposal would be to move these metrics/performance tests into a standalone profile and run them weekly, or whenever a new version of Mandrel/GraalVM is released.
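The suggestion above could be sketched roughly as follows. The property name `quarkus.native.builder-image` and the Red Hat registry prefix are taken from the comment; the class, method, and baseline values are hypothetical, and the real test would read the configured image through Quarkus configuration rather than a raw system property.

```java
// Hedged sketch: select a per-distribution baseline by inspecting the
// configured builder image. Baseline byte counts are made up for
// illustration; only the registry prefix comes from the discussion.
public class BaselineSelector {

    static long baselineFor(String builderImage) {
        if (builderImage != null
                && builderImage.startsWith("registry.access.redhat.com/quarkus/mandrel-")) {
            return 51_000_000L;   // hypothetical baseline for the Red Hat build of Mandrel
        }
        return 50_000_000L;       // hypothetical baseline for upstream Mandrel/GraalVM
    }

    public static void main(String[] args) {
        // In a real test this would come from Quarkus config; a plain
        // system property is used here to keep the sketch self-contained.
        String image = System.getProperty("quarkus.native.builder-image", "");
        System.out.println("Selected baseline: " + baselineFor(image));
    }
}
```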

@gsmet gsmet merged commit 5676145 into quarkusio:main Feb 7, 2024
19 checks passed
@quarkus-bot quarkus-bot bot added this to the 3.9 - main milestone Feb 7, 2024
gsmet (Member) commented Feb 7, 2024

Let's merge this for now, improvements can be made in a follow-up PR.

@zakkak zakkak deleted the 2024-02-07-update-imagemetrics-baseline branch February 7, 2024 17:46
Successfully merging this pull request may close these issues.

The ImageMetricsITCase failing with RH Mandrel because postgres is bigger than expected