Build and publish pydeephaven-ticking wheels #5290
Conversation
This does not upload the wheels to PyPI; our first release of pydeephaven-ticking to PyPI should be manual. After that, we can create the necessary project tokens and automate the final steps. Prerequisite for deephaven#5288
I'm not accounting for what it builds, but it does build. And the workflow looks right. And I see the artifact in the appropriate build directory. (Though the wheel file name is crazy long.)
This unconditionally builds all of the wheels as part of the CI build / test process. That may or may not be appropriate. A reasonable alternative would be to stick with one wheel / test except for the nightly / release process. If we want to go that route, it will involve some new build configuration logic.
This is currently failing with
We may need to break up the monolithic build process (all wheels in one docker image) into separate build processes.
py/client-ticking/build.gradle (Outdated)

    'ubi-minimal')
    def isCi = System.getenv().getOrDefault("CI", "false") == "true"

    def assembleWheelsSet = isCi
Is this logic inverted?
It sounds like the "true" case for "is this CI" has more cases than the "false" case, where I would expect the opposite.
Or rather, which CI? The GitHub check that runs on every PR? The CI that builds artifacts for release?
As written, this will cause any code executing in "CI" to get the full wheel set. If we want to limit it to, say, "only build (/test) all of the wheels during the nightly build checks or during release," there is some additional configuration logic we'll need to develop. That isn't a dimension we've sliced-n-diced on before.
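A minimal sketch of how that gating could look in build.gradle, assuming a hypothetical deephaven.fullWheelSet Gradle property and illustrative wheel names (neither is from this PR):

    // Hypothetical: nightly/release pipelines opt in with -Pdeephaven.fullWheelSet=true;
    // ordinary PR checks fall back to a single wheel.
    def fullWheelSet = project.findProperty('deephaven.fullWheelSet') == 'true'

    def defaultWheels = ['cp312-manylinux']
    def allWheels = ['cp39-manylinux', 'cp310-manylinux', 'cp311-manylinux', 'cp312-manylinux']

    def assembleWheelsSet = fullWheelSet ? allWheels : defaultWheels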
There are errors getting this to run in CI; it appears that the size of the pydeephaven-ticking images causes the runner to run out of disk space. The standard runners have 14GB of disk space (https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners#standard-github-hosted-runners-for-public-repositories), and the pydeephaven-ticking images total ~5.5GB, so it's not surprising that this could bump us past the limit. We may need to investigate alternative strategies for building and testing these wheels. (It may be as simple as deleting the docker image after the gradle task is done?)
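If the simple fix pans out, a hedged sketch of the cleanup idea (the task and image names here are hypothetical, not the ones in this PR):

    // Hypothetical: free the ~5.5GB of image layers once the wheels have been
    // copied out, since the standard runners only have 14GB to work with.
    tasks.register('removeTickingWheelImage', Exec) {
        commandLine 'docker', 'rmi', '--force', 'deephaven/py-client-ticking:local-build'
    }
    tasks.named('assembleWheels').configure {
        finalizedBy 'removeTickingWheelImage'
    }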
Moved back to draft to acknowledge the issues with the current approach.
Looks good to me
…ve logic into the entrypoints.
This is back up for review. The ticking wheel build process doesn't actually depend on pydeephaven, so removing pydeephaven and its transitive dependencies (arrow, etc.) from that process is a big win in terms of disk space. The majority of the disk-space-consuming logic has been moved into the entrypoints, which means the logic is executed at the startup of the container and isn't persisted to a docker image layer. The container still has some sort of overlay filesystem, but it gets cleaned up after the container is deleted (once the relevant output files have been copied out). This should not adversely affect cacheability, as gradle should still cache the output as long as none of the inputs have changed. There is a lot of room for maintainability, build, and testing improvements, but that can be left for a later date. I think there is good potential to consider these sorts of future improvements with respect to #3723
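A rough sketch of the run/copy/delete sequence described above, with made-up container, image, and path names:

    // Hypothetical: the entrypoint does the heavy work at container startup,
    // so nothing is committed to an image layer; the wheels are copied out and
    // the container (with its overlay filesystem) is deleted afterwards.
    tasks.register('buildTickingWheels', Exec) {
        commandLine 'docker', 'run', '--name', 'ticking-wheels',
                'deephaven/py-client-ticking:local-build'
    }
    tasks.register('copyTickingWheels', Exec) {
        dependsOn 'buildTickingWheels'
        commandLine 'docker', 'cp', 'ticking-wheels:/out/wheels/.', "$buildDir/wheels"
        finalizedBy 'removeTickingWheelContainer'
    }
    tasks.register('removeTickingWheelContainer', Exec) {
        commandLine 'docker', 'rm', 'ticking-wheels'
    }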
Quick pass mostly focusing on the gradle wiring.
Why not go all out and split the tasks, so we can omit the rm step and just let the overlayfs be deleted?
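For concreteness, the suggestion amounts to something like this (hypothetical names again); with --rm, the docker daemon deletes the container and its overlay filesystem as soon as the entrypoint exits, so no explicit rm task is needed:

    // Hypothetical: bind-mount the output directory so the wheels survive the
    // container, and let --rm clean up the overlayfs automatically on exit.
    tasks.register('buildTickingWheels', Exec) {
        commandLine 'docker', 'run', '--rm',
                '-v', "$buildDir/wheels:/out/wheels",
                'deephaven/py-client-ticking:local-build'
    }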
Fixes #5288