[WIP] Add podman support alongside docker for dev #1135
Conversation
Note that this gets polis running, but as mentioned in my follow-up to #1060, it doesn't totally solve the dev workflow questions I have with the repl. Additionally, this still uses docker-compose and just switches out docker for podman internally -- it turns out that podman is now compatible with docker-compose, which is nice, so kubernetes yaml is not immediately necessary for a basic development workflow without Docker Desktop!
Sorry for the churn. Old instructions required a bunch of virtualbox setup etc etc, but apparently as of two days ago it uses qemu under the hood, which dramatically simplifies things. Eventually all the socket business should get streamlined under the hood too, but for now this works.
Note that the revised simplified instructions that sidestep virtualbox successfully build the images, but the use of docker-compose runs into a new problem, noted upstream: containers/podman#11413
Monthly-ish update (@metasoarous cc'd too; figure this is a better place for the conversation than deep in #1060): Podman may eventually be the preferred macOS outcome here, but it requires upstream changes to QEMU. After a bit of a deeper dive, all containerization solutions on macOS (including docker) operate by running a Linux VM, which gets a nice performance boost via macOS's Hypervisor framework et al., but to @metasoarous's point it's still not quite as clean as a normal chroot. Eventually QEMU will probably need to support better file I/O on macOS for this to work well (I bit the bullet and am trying to push that forward at https://lists.nongnu.org/archive/html/qemu-devel/2021-10/msg03006.html), but for now the best working solution on Mac, with admittedly non-ideal file I/O performance, is lima, which includes the docker-compose-compatible interface nerdctl.

Long story short, for now, a functional Mac development workflow which (generally) seems to work on my 2017 MacBook Pro:
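A minimal sketch of that workflow, assuming lima is installed via Homebrew (exact commands and flags may differ from what I'm running locally):

```sh
# Install lima, which manages a QEMU-backed Linux VM with containerd + nerdctl inside
brew install lima

# Create and start the default VM
limactl start default

# Build and run the compose stack via nerdctl's docker-compose-compatible CLI,
# executed inside the VM
lima nerdctl compose up --build
```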
Once QEMU sorts its stuff out and Podman picks a path forward, I'll update this PR to see what remains. The only pending issue I currently see ultimately needing to sit in this PR might be a decision around ports. Podman and Lima, unlike Docker, tend to encourage users to run containers rootless, which generally seems like a reasonable security measure, but it means that things like maildev on port 25 run into permission errors. Once I get a little further into this I may ask about some options (perhaps a second compose yml file) that keep ports above 1024.
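For concreteness, a sketch of what such a second compose file could look like — the service name and port numbers are illustrative, not taken from the repo's actual compose setup:

```sh
# Write an override file that publishes only unprivileged (>1024) host ports,
# e.g. remapping maildev's SMTP port off of 25. Names and ports are assumptions.
cat > docker-compose.rootless.yml <<'EOF'
services:
  maildev:
    ports:
      - "1080:1080"   # web UI
      - "1025:25"     # SMTP, published on an unprivileged host port
EOF

# Use both files together. Note that docker-compose merges list fields like
# ports across files, so the privileged mapping would still need to come out
# of the base file.
docker-compose -f docker-compose.yml -f docker-compose.rootless.yml up
```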
This is great @willcohen! Thanks so much for continuing to push on this. My only question with the PR as it stands is why all of the image names have been modified to point to `docker.io/`. Thanks again!
It's a change that's no longer relevant; I just haven't updated this PR yet to undo it until I get a fully working environment. Pre-v3.4 podman insisted on fully qualified domain names to avoid spoofing registries, but backed off of that once they did a wider release since it would invalidate so many Dockerfiles.
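Concretely, the difference is just the registry prefix on image references — for example:

```sh
# Short name: the engine has to infer the registry (Docker assumes docker.io;
# podman < 3.4 refused to guess, and some distro policies still do)
podman pull node:11.15.0-alpine

# Fully qualified: unambiguous under docker, podman, and nerdctl alike
podman pull docker.io/node:11.15.0-alpine
```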
Got it; Thanks for the clarification. For now, let's go ahead and revert those changes, but I'll add an issue to consider whether we want to fully qualify in the future. I'll need to spend some time thinking through (and researching) all the implications there before pulling the trigger on that, but it may end up being a good idea even if not strictly necessary. Thanks again!
I actually stand partially corrected -- I wanted to double check what I said, but it turns out I may have spoken too soon. Fedora (which is podman-centric even though it's supposed to be pretty docker compliant) still chokes on the non-qualified domain names, and nerdctl (which still does run containerd, so it's not that far off from vanilla docker) is getting tripped up as well.
No apologies necessary; Thanks again for your persistence!
Regarding the maildev port: I don't know how strict the port requirements are there. maildev is just for dev anyway, so it may not be a big deal to move it; whatever gets that piece working is fine.

Since there's still friction here, I thought I'd mention that I've seen some talk of Rancher being used to replace Docker Desktop, but it may be more restricted in capacity to building, and not be as appropriate as a dev runtime like we're looking for here.

Thanks again
I saw that too! Rancher Desktop uses lima underneath, so solving it for lima should solve it for Rancher. Podman (via Fedora, so no Mac in the picture) is able to run everything via docker-compose as long as everything is fully-qualified with `docker.io/`.
What images exist on Docker Hub?
Issue submitted with nerdctl, since it appears specific to that implementation of compose: containerd/nerdctl#434
Fully qualify domain names in Dockerfiles and docker-compose. Move the build stages for client-report, client-participation, and client-admin into the file-server Dockerfile, to avoid issues with non-Docker build daemons being unable to access local image stores. This may be an issue with upcoming Docker versions as well. This commit does not fully work, however, with issues still pending about serving the newly built files from the right location.
@metasoarous pushed with a partial fix. Per the issue submitted with limactl, it looks like non-Docker build daemons can run into trouble accessing locally built images. A not-totally-ideal fix for this is to move the build steps for client-report, client-participation, and client-admin into the file-server Dockerfile. I updated the branch to fully qualify all domain names and make the corresponding docker-compose changes. For now the branch still includes the three existing client Dockerfiles as well.
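To make the shape of that change concrete, here is a rough sketch of a consolidated file-server Dockerfile — the stage names, paths, build commands, and final base image are placeholders, not the repo's actual ones:

```sh
# Illustrative only: merging the client build stages into the file-server
# Dockerfile means no intermediate images have to live in a local image store.
# Paths, stage names, and build commands are assumptions about the repo layout.
cat > file-server/Dockerfile.sketch <<'EOF'
# Gulp v3 stops us from upgrading beyond Node v11
FROM docker.io/node:11.15.0-alpine AS participation-build
WORKDIR /build
COPY client-participation/ .
RUN npm install && npm run build

FROM docker.io/node:11.15.0-alpine AS admin-build
WORKDIR /build
COPY client-admin/ .
RUN npm install && npm run build

FROM docker.io/node:11.15.0-alpine AS report-build
WORKDIR /build
COPY client-report/ .
RUN npm install && npm run build

# Final stage: a static file server holding the built client bundles
FROM docker.io/nginx:alpine
COPY --from=participation-build /build/dist /usr/share/nginx/html/
COPY --from=admin-build /build/dist /usr/share/nginx/html/admin/
COPY --from=report-build /build/dist /usr/share/nginx/html/report/
EOF
```

Built from the repo root as the context, each client stage only contributes its build output to the final image, so a single `docker build` / `nerdctl build` covers all three clients without needing locally stored intermediate images.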
@ballPointPenguin @patcon I am also noticing that similar issues around changing the Dockerfile contexts came up here with another docker PR: #553 (comment)
# Gulp v3 stops us from upgrading beyond Node v11
FROM docker.io/node:11.15.0-alpine

WORKDIR ../../client-participation/app
This should probably be just `/client-participation/app`. Or even `/app`, since you end up getting a new image for each call to `FROM`, IIUC.
Hey @willcohen. Thanks again for pushing this forward!
I don't think this is the case, based on later responses from the thread you linked to, but I (and the comment author there) could be mistaken.

Regardless, I'm feeling a bit mixed about merging all of the client build processes this way. We've talked about moving all of the client code into a single subdirectory, with a unified build process, so there's maybe a case to be made that this helps move us in that direction. This may also help me with the task I'm currently working on (unifying the static asset deployment process with the rest of our heroku deployment infrastructure), but there are potentially some other directions I can go there.

I suppose a good dose of my objection is simply aesthetic, because I don't see any concrete technical problems with this approach. I may have to sleep on it a bit.
That's a bit surprising; the build steps look the same. Did you check to see if other targets are being properly created there? I was initially a bit suspicious there as well. Please let me know if you're able to figure out what's going on here, and if not I'll try to take a look later this week. Thanks again!
Really appreciate all this work @willcohen! 🎉 I'll try to review over the weekend for a third opinion, if helpful. Disclaimer: I still need to really digest all that's written here in this issue.

If I'm understanding Chris correctly, I'm also feeling a little uneasy about moving the build context up a level. This means that all the files are sent into each container, which feels tangled. I recall there was loose consensus that this felt like a "code smell" when we had a couple people in the convo pooling brainpower on docker conventions. But out of respect for all your work, I'm trying not to have a strong opinion :) Just hoping to understand if there's another way. As I said, I'll re-read the thread later.

Potential alternative: You mention that this came about because podman now handles docker-compose files rather than requiring kubernetes. Would it be helpful to instead use that?

EDIT: Further thoughts, as things come back to me. This would seem to make for a proliferation of docker images, since any change regenerates the whole cascade of multi-GB images, since they're now cache misses at each layer.

Also, less important, but for those using docker to develop: I think using this single container breaks the incremental building that's possible when one just wants to rebuild the one container being worked on, without having to rebuild the whole thing. This is mostly what makes development in docker feel tolerable, since it's pretty quick when the other containers and layers are cached.
Thanks for sharing your thoughts @patcon.
I think it only means that all of the files will be pushed in for the client build process, which doesn't seem as bad. This could also be gotten around by putting all of the clients under a single subdirectory.
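To make the mechanics concrete, "moving the build context up a level" just means the compose file points the client build at the repo root while the Dockerfile stays with the service — roughly like this (service name and paths are illustrative, not the repo's actual ones):

```sh
# Sketch of a compose service whose build context is the repo root, so the
# Dockerfile can COPY from the client directories. Names and paths are assumptions.
cat > docker-compose.context-sketch.yml <<'EOF'
services:
  file-server:
    build:
      context: .                          # repo root, so client dirs are visible
      dockerfile: file-server/Dockerfile  # Dockerfile stays with the service
EOF
```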
Is this still true if we remove the individual images, and just have one single image?
@colinmegill has been working on a server branch with live code reloading, and there's already something like this for the Clojure/math component (a live REPL connection you can use for re-evaluating code, so a little different, but similar outcome). Seems like the right way to go to improve developer flow for the clients is to just add live code reloading for them as well, since this solves the problem.
To me this is really the crux of the issue. Is it our goal to eventually consolidate the individual clients into a single build system with multiple targets? This would remove a lot of boilerplate and duplication, but is also a bit of work (as mentioned above) because of how old the tech in the participation client is. What are your thoughts on this @colinmegill?

In conclusion: right now this PR is potentially useful to us, as I'm trying to consolidate the client build and deploy infrastructure in a way that allows Heroku to run these tasks, so that we have a consistent/comprehensive deploy workflow for our production instance. This appears to only be possible if we coalesce all of the client build steps into a single Dockerfile. To test out whether this works, I'm going to merge into a dedicated branch and take it from there.

Let's continue discussing here to preserve context; if we decide to merge into dev, we can start a new PR for that. Thanks!
Apologies for the delay in the response. I think everyone's thoughts here accurately reflect the fact that all of the options seem to carry upsides and downsides. Let me know how best I can help next there.

Separately, apologies for the lack of bandwidth; I'm still in the rabbit hole of trying to get QEMU patched so podman works better on mac etc etc etc. Based on their release cycle, with any luck that'll get wrapped up before the end of this year, QEMU will handle volumes better by the first major point release of 2022, and podman will use that shortly after. Either way, vanilla Podman should be working correctly with volume mounts on Mac from there, which ultimately provides one performant Mac dev workflow alternative to Docker Desktop. Some of the Dockerfile organization details still unresolved here will mainly help expedite the additional option of Lima/Rancher on Mac, too.
@metasoarous quick update here: with the 7.0 release of QEMU (soon to be backported to 6.2), podman will be able to mount volumes on Mac, which will enable the dev workflow to work with docker compose.
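The macOS flow that should unlock looks roughly like this, assuming podman 4.x with the patched QEMU (the socket path below is a guess; `podman machine start` prints the real one):

```sh
# Create and start the QEMU-backed Linux VM that podman manages on macOS
podman machine init
podman machine start

# Point docker-compose at podman's Docker-compatible API socket.
# The actual path is printed by `podman machine start`; this one is an assumption.
export DOCKER_HOST="unix://$HOME/.local/share/containers/podman/machine/podman.sock"

# With working volume mounts, the existing compose-based dev workflow applies
docker-compose up --build
```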