This issue was moved to a discussion. You can continue the conversation there.
[Discussion] Better service management #46
Comments
Hi @boostchicken, just a progress report: I got a working container with systemd and nested podman running. The idea would be to start only this container via the systemd of unifi-os through the ssh_proxy. I'm working on packaging this test into a deb package so we can test it a bit more. The network part especially is not tested yet; for now, assume this works somehow if we use the host network.
Just as a reference: #50
@spali I've been super busy lately, and your PR is huge; let me give it a look. I'll try to this weekend.
@boostchicken In short, it completely separates podman and the custom containers by running them in their own container.

Before merging, I would also like to improve the podman network part. The simplest solution would be to configure the default network as `host`, to prevent podman from creating iptables entries by default and avoid issues like #49. Maybe we should do this anyway, independent of the other network stuff, for users who just want to run a simple service container like ntp.

My first idea was to implement a macvlan with IPAM DHCP to allow the containers to get their IP dynamically from the UDM itself. But because macvlan can't talk to the host itself, this doesn't work without an external DHCP server. The second try was a bridge interface with IPAM set to dhcp and the CNI dhcp daemon (the daemon itself is already included in this PR). If we can get a bridged network working with a fixed MAC address per container so it plays well with DHCP, this would simplify the networking part a lot for users who just want to run containers with a dedicated IP for the DNS stuff.
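To make the second (bridge + DHCP) idea concrete, a CNI network definition along these lines might work. This is only a sketch: the file path, network name, bridge name, and MAC are illustrative assumptions, not taken from the PR, and it relies on the CNI `dhcp` IPAM daemon running on the host.

```shell
# Sketch of the bridge-plus-DHCP idea (all names are illustrative).
# On the UDM this file would live under /etc/cni/net.d/; we write to /tmp here.
mkdir -p /tmp/cni-sketch
cat > /tmp/cni-sketch/90-udmbridge.conflist <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "udmbridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "br-udm",
      "ipam": { "type": "dhcp" }
    }
  ]
}
EOF
# The dhcp IPAM plugin delegates to a daemon that must be started separately, e.g.:
#   /opt/cni/bin/dhcp daemon &
# A fixed MAC per container keeps the DHCP lease (and thus the IP) stable:
#   podman run -d --network udmbridge --mac-address 0a:bb:cc:dd:ee:01 <image>
echo "wrote $(wc -c < /tmp/cni-sketch/90-udmbridge.conflist) bytes"
```

The fixed MAC is what makes the DHCP lease survive container restarts, which is the piece that matters for the DNS use case.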
This is an exciting development!

In fact, if you combine the two it makes for a whole user-friendly package, and instructions/references could then be much simpler.

Also, I can see use in having both the "legacy" boot script on the bare-metal UDM and the systemd-based one (in the udm-boot container) in place, similar to how systemd still works with SysV init.d: one is much more familiar and simpler to use, but more importantly it allows the use of both the native podman (or other boot scripts) and a more advanced mechanism in the udm-boot container.

With that said, I'd love to help out, though there hasn't been any activity on this in three months, so I would hate to start making changes or coming up with ideas for either branch if additional progress has been made behind the scenes by either of you.
@senseisimple Currently there are two things that I would like to have solved.

So if you have ideas or time to help out here, let me know.
@spali That only solves half of the problem, though. That is where the abandoned udm-launcher comes in. I hate to make yet another package manager, but short of stealing deb packages and moving them back up to the main OS from the unifi-os container, there is nothing.

@senseisimple @spali I leave this to you guys; I might have time to pitch in here shortly. If a decision is made on how we are going to move forward, I will support it. I still like the idea of the udm-launcher app that can just read config files and apply networking and file changes. Makes it dead simple.
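Purely to make the udm-launcher idea concrete, a declarative config such an app might read could look like this. This is entirely speculative: no such format exists, and every key and value below is a made-up illustration of "config files that apply networking and file changes".

```
# /data/udm-launcher/ntp.conf (hypothetical format)
[container]
name  = ntp
image = docker.io/example/ntp:latest    ; illustrative image name

[network]
mode = bridge
mac  = 0a:bb:cc:dd:ee:02                ; fixed MAC so a DHCP lease stays stable

[files]
/etc/chrony.conf = ./chrony.conf        ; copied onto the host at boot
```

The appeal is that a user never touches podman flags or boot scripts directly; the launcher translates this into the right `podman run` invocation and file operations.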
In my eyes, basic podman knowledge should be expected from the users. But I think the UDM-specific stuff can be made easier for them, and maybe the more advanced stuff like networking can be abstracted as well.
I would like to add my opinion: I would really like the option to revert to the old non-container way, without installing the boot container. I already have a custom container that I use to run all my custom services, created with my own parameters. I do not need another one that I did not create. I prefer not to use docker excessively and have lots of containers on my system, so I like to put everything in one custom container.

This boot script gave me an easy solution to start my custom container on boot, or even do other custom non-container setups, like modifying system configuration files on startup or running compiled golang programs directly without a container. If you force us to install the udm-boot container, then I will have an extra container installed, just wasting space and resources.

If the non-container way works properly without issue, I would really appreciate the option to not install the udm-boot container and just use the old way. It was versatile, could be adapted for anything, and worked very well for many use cases, not just container startups. I appreciate the work you put into all this, but I prefer having the option because the new way doesn't fit all my use cases, or makes solving them unnecessarily complex.
@peacey On the technical side: you could also just live with the container, as it needs almost no resources when running idle, apart from a bit of space. Just to say, it would be doable, but in my eyes it breaks the modularized approach that should work for anyone.

In the end John has to decide; he still has to decide how to proceed with this PR in general anyway. I think it's a much cleaner way to have the modularized approach: completely separate and encapsulate everything in its own container to keep the UDM itself as clean as possible, especially for services, not just scripts to hack around on the UDM.

For me, the motivation is that I only have one UDM and it's in production. So I want a "kill switch" for everything that just works, but with maximum flexibility. Worst case for me would be that I split off my variant completely, which would also be a lot less work on my side ;)

Just to be clear, I don't feel offended and I really understand your point; I just wanted to explain a bit of the background from my view.
I made an issue for the discussion that started in #45.

I would like to implement clean service management, which also supports scheduling, instead of just the current `on_boot.sh` one-shot script that executes other scripts.

The first idea was to use systemd, supervisor, or some other "process manager" inside a container to start the stuff that is currently in `on_boot.d`. We could also reuse the systemd service from the unifi-os container. But both solutions have the same problem: podman is only available on the host, so we need to ssh from inside the container to the host to start containers.

Ubiquiti has already solved part of the problem with an `ssh_proxy` command, which can be used to execute commands on the host (UDM) outside the unifi-os container. But then we need to prefix every command execution in the service files with this ssh proxy. So the challenge would be to make this transparent for the service definition (unit files in the case of systemd).
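To illustrate the prefixing problem, a unit file inside unifi-os would have to look something like the sketch below. The service name, container name, and `ssh_proxy` path/invocation syntax are all assumptions for illustration, not taken from the UDM.

```
# /etc/systemd/system/my-container.service (inside unifi-os, hypothetical)
[Unit]
Description=Start a custom container on the UDM host
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
# every podman call has to be wrapped in ssh_proxy to reach the host
ExecStart=/usr/bin/ssh_proxy podman start my-container
ExecStop=/usr/bin/ssh_proxy podman stop my-container

[Install]
WantedBy=multi-user.target
```

This repeated `ssh_proxy` boilerplate in every `Exec*` line is exactly what a transparent solution should hide.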
One possible compromise could be to "wrap" only the `podman` command inside the unifi-os container with a shell script that uses the `ssh_proxy` and passes everything through to the podman on the host. Everything else (scripts etc.) would be executed inside the unifi-os container.

Another possibility would be to somehow run our own podman, including systemd, inside a `udm-boot` container. Not quite sure how easy this is to implement, but it would have the advantage of being more isolated, so as not to mess too much with the UDM host and the unifi-os container.

Any other ideas, improvements or comments?
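The wrapper compromise can be sketched in a few lines of shell. All paths here are hypothetical, and the real `ssh_proxy` invocation syntax on the UDM may differ; to make the shim demonstrable anywhere, a stand-in `ssh_proxy` is stubbed out that just echoes what would run on the host.

```shell
# Sketch of the podman shim idea (hypothetical paths; ssh_proxy syntax assumed).
mkdir -p /tmp/demo/bin

# Stand-in for Ubiquiti's ssh_proxy, so the shim can be exercised anywhere:
cat > /tmp/demo/bin/ssh_proxy <<'EOF'
#!/bin/sh
echo "host would run: $*"
EOF

# The shim itself, which inside unifi-os would shadow the real podman,
# e.g. as /usr/local/bin/podman: it forwards every argument to the host.
cat > /tmp/demo/bin/podman <<'EOF'
#!/bin/sh
exec ssh_proxy podman "$@"
EOF

chmod +x /tmp/demo/bin/ssh_proxy /tmp/demo/bin/podman
PATH=/tmp/demo/bin:$PATH podman ps --all
# -> host would run: podman ps --all
```

With such a shim in place, unit files and scripts inside unifi-os could call `podman` as if it were local, and the ssh indirection disappears from every service definition.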