Mailcow on podman does not work and produces a funny error message #11719
Maybe related to #11493 |
Can you start the podman service with |
I changed the service file.
After that I reloaded the Systemd daemon and restarted the podman service:
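A minimal sketch of that reload-and-restart step, assuming the stock podman.socket/podman.service systemd units (the exact commands from this comment are not preserved above):

# Reload unit files after editing the service file, then restart the Podman API units
systemctl daemon-reload
systemctl restart podman.socket podman.service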
After that I deleted all containers, networks, volumes, etc. and rebuilt them. Gather logs:
|
Does it work after you run |
Nope. Same error. I did:
Message:
|
Any chance that you could try this with #11751? |
Thanks for that! Update: |
Now I compiled the main branch from https://github.com/containers/podman.git:

[root@localhost podman]# podman --version
podman version 4.0.0-dev

But I got the same error:

Creating mailcowdockerized_unbound-mailcow_1 ... done
Creating mailcowdockerized_redis-mailcow_1 ... done
Creating mailcowdockerized_sogo-mailcow_1 ... done
Creating mailcowdockerized_clamd-mailcow_1 ... done
Creating mailcowdockerized_memcached-mailcow_1 ... done
Creating mailcowdockerized_watchdog-mailcow_1 ... done
Creating mailcowdockerized_olefy-mailcow_1 ... done
Creating mailcowdockerized_dockerapi-mailcow_1 ... done
Creating mailcowdockerized_solr-mailcow_1 ... done
Creating mailcowdockerized_mysql-mailcow_1 ... done
Creating mailcowdockerized_php-fpm-mailcow_1 ... done
Creating mailcowdockerized_nginx-mailcow_1 ... error
Creating mailcowdockerized_postfix-mailcow_1 ...
Creating mailcowdockerized_dovecot-mailcow_1 ...
Creating mailcowdockerized_dovecot-mailcow_1 ... error
Creating mailcowdockerized_postfix-mailcow_1 ... done

ERROR: for mailcowdockerized_dovecot-mailcow_1 Cannot start service dovecot-mailcow: error configuring network namespace for container 7c645ef1024fab8ea4706c66d7374dd9b769c8cb16fe57d09f135ce669b9dfab: error adding pod mailcowdockerized_dovecot-mailcow_1_mailcowdockerized_dovecot-mailcow_1 to CNI network "mailcowdockerized_mailcow-network": failed to allocate for range 0: requested IP address 172.22.1.250 is not available in range set 172.22.1.1-172.22.1.254
ERROR: for nginx-mailcow Cannot create container for service nginx-mailcow: container create: invalid IP address : in port mapping
ERROR: for dovecot-mailcow Cannot start service dovecot-mailcow: error configuring network namespace for container 7c645ef1024fab8ea4706c66d7374dd9b769c8cb16fe57d09f135ce669b9dfab: error adding pod mailcowdockerized_dovecot-mailcow_1_mailcowdockerized_dovecot-mailcow_1 to CNI network "mailcowdockerized_mailcow-network": failed to allocate for range 0: requested IP address 172.22.1.250 is not available in range set 172.22.1.1-172.22.1.254
ERROR: Encountered errors while bringing up the project. |
Can you run |
Sorry Luap, I did something wrong. I applied your rebase, but I had already installed podman-docker, and podman-docker requires an old podman version, so dnf downgraded my podman 4.0.0-dev build to the old podman version. Maybe you can help me with that.

# Install Dependencies
dnf install -y go git
dnf groupinstall -y "Development Tools"
subscription-manager repos --enable=codeready-builder-for-rhel-8-x86_64-rpms
# Compile Podman
cd ~
git clone https://github.com/containers/podman.git
cd podman
git pull --rebase https://github.com/Luap99/libpod net-alias
make package-install
systemctl enable podman.socket --now
curl -L https://github.com/docker/compose/releases/download/$(curl -Ls https://www.servercow.de/docker-compose/latest.php)/docker-compose-$(uname -s)-$(uname -m) > /usr/local/sbin/docker-compose
chmod +x /usr/local/sbin/docker-compose

Normally I would install now

[root@localhost podman]# export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock
[root@localhost podman]# curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping
curl: (7) Couldn't connect to server
[root@localhost podman]#

Maybe you have a hint for me on how to compile the podman-docker package against my new podman version. |
You can just run |
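For reference, a minimal sketch of pointing docker-compose at the root Podman API socket (the paths assume a root session; this is an illustration, not the exact command from the comment above):

# Enable the root Podman API socket and test it the same way Docker's socket is tested
systemctl enable --now podman.socket
export DOCKER_HOST=unix:///run/podman/podman.sock
curl -H "Content-Type: application/json" --unix-socket /run/podman/podman.sock http://localhost/_ping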
Hello Luap, sorry for the delay, but I have had a lot to do lately. Now I was able to use your hint with the socket. The socket works:

[root@localhost mailcow-dockerized]# curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping
OK[root@localhost mailcow-dockerized]#

If I try the

Creating mailcowdockerized_unbound-mailcow_1 ... done
Creating mailcowdockerized_sogo-mailcow_1 ...
Creating mailcowdockerized_redis-mailcow_1 ... error
Creating mailcowdockerized_solr-mailcow_1 ... done
Creating mailcowdockerized_sogo-mailcow_1 ... done
Creating mailcowdockerized_dockerapi-mailcow_1 ... done
Creating mailcowdockerized_memcached-mailcow_1 ... done
Creating mailcowdockerized_watchdog-mailcow_1 ... done
Creating mailcowdockerized_clamd-mailcow_1 ... done
Creating mailcowdockerized_mysql-mailcow_1 ...
Creating mailcowdockerized_mysql-mailcow_1 ... done
Creating mailcowdockerized_postfix-mailcow_1 ...
Creating mailcowdockerized_dovecot-mailcow_1 ... error
Creating mailcowdockerized_postfix-mailcow_1 ... done

ERROR: for mailcowdockerized_redis-mailcow_1 Cannot start service redis-mailcow: plugin type="bridge" failed (add): cni plugin bridge failed: failed to allocate for range 0: requested IP address 172.22.1.249 is not available in range set 172.22.1.1-172.22.1.254
ERROR: for redis-mailcow Cannot start service redis-mailcow: plugin type="bridge" failed (add): cni plugin bridge failed: failed to allocate for range 0: requested IP address 172.22.1.249 is not available in range set 172.22.1.1-172.22.1.254
ERROR: for dovecot-mailcow Cannot start service dovecot-mailcow: plugin type="bridge" failed (add): cni plugin bridge failed: failed to allocate for range 0: requested IP address 172.22.1.250 is not available in range set 172.22.1.1-172.22.1.254
ERROR: Encountered errors while bringing up the project.
[root@localhost mailcow-dockerized]#

But the error message is different from before. |
@Luap99 Cheers and a nice weekend, |
A friendly reminder that this issue had no activity for 30 days. |
@Mordecaine Sorry, I do not have time to debug this issue further. We are currently working on a new network backend called netavark which hopefully also fixes this issue. |
A friendly reminder that this issue had no activity for 30 days. |
Still interesting |
This might be fixed with the new network redesign. |
Should be fixed in 4.0 with netavark; re-open if not. |
@baude - Unfortunately, netavark and Podman4 result in a similar issue. Here's the output of docker-compose:
This is on a fresh server running commands as non-root administrative account.
|
Do you run docker-compose against the root socket? Your (rootless) podman info says |
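As background for the root-versus-rootless distinction being asked about here, a rough sketch of where each API socket usually lives (these are the common default paths, not quoted from this thread):

# Rootless Podman API socket (one per user)
ls -l /run/user/$UID/podman/podman.sock
# Root Podman API socket
sudo ls -l /run/podman/podman.sock
# docker-compose talks to whatever DOCKER_HOST points at (it defaults to /var/run/docker.sock)
echo "${DOCKER_HOST:-unix:///var/run/docker.sock}"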
Understood. I have not rebooted this server after installing Podman4, but I did reboot my production server after the upgrade and ran into the same error I posted above. I just checked, and my production server is still using CNI as well. I followed this guide to upgrade from Podman3.4 to 4.0.2: https://podman.io/blogs/2022/02/04/network-usage.html

How can I force root to use the new netavark instead of CNI?

Edit: I also want to mention that I stopped all my systemd services, removed all containers and local images, and then ran the podman system reset command before the upgrade on my production server. I then performed another dnf update and rebooted. Running Fedora 35 Server. |
A reboot should not be required. If you run Also did you run |
Thanks, I'm going to work through that now. On my production system I was logged in as root when I ran the reset. This was after all systemd services were disabled, all containers were manually torn down, and all image stores were removed. I was thinking another reset was in order as well. I ran it again on my test system and it appears netavark is the default when running "sudo podman info". I'm going to go through my production system again and clear out the /etc/cni/net.d folder. There are some other files in there besides the default you mention. I'll report back once that's complete! |
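A rough sketch of that cleanup, assuming root and the default CNI config directory (the filename shown is an example of a leftover podman CNI config, not taken from this thread):

# Inspect root's CNI config directory for leftover network definitions
sudo ls -l /etc/cni/net.d/
# Remove stale .conflist files left over from the CNI backend (keep anything still needed)
sudo rm /etc/cni/net.d/87-podman-bridge.conflist
# Then reset podman's state so the backend is re-detected
sudo podman system reset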
It appears my root is still using the CNI backend:
That lock file reappears after a reset; I assume it is a required lock file. We can move this to a new issue if needed, as this is no longer specific to the original title. |
Yeah, the lockfile will be recreated every time; I was mostly worried about other .conflist files. The only other reason I can think of for why it is not choosing netavark is that it is not installed, but this cannot be the case since it worked in rootless mode, so it is definitely installed. To force netavark you can set it in containers.conf, add the following in
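A minimal sketch of that containers.conf change, assuming the standard network_backend key used by Podman 4 (the exact snippet from this comment is not preserved above):

# Force the netavark backend for root Podman; the key must live in the [network] table
mkdir -p /etc/containers
cat >> /etc/containers/containers.conf <<'EOF'
[network]
network_backend = "netavark"
EOF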
|
Well, that's strange; that file does not exist on the system. I did read through the podman doc and saw that reference, but wasn't sure if it may have been installed elsewhere. Creating it with just that entry gives this error:

I'm going to find a standard config file, build one from scratch, and see if that helps anything. It appears my test system also does not have this file present.

Edit: I found a standard config, but it's missing the "helper_binaries" section. Where is netavark installed, so I can put that in as a helper_binaries_dir variable?

Edit2: I'm deploying a new test server, Fedora 35 with the root account enabled, to test this with. I will be installing Podman4 out of the gate and can try to reproduce what we've done so far. |
I guess netavark is not installed; run |
Hey, that would help!
Should this not be installed as part of the Podman4 upgrade?

Edit: After installing and performing another "podman system reset", podman shows the proper backend as netavark! |
If you already have podman installed, it will keep CNI to avoid breaking too many people. |
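A rough sketch of what installing the backend and re-checking it can look like on Fedora (the package names and the info format string are assumptions, not quoted from this thread):

# Install netavark (and its DNS companion), wipe old network state, then re-check the backend
sudo dnf install -y netavark aardvark-dns
sudo podman system reset
sudo podman info --format '{{.Host.NetworkBackend}}'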
Okay, thanks for the help getting that sorted out; I've made some progress. I turned up a brand-new Fedora 35 server, installed Podman4, it pulled down netavark on the initial upgrade, and all is well there. I pulled down mailcow-dockerized from their GitHub to the local system and got it running, but had to make a few tweaks to their compose file for it to work. To get it working I had to modify the following lines:

Comment out line 513 on the dockerapi-mailcow container:
This caused the following error after running docker-compose if left enabled:

Directly map ports to the nginx-mailcow container.

Changed variables:
The original variables would error out with the following:
At this time, all of the containers are now running with the exception of netfilter-mailcow and ipv6nat-mailcow. Those were created but are crashing after one second. I'm new to Mailcow though, and am not certain if these are required for services to actually run. This is probably beyond a Podman issue now, as the containers can start. I can probably move on to the Mailcow GitHub and open an issue with them if you feel the above is working as intended? Perhaps the OOM issue needs to be looked into?

Edit: I forgot to mention this was all completed rootful, as the root account. I'm honestly still learning the ins and outs of rootless and wanted to eliminate that as a possible blocking point in my test. |
On Docker, ipv6nat-mailcow is not needed when Docker is running natively with IPv6, or if you don't care about IPv6: https://mailcow.github.io/mailcow-dockerized-docs/post_installation/firststeps-disable_ipv6/

netfilter-mailcow is quite important for a public mail server; it's like a fail2ban service. It prevents brute force and similar attacks. |
Thanks. I figured IPv6 wasn't needed, as I do have it disabled on the host and did find that document prior to deployment, but I wanted to just get the stack working before any major changes. NetFilter definitely sounds important and is no longer crashing after one second now that IPv6 is disabled in its entirety (including the ipv6nat container). NetFilter is still crashing after about 20 or so seconds, and so is NGINX after changing the port mapping to a direct 80:80 and 443:443 mapping. It seems there are still just a couple of tweaks to make to get it up properly. Going to keep plugging away, as this is pretty close to working! |
Technically speaking, all issues with the compose file against podman are podman bugs. We are trying to match the Docker API; there are a few exceptions, the biggest being that we do not support Docker Swarm. If you could create a small reproducer for both problems and open separate issues for them, this would help get them fixed. |
So the Nginx error is resolved, somewhat. There are HTTP_BIND/HTTPS_BIND variables in mailcow.conf that these point to. The conf file says to leave them empty, but in their documentation they have them as 127.0.0.1, pointing back to the host. I entered this into the conf and that error is resolved. NGINX is still crashing for some reason, but at least there are no errors with it when running docker-compose now. I just need to find where to get the logs for that container to see what it's barking about.

OOM is still an issue and will need a separate issue created. Is isolating that to its own docker-compose file sufficient when opening a new issue, or should I re-create it in another way? I'm not super versed in Linux troubleshooting. I've gotten by getting about 20 different services/containers up in Podman on my own, but this is the first actual wall I've hit trying to turn up a new one. |
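Since the comment above mentions hunting for that container's logs, a small sketch of the usual ways to pull them (the container and service names are taken from the compose output earlier in this thread and may differ on other setups):

# Via docker-compose, using the compose service name
docker-compose logs --tail=100 nginx-mailcow
# Or directly via podman, using the full container name
podman logs --tail 100 mailcowdockerized_nginx-mailcow_1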
The podman package will not install netavark when CNI is already installed. Ref: containers/podman#11719 (comment) Signed-off-by: Paul Holzinger <[email protected]>
After some further testing, there's a communication issue between some of the containers. NGINX won't load because it can't connect to PHP-FPM. PHP-FPM won't load because it can't resolve the REDIS container. The docker-compose has them all using an internal unbound container for resolution. I connected to the local unbound and confirmed it was able to resolve outside IP addresses. When pinging the internal REDIS container, it resolves correctly:

I tried adding a static entry under the PHP-FPM container for redis, but this did not resolve the connection issue for the container. It can ping the name in the hosts file, but nslookup resolves it as NXDOMAIN:
I think I'm going to give up on this for now and just deploy on Ubuntu with Docker. If anyone else makes any breakthroughs, I'd be happy to collaborate and test any configurations out.

Edit: I wanted to add that REDIS was able to resolve the PHP-FPM container through unbound just fine, and unbound was able to resolve both REDIS and PHP-FPM as expected.

Edit2: I also disabled SELinux in testing as well. I do not think this was the culprit, due to the way Podman handles it, but wanted to mention that too. |
Alright, I'm stubborn; my whole ecosystem is Fedora/Podman and I'd like to try and keep it that way. I just spun up Ubuntu with Docker and it worked flawlessly following these steps:
I spun up a brand-new Fedora 35 instance again, leaving IPv6 enabled, updated to Podman4, followed the same steps as above (no patch, as the previous OP stated), and ran into the same two issues as before. I had to disable OOM in the docker-compose file, and also point the HTTP_BIND/HTTPS_BIND variables to 127.0.0.1 for docker-compose to work at all. NGINX, IP6NAT and NETFILTER are still crashing as before.

That's it. I'm done for now. Definition of insanity... or is it science? Either way, I'm passing the ball to someone else, as I'm a bit over my head on where to go from here. |
/kind bug
Description:
I tried to run Mailcow on podman with docker-compose. I get the following error:
Steps to reproduce the issue:
docker-compose.yml
because podman is not able to use this: https://mailcow.github.io/mailcow-dockerized-docs/i_u_m_install/
Describe the results you received:
See Description
Describe the results you expected:
All containers should start successfully
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:

Output of podman info --debug:

Package info (e.g. output of rpm -q podman or apt list podman):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)
No
Additional environment details (AWS, VirtualBox, physical, etc.):
Virtual Box