`:latest-dev` fails on image updates and is removing containers #1809
Comments
Hi there! 👋🏼 As you're new to this repo, we'd like to suggest that you read our code of conduct as well as our contribution guidelines. Thanks a bunch for opening your first issue! 🙏
Same problem for me.
@spupuz were you using any of the ENVs mentioned in step 2 above? Please provide your environment information as above (may not be necessary, but might as well while waiting).
Thanks for reporting this! This was just me not testing "unaffected" containers (and forgetting that everything can be…
Also, this morning I got 2 installations with different container updates, but the containers were not recreated.
Ta, working.
Spoke too soon. The first part was resolved (other containers are no longer being removed on update), but I now have 93 duplicate Watchtower containers as a result of the failed Watchtower updates:
Are you saying that it keeps spawning new ones?
Yes. From the logs it seems that the old container isn't stopped, so it can't be removed.
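A quick way to see the duplicates and their states (a sketch, assuming all the duplicates contain "watchtower" somewhere in their names):

```shell
# List all containers (running or not) whose name matches "watchtower",
# showing ID, name, and status so stuck or exited duplicates stand out.
docker ps -a --filter "name=watchtower" --format "table {{.ID}}\t{{.Names}}\t{{.Status}}"
```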
@ilike2burnthing You said that

Did that not include the watchtower instances? As long as you still have the old broken one, it might continue to misbehave, although it should be removed by the new watchtower instance. Looking at @NonaSuomy's logs, it seems like the "random" names are not so random, and they conflict with an old failed update... Perhaps we can fix it by altering the random-name algorithm. If you want to just fix your installs right now, remove all watchtower instances and create a new one with the current `:latest-dev` image.
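A sketch of that manual fix, assuming every duplicate has "watchtower" in its name and nothing else does (adjust the filter to your naming):

```shell
# Force-remove all watchtower containers, including stopped/broken duplicates
docker rm -f $(docker ps -aq --filter "name=watchtower")

# Pull the current :latest-dev image, then recreate with your usual run command
docker pull containrrr/watchtower:latest-dev
```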
The only duplicates were Watchtower containers, so yes, I removed all of them. The two main error messages from my logs above were:

and

The other containers' logs showed errors that their container names were also in use. The Watchtower container is set to…
On your last note, mine are all set to `restart: no`, as it conflicts with my docker-net-DHCP plugin, causing them all to lock up at boot because of a bug in their code base.
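For reference, the effective restart policy of any container can be checked with `docker inspect` (the container name here is just an example):

```shell
# Print the restart policy: "no", "always", "unless-stopped", or "on-failure"
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' watchtower
```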
Describe the bug
Following #1800 and #1801 being merged, `:latest-dev` is failing on image updates, resulting in containers being removed without being replaced.

Likely related: when Watchtower updated itself, it (presumably) failed repeatedly for hours, resulting in 96 copies of the Watchtower container being made, CPU usage being pinned at 99%, and my NAS becoming unresponsive. I (eventually) stopped all containers, removed the duplicates, started my normal containers again, and pulled a new Watchtower image. The issue didn't reoccur, so I put it down to a corrupted update or a temporary issue on my end. However, since then 4 containers have been removed, with related errors in Watchtower's logs.
Steps to reproduce

1. Run the `:latest-dev` container (a run sketch follows this list).
2. `WATCHTOWER_CLEANUP`, `WATCHTOWER_INCLUDE_RESTARTING`, and `WATCHTOWER_INCLUDE_STOPPED` were set to `true`.
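A minimal run sketch matching those steps, assuming the usual Docker socket mount and the upstream `containrrr/watchtower` image (the container name is arbitrary):

```shell
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_CLEANUP=true \
  -e WATCHTOWER_INCLUDE_RESTARTING=true \
  -e WATCHTOWER_INCLUDE_STOPPED=true \
  containrrr/watchtower:latest-dev
```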
Expected behavior
Updates behave normally.
Screenshots
No response
Environment
Your logs
Additional context
Apologies for the lack of debug logs; I don't really want to try to repeat this.