use k8s cluster default imagePullPolicy #50
cc @dmikey
@arno01, this is expected behavior. You need to use a different tag to deploy updates. Images are heavily cached; pushing changes to a single tag can easily result in both versions running at the same time, even if we forced a pull every time.
@boz this should be mentioned in the Akash Docs then, since I've already seen two cases where people could not figure out why their image works at one provider but not at another. And this breaks the default Kubernetes behavior for the `:latest` tag and untagged images.
I would actually be more inclined to let people specify the `imagePullPolicy` in their deployment manifests. And then, most people are going to keep using the same tags, e.g. `:latest`.
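For illustration, a hypothetical sketch of what that could look like on the provider side (this is not the actual Akash SDL schema; the `pullPolicy` helper and its string input are made up):

```go
// Hypothetical sketch: if the deployment manifest exposed an optional
// per-service pull policy, the provider could map it onto the container
// spec and fall back to the Kubernetes default whenever it is left empty.
package builder

import corev1 "k8s.io/api/core/v1"

// pullPolicy maps a manifest-supplied string onto a Kubernetes pull policy.
func pullPolicy(fromManifest string) corev1.PullPolicy {
	switch fromManifest {
	case "Always":
		return corev1.PullAlways
	case "Never":
		return corev1.PullNever
	case "IfNotPresent":
		return corev1.PullIfNotPresent
	default:
		// Unset: the API server defaults it based on the image tag
		// (Always for ":latest"/untagged, IfNotPresent otherwise).
		return ""
	}
}
```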
it is best practice to use a version tag. we can add that to the docs.
It says as much right in your source.
I agree with that recommendation, however the concern I've raised in this issue has nothing to do with it. That is a recommendation for running production containers, which many people do not. As I'm typing this, there is a third occurrence of people struggling because of this, and I'm having to explain to them that Akash providers heavily cache the images and that they should use a new tag for every image update. My point is not to lobby for using the `:latest` tag; it is that the provider silently deviates from the default Kubernetes behavior.

Update 1: Basically, this deviation from the defaults is not just a nuisance but rather a security issue: since images are never re-pulled, a security fix pushed to the same tag never reaches the deployment.
100% agree with @arno01
My recommendation back when mainnet launched was to block the `:latest` tag (and untagged images).
It should guarantee that for the moment someone deploys the app, which isn't working that way as of now because the provider forces `IfNotPresent`.

This is how K8s works and has always worked by default everywhere: `Always` for the `:latest` tag and untagged images, `IfNotPresent` for everything else.

I'd prefer sticking with the widely accepted defaults (for `:latest` and untagged images in particular). And should we want a different behavior, it should be opt-in rather than silently forced.

As a compromise, the `imagePullPolicy` could be made configurable in the deployment manifest.
To get the default image pull policy, we only need to remove this line: https://github.com/ovrclk/akash/blob/v0.16.4/provider/cluster/kube/builder/workload.go#L56

From the doc: if you omit the `imagePullPolicy` field and the tag of the container image is `:latest` or omitted, `imagePullPolicy` is automatically set to `Always`; otherwise it defaults to `IfNotPresent`.

So K8s picks the sensible pull policy on its own when the field is left unset.
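In code terms, a minimal sketch (assuming the builder constructs a `corev1.Container` the way `workload.go` does; `container` here is a simplified stand-in, not the actual function):

```go
package builder

import corev1 "k8s.io/api/core/v1"

// container shows the one-line change: stop forcing a pull policy.
func container(name, image string) corev1.Container {
	return corev1.Container{
		Name:  name,
		Image: image,
		// ImagePullPolicy: corev1.PullIfNotPresent, // <- the forced value this issue asks to drop
		//
		// With the field left unset, API-server defaulting kicks in:
		// Always for ":latest"/untagged images, IfNotPresent for pinned tags.
	}
}
```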
As @tidrolpolelsef said, it's one of those weird config options where 'unset' has its own unique meaning :-)
FWIW, I've tested this further; apparently with `imagePullPolicy: Always` the kubelet only resolves the image digest against the registry and skips the download when the digest hasn't changed: https://asciinema.org/a/541059

The Docker Hub API isn't too restrictive about this either (digest checks via manifest HEAD requests don't count against the anonymous pull rate limit).
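For context, a rough sketch of the digest check that makes `Always` cheap (assuming the public Docker Hub registry API; `library/alpine` is only an example repository):

```go
// Resolve the manifest digest of docker.io/library/alpine:latest with a
// HEAD request; when the digest matches the cached image, no pull happens.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Fetch an anonymous pull token for the example repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest; the digest comes back in a response header.
	req, err := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/library/alpine/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")

	head, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer head.Body.Close()
	fmt.Println("digest:", head.Header.Get("Docker-Content-Digest"))
}
```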
Wanted to update this thread after my discussion with @boz today:
For these reasons we are going to work to allow this. The only concern/downside is making sure we document this behavior clearly and note that a running pod will not be updated unless the pod is terminated (forcing a new pod to be spun up, which will result in the latest image being pulled down) - which I believe is how solutions like the presearch "autoupdater" handle this anyway (their autoupdater is a fork of https://github.com/containrrr/watchtower)
And this (the pod restart) can be easily triggered from within the pod itself or from the outside (through lease-shell, then `kill 1` to terminate the container's main process).
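For the "outside" variant, a minimal client-go sketch (assuming direct kubeconfig access to the provider cluster, which tenants normally don't have; the namespace and pod name are placeholders):

```go
// Restart a workload pod from outside the cluster, so the kubelet
// re-evaluates imagePullPolicy when the replacement pod starts.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Deleting the pod makes its ReplicaSet spin up a fresh one; with
	// imagePullPolicy: Always the kubelet then re-pulls the :latest image.
	err = client.CoreV1().Pods("my-lease-namespace").Delete(
		context.Background(), "my-app-pod", metav1.DeleteOptions{})
	if err != nil {
		panic(err)
	}
}
```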
Completely agree - this isn't just about convenience, it's about not surprising people who expect stock Kubernetes behavior.

+100 for using the cluster default `imagePullPolicy`.
Reproducer

Update: this is mostly about the image pull policy for images with the `:latest` tag and untagged images, as seen from the discussion below. Other tags, such as `test:123`, should stay `test:123` 1:1 (immutable).

Expected behavior:
The provider should pull the new image if it has the `:latest` tag or is untagged.

Actual behavior:
The provider will not pull the new image; it will start the old image.

Workaround:
There is no workaround to a changed default K8s behavior.

Provider in question: