docker daemon version out of date #63
@bacongobbler thanks for reporting this. I have opened an issue in ACS Engine: Azure/acs-engine#1865
closing, let's follow up in that ticket. :)
I’m re-opening this issue for two reasons:
I think we can call this closed once either an AKS cluster can be deployed with newer releases of docker or we document the reasons against that. :)
In my case I want to run my CI build agents inside of my Kubernetes cluster. A more current version of Docker is required in order to do multi-stage builds in a Dockerfile, which is the recommended way to build a .NET Core Docker container.
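For context, a minimal multi-stage Dockerfile sketch for a .NET Core app (image tags and paths are illustrative, not taken from this thread); the second FROM instruction is what requires Docker 17.05+:

```dockerfile
# Build stage: compile and publish with the full SDK image
FROM microsoft/dotnet:2.0-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/out

# Runtime stage: ship only the published output on the slim runtime image
FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "app.dll"]
```

On a 1.13 daemon the second FROM fails, which is why cluster-side builds are blocked here.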
+1 Any idea when this will be available, since the issue has already been resolved in acs-engine?
+1 Any update on the status of that issue?
The current deploy with 1.9.6 is pointing to Docker 1.13, but 17.05 is required for multi-stage builds. This was reported 5 months ago, and yet not a single status update from the AKS team.
@slack @jackfrancis a status update would be appreciated
Instructions for the manual workaround are here: Azure/acs-engine#2589 (comment). But be careful, as it will probably take your cluster out of SLA (not a big deal in the current AKS state; you probably need to create a new cluster each time you need to upgrade or change something anyway).
Just FYI:
from https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies
Sorry for letting this issue age. We are locked to Docker 1.13 (upgraded from 1.12.6). We will not be jumping to 17.03 or 17.05 in AKS. This does disallow cluster-side building of images, but AKS can still run the resulting artifact. If you need cluster-side multi-stage builds, you will need to use acs-engine directly.
closing as answered/wontfix |
Very strange to keep such an old Docker version
Just to clarify: strange isn't quite the right way to interpret the situation. Rather: we agree that it is not ideal that Docker and MS do not have mutually agreeable distribution partnerships for Docker CE, but until that changes we are unable to include Docker CE w/ AKS clusters.
Are you working on getting this partnership then :) ? |
@lkt82 Yes
@jackfrancis This just bit me really hard. What can we as the community do to help this get achieved?
We are stuck here at Engie :(
@aminebizid you can upvote https://feedback.azure.com/forums/914020-azure-kubernetes-service-aks to help us prioritize. thanks!
Is there any movement on this issue?
@marcel-dempers there are some other issues where the AKS team commented that they're switching the underlying docker engine to moby/moby rather than Docker CE/EE to handle this licensing issue. If I'm not mistaken, work on this is "started". I don't think they've set a release timeline yet, though.
We are at the last mile w/ deprecating docker-engine in favor of moby: we want to do a phased rollout, especially to AKS, so bear with us as we set up VHD (pre-baked image) pipelines and introduce this into AKS regions gradually. ETA for availability in acs-engine is the end of this week; for the initial AKS rollout, the week after next (early November).
This will be available behind a feature flag rolling out next week:
After your subscription has been registered for that feature, you'll have to:
To get it onto your existing AKS cluster, you'll have to upgrade or scale in/out after the subscription that owns the cluster is registered for the above feature. For folks on 1.11,
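The elided steps presumably follow the standard az feature-flag workflow; the feature name Microsoft.ContainerService/MobyImage appears later in this thread. A sketch (requires an Azure subscription, so not runnable as-is):

```shell
# Register the MobyImage feature flag on the subscription
az feature register --namespace Microsoft.ContainerService --name MobyImage

# Poll until the state reads "Registered"
az feature list --namespace Microsoft.ContainerService -o table

# Re-register the resource provider so the flag takes effect
az provider register -n Microsoft.ContainerService
```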
Hi @jackfrancis, I don't understand: would it be possible to upgrade the Docker daemon inside an existing AKS cluster? And when? The currently installed Docker 1.13.1 does not allow ARG before FROM; we need something above ~17.09. Would it break everything if upgraded manually? Thanks in advance
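The ARG-before-FROM pattern in question looks like this (a hypothetical fragment; the tag and image name are illustrative):

```dockerfile
# ARG before the first FROM is rejected by the 1.13.1 daemon
ARG BASE_TAG=2.0-runtime
FROM microsoft/dotnet:${BASE_TAG}
```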
@DBarthe If the subscription that manages that cluster is registered for the feature, then yes, a cluster upgrade will get you vm nodes that have moby as docker runtime. What region is your cluster in?
@jackfrancis Hey there. How long does registration for the features take? Or is this the region roll out you were referring to and I just misunderstood? I've run
My AKS cluster is in West Europe.
@DinoSourcesRex could you share the output? The release hasn't landed in westeurope yet. ETA: within the next 24 hours
Output from az provider show -n Microsoft.ContainerService - guids
Actually, sorry @DinoSourcesRex, try the
@jackfrancis I'm marked as "registered" on that one, however when I run
as the response.
@jackfrancis 1.11.4 landed here. Will the upgrade change to moby, or do I still have to enable the feature with the steps provided above?
@DinoSourcesRex I get the same behaviour ("Registering is still on-going") but I can't see 1.11.4 offered as an upgrade option yet (I'm in West Europe). I'm assuming/guessing that these are related, so I'll try again once 1.11.4 is visible.
@DaveAurionix Bit of a daft question, but how do I check to see if / when 1.11.4 is available? And how do I check what I'm currently running?
I used both. GUI way: take a look in the Azure Portal at your Kubernetes Service resource, click the Upgrade tab on the left, and see the current (and possible) versions in the drop-down. CLI way: do an
Note that in both cases the resource group to check/use is not the auto-generated resource group (starting with MC_) with the VMs, load balancer, etc.; instead it's the resource group you created to contain your managed cluster resource.
@DaveAurionix you can pass just the location :-)
Way easier :)
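The location-only form alluded to above is presumably az aks get-versions; a sketch (region and resource names are placeholders, and an Azure login is required, so this is not runnable as-is):

```shell
# Orchestrator versions available in a region, no cluster name needed
az aks get-versions --location westeurope -o table

# Version an existing cluster is currently running
az aks show -g <resource-group> -n <cluster-name> --query kubernetesVersion -o tsv
```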
Awesome, thanks guys. @edernucci Did you do anything special to get yours formatted as a table? Mine comes back as JSON, which I can read, but the table is a lot nicer to look at!
@DinoSourcesRex It's
Thanks guys, learning a lot here!
@jackfrancis There's something I did not quite understand. When I enable the moby feature for my subscription, will all my clusters use moby instead of Docker after the nodes are recreated? I ask because I have three clusters and would like to roll it out only in development, to see how it will behave.
@edernucci, the feature flag functionality is per-subscription, so any cluster operations (create/upgrade/scale) using that subscription will be paved w/ vm nodes that have the moby docker runtime installed. Thanks all for the community spirit here!
@DinoSourcesRex If you like the table view and want it to be the default, you can set it using
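A sketch of setting table as the default output (az configure prompts interactively; the direct az config set form exists only in newer CLI versions):

```shell
# Interactive: choose "table" when prompted for the output format
az configure

# Newer CLI versions can set it directly
az config set core.output=table
```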
So when using the new MobyImage, how do we tell exactly what version of Docker is installed? I see this:
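One way to see which runtime each node reports (standard kubectl; containerRuntimeVersion is a field of the Node API's nodeInfo, though the exact jsonpath expression here is my own):

```shell
# Runtime and version show in the CONTAINER-RUNTIME column
kubectl get nodes -o wide

# Or pull just that field per node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```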
Hi there,
➜ ~ az feature list --namespace Microsoft.ContainerService -o table
Name RegistrationState
-------------------------------------------------------- -------------------
Microsoft.ContainerService/MobyImage Registered
➜ ~ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks-agentpool-26745425-0   Ready     agent     2h        v1.11.4   10.240.0.5    <none>        Ubuntu 16.04.5 LTS   4.15.0-1030-azure   docker://1.13.1
@timwebster9, that docker version correlates with the moby package, so you're 👍
@laurentgrangeau that's not the expected outcome if you were registered before you upgraded. :( Are you able to build a new cluster on the registered sub and report whether you still get 1.13.1?
@jackfrancis where can you see how the reported Docker version correlates to the moby package version?
@jackfrancis I just created a new cluster, but I got the same. Maybe the Moby image has not rolled out yet in North Europe.
➜ .kube az aks list -o table
Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
--------- ----------- --------------- ------------------- ------------------- ---------------------------------------------------------
mobyhotei northeurope hotei 1.11.4 Succeeded mobyhotei-hotei-001ba5-b861c86c.hcp.northeurope.azmk8s.io
➜ ~ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks-nodepool1-63134054-0   Ready     agent     4m        v1.11.4   10.240.0.5    <none>        Ubuntu 16.04.5 LTS   4.15.0-1030-azure   docker://1.13.1
After upgrading my AKS (it said I was on version 1.11.4 on the Upgrade page), every time I tried to scale my AKS, I had the following error:
Logging out and logging back in solved the issue. I hope it can help others if they face this issue.
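A sketch of that workaround (resource group, cluster name, and node count are placeholders, so this is illustrative rather than runnable as-is):

```shell
# Refresh stale CLI credentials, then retry the scale operation
az logout
az login
az aks scale -g <resource-group> -n <cluster-name> --node-count 3
```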
Provisioning an AKS cluster with v1.8.2 nodes shows that the kubelets are running Docker v1.12.6. It'd be great for them to be running 17.06.0-ce, which is minikube's underlying Docker version as of v0.24.0.