No such container: weaveproxy / weaveplugin #2628

Closed

andrey01 opened this issue Jun 21, 2017 · 8 comments
@andrey01

Dear Devs,

We have noticed the following messages appearing on our machines every 10 seconds:

journalctl -f

Jun 21 16:26:19 srv-master01 docker[12463]: time="2017-06-21T16:26:19.709531017+02:00" level=error msg="Error setting up exec command in container weaveproxy: No such container: weaveproxy"
Jun 21 16:26:19 srv-master01 docker[12463]: time="2017-06-21T16:26:19.709960879+02:00" level=error msg="Handler for POST /containers/weaveproxy/exec returned error: No such container: weaveproxy"
Jun 21 16:26:19 srv-master01 docker[12463]: time="2017-06-21T16:26:19.710237838+02:00" level=error msg="Handler for GET /containers/weaveplugin/json returned error: No such container: weaveplugin"

Jun 21 16:26:29 srv-master01 docker[12463]: time="2017-06-21T16:26:29.710678227+02:00" level=error msg="Error setting up exec command in container weaveproxy: No such container: weaveproxy"
Jun 21 16:26:29 srv-master01 docker[12463]: time="2017-06-21T16:26:29.710714574+02:00" level=error msg="Handler for POST /containers/weaveproxy/exec returned error: No such container: weaveproxy"
Jun 21 16:26:29 srv-master01 docker[12463]: time="2017-06-21T16:26:29.710867804+02:00" level=error msg="Handler for GET /containers/weaveplugin/json returned error: No such container: weaveplugin"

This started happening when weaveworks/scope was deployed.

Any pointers as to what could be causing this?
We are using our own Docker registry, so instead of image: weaveworks/weave... we are using image: our-registry.com/weaveworks/weave

Our versions are:
kubernetes 1.6.5
weaveworks/scope 1.5.0

To deploy weave-net we apply https://github.com/weaveworks/weave/blob/master/prog/weave-kube/weave-daemonset-k8s-1.6.yaml

To deploy weaveworks/scope we use the manifest file generated by https://cloud.weave.works/k8s/v1.6/scope.yaml?k8s-service-type=NodePort
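
For reference, a minimal sketch of how that generated manifest might be fetched and applied (the exact curl/kubectl invocation is illustrative, not copied from our setup):

# Fetch the generated Scope manifest and apply it to the cluster
curl -sL 'https://cloud.weave.works/k8s/v1.6/scope.yaml?k8s-service-type=NodePort' -o scope.yaml
kubectl apply -f scope.yaml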

I have found weaveplugin and weaveproxy referenced in the Scope sources:

c, err := w.dockerClient.InspectContainer("weaveplugin")

Container: "weaveproxy",

var systemImagePrefixes = map[string]struct{}{
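
These lookups can be reproduced by hand with the Docker CLI; since neither container exists when weave runs as a CNI plugin, both calls fail just like the periodic requests in the dockerd log above (the exec payload here is only a placeholder):

# Both commands fail with "No such container", matching the
# GET /containers/.../json and POST /containers/.../exec errors in the journal
docker inspect weaveplugin
docker exec weaveproxy true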

Thanks in advance for any pointers.

Let me know if you need more details.

Kind regards,
Andrey Arapov

@2opremio
Copy link
Contributor

You can silence those errors by running the scope agents with --weave=false
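
For illustration, one way to pass that flag (the DaemonSet name and namespace below are assumptions about the generated manifest; adjust them to match yours):

# Append --weave=false to the scope agent container's args via a JSON patch
kubectl -n weave patch daemonset weave-scope-agent --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--weave=false"}]'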

@andrey01
Copy link
Author

andrey01 commented Jun 22, 2017

Thanks @2opremio, but what is the purpose? Is Scope monitoring the weave status via those containers? Won't I lose any other functionality?
I presume those container names are hardcoded, so as an alternative I could just create them; in that case, what would be the right way to obtain them?

We also see that Scope constantly uses up to 10-15% CPU (in top) - is this expected/normal behavior?

Thank you in advance!

@2opremio
Contributor

2opremio commented Jun 22, 2017

No, you won't lose any other functionality. The purpose of talking to weave is service discovery of Scope Apps (which you don't need in K8s since it's already solved) and displaying information about weave. However, that doesn't work when weave runs as a CNI plugin.

10-15% is not unusual. Is this for the scope agents or the scope app? How big are your machines? How active are they? (Number of connections, number of processes/containers ...)

@2opremio
Contributor

2opremio commented Jun 22, 2017

I presume those container names are hardcoded, so as an alternative I could just create them; in that case, what would be the right way to obtain them?

The problem is that the containers will be named differently in different instances, so it would require more work.

@andrey01
Author

No, you won't lose any other functionality. The purpose of talking to weave is service discovery of Scope Apps (which you don't need in K8s since it's already solved) and displaying information about weave. However, that doesn't work when weave runs as a CNI plugin.

OK, this is good news, as we are running weave as a CNI plugin!
So we will pass --weave=false to the scope agents.

10-15% is not unusual. Is this for the scope agents or the scope app? How big are your machines? How active are they? (Number of connections, number of processes/containers ...)

We have a test k8s cluster of 2 nodes: a master and a worker node.
Both nodes are 64-bit VMs with 2x 2.5 GHz vCPUs.

They should not be very active, as they are mainly running the k8s essentials and these deployments:

deploy/weave-scope-app     
deploy/heapster            
deploy/kube-dns            
deploy/kube-state-metrics  
deploy/kubernetes-dashboard
deploy/monitoring-grafana  
deploy/monitoring-influxdb 
deploy/prometheus          

The top output:

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                         
 5167 root      20   0  309268 152020  11996 S   8.8  1.5  19:32.67 scope-app --mode app --no-probe                                                                                                 
 5131 root      20   0  480408 120984  29620 S   7.5  1.2  61:35.41 scope-probe --mode probe --no-app --probe.docker.bridge=docker0 --probe.docker=true --probe.kubernetes=true 10.100.141.135:80   
12506 root      20   0 1344760  75868  26856 S   3.4  0.7  39:00.63 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manif+ 
12214 root      20   0  670532  17344   4368 S   2.7  0.2  30:13.61 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --sta+ 
 4785 root      20   0  168060   2344   1596 R   1.4  0.0   0:00.07 top                                                                                                                             

Thanks for your prompt replies!

@andrey01
Author

With --weave=false set on the scope-agent, we have lost the "WEAVE NET" information from the dashboard, which now says:

Nothing to show. This can have any of these reasons:
  We haven't received any reports from probes recently. Are the probes properly connected?
  Containers view only: you're not running Docker, or you don't have any containers

So the Weave Net view in the dashboard is the only functionality we are going to lose if we use --weave=false ...

@2opremio
Contributor

So the Weave Net view in the dashboard is the only functionality we are going to lose if we use --weave=false ...

Yes

@rade added the chore label Jul 13, 2017
@rade added this to the 1.6 milestone Jul 13, 2017
@rade
Member

rade commented Jul 17, 2017

Looks like what's left to do here is covered by #2634. -> closing
