
[kubernetes] Use pod annotations for service discovery #2794

Closed
mikekap opened this issue Aug 27, 2016 · 16 comments

@mikekap
Contributor

mikekap commented Aug 27, 2016

Service discovery looks perfect for monitoring things running in kube. However, setting up another etcd cluster and keeping it up-to-date with deployments would get annoying.

Kubernetes lets you store arbitrary metadata in annotations on a pod, which could include the Datadog check config. That would be a pretty lean and powerful setup - you could switch the monitoring configuration for your Deployment and get a rolling deploy of it. The metadata ends up in etcd in the end, so it's essentially the same setup. The nice part is that there are fewer moving parts - you can reuse your existing "push to prod" logic to push monitoring config as well.
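
For illustration, a Deployment whose pod template carries its own check configuration might look roughly like this (a hypothetical sketch using the `com.datadoghq.sd/*` annotation keys adopted later in this thread; the `apache` check and its `apache_status_url` instance field are only illustrative):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      template:
        metadata:
          labels:
            app: web
          annotations:
            com.datadoghq.sd/check_names: '["apache"]'
            com.datadoghq.sd/init_configs: '[{}]'
            com.datadoghq.sd/instances: '[{"apache_status_url": "http://%%host%%/server-status?auto"}]'
        spec:
          containers:
            - name: web
              image: httpd
              ports:
                - containerPort: 80

Updating these annotations in the Deployment spec would then roll out the new monitoring configuration the same way as any other template change.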

mikekap added a commit to mikekap/dd-agent that referenced this issue Sep 15, 2016
…annotations.

This change makes the docker service discovery read the kubernetes annotations
to discover how to monitor a pod. This behavior is only triggered when a
service discovery backend isn't set. The 3 annotations looked for are:
 - `com.datadoghq.sd/check_names`
 - `com.datadoghq.sd/init_configs`
 - `com.datadoghq.sd/instances`

The semantics are exactly the same as that of a KV store.

Fixes DataDog#2794
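
For readers who have not used the KV-store flavor of service discovery: the same three pieces of configuration live under the store's template path, keyed by container image, and the pod annotations simply carry identical JSON payloads. A rough comparison, assuming the default `datadog/check_configs` template prefix (adjust for a custom `sd_template_dir`):

    # KV store (etcd/Consul), keyed by container image name:
    #   datadog/check_configs/nginx/check_names:  ["nginx"]
    #   datadog/check_configs/nginx/init_configs: [{}]
    #   datadog/check_configs/nginx/instances:    [{"nginx_status_url": "http://%%host%%/nginx_status"}]
    # Equivalent pod annotations read by this change:
    annotations:
      com.datadoghq.sd/check_names: '["nginx"]'
      com.datadoghq.sd/init_configs: '[{}]'
      com.datadoghq.sd/instances: '[{"nginx_status_url": "http://%%host%%/nginx_status"}]'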
@hkaj hkaj self-assigned this Sep 22, 2016
@Arachnid

Is there any documentation or examples for this? What's the purpose of check_names in this context, given that presumably the container containing the metadata is the one you want to monitor?

@hkaj
Member

hkaj commented Oct 13, 2016

Annotations apply to pods, which can embed several containers. Even with a single container you may want to run several checks against it. For example, an nginx container could require the nginx check plus an http_check for a specific page it's serving.
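
Concretely, that could look like the following annotation set (a sketch using the `com.datadoghq.sd/*` keys discussed in this issue; the `nginx_status_url`, `name` and `url` instance fields are the usual nginx and http_check parameters, shown here only as an illustration):

    annotations:
      com.datadoghq.sd/check_names: '["nginx", "http_check"]'
      com.datadoghq.sd/init_configs: '[{}, {}]'
      com.datadoghq.sd/instances: '[{"nginx_status_url": "http://%%host%%/nginx_status"}, {"name": "landing-page", "url": "http://%%host%%/index.html"}]'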

Sadly, the documentation is a bit behind here; we will update it to include explanations and examples for this feature ASAP.

@Arachnid

Can you provide a really trivial example? As far as I can tell, in a vanilla docker setting, check_names refers to the expected names of the containers, but in the context of Kubernetes, we don't even know the name of the pod (if it's generated by a deployment), much less the name of the container.

@mikekap
Contributor Author

mikekap commented Oct 13, 2016

check_names are the names of Datadog checks (i.e. the name you would give to the check's config file). Here's a trivial example that I'm using:

      annotations:
        com.datadoghq.sd/check_names: '["fluentd"]'
        com.datadoghq.sd/init_configs: '[{}]'
        com.datadoghq.sd/instances: '[{"monitor_agent_url": "http://%%host%%:24220/api/plugins.json", "tags": ["fluentd-role:aggregator"]}]'

@Arachnid

Oops, yes, my mistake/misinterpretation. Thanks for clarifying!

@stvnwrgs

stvnwrgs commented Nov 3, 2016

@mikekap I think I'm doing something wrong, but I can't figure out what.

I have a running GKE cluster with a default dd-agent DaemonSet, and I added LOG_LEVEL=DEBUG as an env variable.

The first confusing thing is that I get the same output in my dd-pod logs as with INFO:
(just to ensure I'm not doing something completely wrong, or that something is broken)

2016-11-03 22:12:39,363 CRIT Supervisor running as root (no user in config file)
2016-11-03 22:12:39,395 INFO RPC interface 'supervisor' initialized
2016-11-03 22:12:39,395 CRIT Server 'inet_http_server' running without any HTTP authentication checking
2016-11-03 22:12:39,396 INFO RPC interface 'supervisor' initialized
2016-11-03 22:12:39,396 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2016-11-03 22:12:39,396 INFO supervisord started with pid 1
2016-11-03 22:12:40,399 INFO spawned: 'dogstatsd' with pid 14
2016-11-03 22:12:40,400 INFO spawned: 'go-metro' with pid 15
2016-11-03 22:12:40,402 INFO spawned: 'forwarder' with pid 16
2016-11-03 22:12:40,403 INFO spawned: 'collector' with pid 17
2016-11-03 22:12:40,405 INFO spawned: 'jmxfetch' with pid 18
2016-11-03 22:12:43,333 INFO success: go-metro entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
2016-11-03 22:12:44,334 INFO success: jmxfetch entered RUNNING state, process has stayed up for > than 3 seconds (startsecs)
2016-11-03 22:12:44,856 INFO exited: jmxfetch (exit status 0; expected)
2016-11-03 22:12:45,445 INFO success: dogstatsd entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2016-11-03 22:12:45,445 INFO success: forwarder entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2016-11-03 22:12:45,446 INFO success: collector entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2016-11-03 22:12:45,600 INFO exited: go-metro (exit status 0; expected)

After that I've added a redis pod+service with the following configuration:

apiVersion: v1
kind: Pod
metadata:
  name: redis
  annotations:
    com.datadoghq.sd/check_names: '["redis"]'
    com.datadoghq.sd/init_configs: '[{}]'
    com.datadoghq.sd/instances: '[{"host": "%%host%%", "port": 6379}]'
  labels:
    microservice: test
spec:
  containers:
    - name: redis
      image: redis
      ports:
        - containerPort: 6379

I can't see any metrics in my dd Redis dashboard. Do you have a clue what I'm doing wrong?

@mikekap
Contributor Author

mikekap commented Nov 3, 2016

@stvnwrgs You probably want to check the dd-agent collector logs - you'd want to exec into the dd-agent container and `cat /var/log/datadog-agent/collector.log`. You may see something useful there.

Separately, you may want to be careful about depending on this (undocumented) feature for now, since #2901 changes it a little (the annotations are different).

@stvnwrgs

stvnwrgs commented Nov 3, 2016

@mikekap: Thanks for the tip!
I guess I also forgot to set the backend.

- name: SD_BACKEND
  value: docker

Do I also have to set SD_CONFIG_BACKEND? From the commits it's not quite clear to me.

Edit: I get a lot of:

2016-11-03 22:47:14 UTC | DEBUG | dd.collector | utils.service_discovery.sd_docker_backend(sd_docker_backend.py:289) | No config template for container 398b36457308 with identifier gcr.io/google_containers/kube-proxy:604c3cbc73e98642406245f4fff461ee. It will be left unconfigured.
2016-11-03 22:47:14 UTC | WARNING | dd.collector | utils.service_discovery.sd_docker_backend(sd_docker_backend.py:318) | No supported configuration backend was provided, using auto-config only.
2016-11-03 22:47:14 UTC | DEBUG | dd.collector | utils.service_discovery.abstract_config_store(abstract_config_store.py:168) | No auto config was found for image gcr.io/google_containers/pause-amd64:3.0, leaving it alone.

And

2016-11-03 22:46:53 UTC | DEBUG | dd.collector | utils.service_discovery.sd_docker_backend(sd_docker_backend.py:97) | Couldn't find the IP address for container dbafb5db0ae5 (redis), using the kubernetes way.
2016-11-03 22:47:13 UTC | DEBUG | dd.collector | utils.service_discovery.sd_docker_backend(sd_docker_backend.py:89) | No IP address was found in container dbafb5db0ae5 (redis) networks, trying with the IPAddress field
2016-11-03 22:47:13 UTC | DEBUG | dd.collector | utils.service_discovery.sd_docker_backend(sd_docker_backend.py:97) | Couldn't find the IP address for container dbafb5db0ae5 (redis), using the kubernetes way.
2016-11-03 22:47:14 UTC | DEBUG | dd.collector | utils.service_discovery.sd_docker_backend(sd_docker_backend.py:89) | No IP address was found in container dbafb5db0ae5 (redis) networks, trying with the IPAddress field

@mikekap
Contributor Author

mikekap commented Nov 3, 2016

I have two environment variables:

        - name: SD_BACKEND
          value: docker
        - name: KUBERNETES
          value: "yes"

Other than that, it looks like it should work.

@stvnwrgs

stvnwrgs commented Nov 3, 2016

Well, there is still nothing to see in my dashboard. Shouldn't there be logs like "redis metric collector started, gathered metrics, pushed metrics", etc.?

Here is a little bit more output from the collector:

2016-11-03 23:47:47 UTC | DEBUG | dd.collector | collector(agent.py:305) | Sleeping for 15 seconds
2016-11-03 23:48:02 UTC | DEBUG | dd.collector | checks.collector(collector.py:255) | Found 4 checks
2016-11-03 23:48:02 UTC | DEBUG | dd.collector | checks.collector(collector.py:260) | Starting collection run #186
2016-11-03 23:48:02 UTC | DEBUG | dd.collector | utils.subprocess_output(subprocess_output.py:63) | Popen(['iostat', '-d', '1', '2', '-x', '-k'], close_fds = True, shell = False, stdout = <open file '<fdopen>', mode 'w+b' at 0x7f73919d9270>, stderr = <open file '<fdopen>', mode 'w+b' at 0x7f73919d90c0>, stdin = None) called
2016-11-03 23:48:03 UTC | DEBUG | dd.collector | utils.subprocess_output(subprocess_output.py:63) | Popen(['ps', 'auxww'], close_fds = True, shell = False, stdout = <open file '<fdopen>', mode 'w+b' at 0x7f73919d90c0>, stderr = <open file '<fdopen>', mode 'w+b' at 0x7f73919d9270>, stdin = None) called
2016-11-03 23:48:03 UTC | DEBUG | dd.collector | utils.subprocess_output(subprocess_output.py:63) | Popen(['mpstat', '1', '3'], close_fds = True, shell = False, stdout = <open file '<fdopen>', mode 'w+b' at 0x7f73919d90c0>, stderr = <open file '<fdopen>', mode 'w+b' at 0x7f73919d9b70>, stdin = None) called
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(unix.py:601) | Cannot extract cpu value %user from ['Average:', 'all', '0.75', '0.08', '1.08', '0.00', '0.00', '0.08', '0.00', '0.00', '0.00', '98.00'] (['23:48:04', 'CPU', '%usr', '%nice', '%sys', '%iowait', '%irq', '%soft', '%steal', '%guest', '%gnice', '%idle'])
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(collector.py:389) | Running check ntp
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.ntp(__init__.py:763) | Not running instance #0 of check ntp as it ran less than 900s ago
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | aggregator(aggregator.py:957) | received 0 payloads since last flush
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(collector.py:451) | Check ntp ran in 0.00 s
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(collector.py:389) | Running check disk
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: overlay
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: tmpfs
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: tmpfs
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: /dev/sda1
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: /dev/sda1
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: /dev/sda1
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: run
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: tmpfs
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: tmpfs
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: /dev/sda1
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: /dev/sda1
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: /dev/sda1
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: shm
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: tmpfs
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.disk(disk.py:116) | Passed: tmpfs
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | aggregator(aggregator.py:957) | received 0 payloads since last flush
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(collector.py:451) | Check disk ran in 0.07 s
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(collector.py:389) | Running check kubernetes
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:218) | Subcontainer, doesn't have any labels
[previous line repeated 49 times in total]
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:303) | Unable to retrieve container limits for dd-agent: 'limits'
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:304) | Container object for dd-agent: {u'terminationMessagePath': u'/dev/termination-log', u'name': u'dd-agent', u'image': u'datadog/docker-dd-agent:latest', u'volumeMounts': [{u'mountPath': u'/var/run/docker.sock', u'name': u'dockersocket'}, {u'readOnly': True, u'mountPath': u'/host/proc', u'name': u'procdir'}, {u'readOnly': True, u'mountPath': u'/host/sys/fs/cgroup', u'name': u'cgroups'}, {u'readOnly': True, u'mountPath': u'/var/run/secrets/kubernetes.io/serviceaccount', u'name': u'default-token-cwrug'}], u'env': [{u'name': u'API_KEY', u'value': u'xxxxxxx'}, {u'name': u'LOG_LEVEL', u'value': u'DEBUG'}, {u'name': u'KUBERNETES', u'value': u'yes'}, {u'name': u'SD_BACKEND', u'value': u'docker'}], u'imagePullPolicy': u'Always', u'ports': [{u'protocol': u'UDP', u'name': u'dogstatsdport', u'containerPort': 8125}], u'resources': {u'requests': {u'cpu': u'100m'}}}
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:303) | Unable to retrieve container limits for dnsmasq: 'limits'
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:304) | Container object for dnsmasq: {u'livenessProbe': {u'httpGet': {u'path': u'/healthz-dnsmasq', u'scheme': u'HTTP', u'port': 8080}, u'timeoutSeconds': 5, u'initialDelaySeconds': 60, u'periodSeconds': 10, u'successThreshold': 1, u'failureThreshold': 5}, u'terminationMessagePath': u'/dev/termination-log', u'name': u'dnsmasq', u'image': u'eu.gcr.io/google_containers/kube-dnsmasq-amd64:1.4', u'args': [u'--cache-size=1000', u'--no-resolv', u'--server=127.0.0.1#10053', u'--log-facility=-'], u'volumeMounts': [{u'readOnly': True, u'mountPath': u'/var/run/secrets/kubernetes.io/serviceaccount', u'name': u'default-token-eajek'}], u'imagePullPolicy': u'IfNotPresent', u'ports': [{u'protocol': u'UDP', u'name': u'dns', u'containerPort': 53}, {u'protocol': u'TCP', u'name': u'dns-tcp', u'containerPort': 53}], u'resources': {}}
2016-11-03 23:48:07 UTC | ERROR | dd.collector | checks.kubernetes(kubernetes.py:315) | Unable to retrieve container requests for dnsmasq: 'requests'
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:316) | Container object for dnsmasq: {u'livenessProbe': {u'httpGet': {u'path': u'/healthz-dnsmasq', u'scheme': u'HTTP', u'port': 8080}, u'timeoutSeconds': 5, u'initialDelaySeconds': 60, u'periodSeconds': 10, u'successThreshold': 1, u'failureThreshold': 5}, u'terminationMessagePath': u'/dev/termination-log', u'name': u'dnsmasq', u'image': u'eu.gcr.io/google_containers/kube-dnsmasq-amd64:1.4', u'args': [u'--cache-size=1000', u'--no-resolv', u'--server=127.0.0.1#10053', u'--log-facility=-'], u'volumeMounts': [{u'readOnly': True, u'mountPath': u'/var/run/secrets/kubernetes.io/serviceaccount', u'name': u'default-token-eajek'}], u'imagePullPolicy': u'IfNotPresent', u'ports': [{u'protocol': u'UDP', u'name': u'dns', u'containerPort': 53}, {u'protocol': u'TCP', u'name': u'dns-tcp', u'containerPort': 53}], u'resources': {}}
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:303) | Unable to retrieve container limits for redis: 'limits'
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:304) | Container object for redis: {u'terminationMessagePath': u'/dev/termination-log', u'name': u'redis', u'image': u'redis', u'volumeMounts': [{u'readOnly': True, u'mountPath': u'/var/run/secrets/kubernetes.io/serviceaccount', u'name': u'default-token-cwrug'}], u'imagePullPolicy': u'Always', u'ports': [{u'protocol': u'TCP', u'containerPort': 6379}], u'resources': {u'requests': {u'cpu': u'100m'}}}
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:303) | Unable to retrieve container limits for kube-proxy: 'limits'
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:304) | Container object for kube-proxy: {u'terminationMessagePath': u'/dev/termination-log', u'name': u'kube-proxy', u'image': u'gcr.io/google_containers/kube-proxy:604c3cbc73e98642406245f4fff461ee', u'volumeMounts': [{u'readOnly': True, u'mountPath': u'/etc/ssl/certs', u'name': u'ssl-certs-host'}, {u'mountPath': u'/var/log', u'name': u'varlog'}, {u'mountPath': u'/var/lib/kube-proxy/kubeconfig', u'name': u'kubeconfig'}], u'command': [u'/bin/sh', u'-c', u'kube-proxy --master=https://130.211.69.224 --kubeconfig=/var/lib/kube-proxy/kubeconfig --cluster-cidr=10.112.0.0/14 --resource-container="" --v=2 1>>/var/log/kube-proxy.log 2>&1'], u'imagePullPolicy': u'IfNotPresent', u'securityContext': {u'privileged': True}, u'resources': {u'requests': {u'cpu': u'100m'}}}
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:337) | Unable to retrieve pod kind for pod {u'status': {u'containerStatuses': [{u'restartCount': 0, u'name': u'redis', u'image': u'redis', u'imageID': u'docker://sha256:74b99a81add5d77beab8af2507083fb62117362a238c7cfe38b876e4e17f2c7a', u'state': {u'running': {u'startedAt': u'2016-11-03T22:46:09Z'}}, u'ready': True, u'lastState': {}, u'containerID': u'docker://dbafb5db0ae5eb429a708b7e00077371cbbd36c46bdd6d288c1f106443b9052a'}], u'podIP': u'10.112.0.16', u'startTime': u'2016-11-03T22:46:08Z', u'hostIP': u'10.240.0.4', u'phase': u'Running', u'conditions': [{u'status': u'True', u'lastProbeTime': None, u'type': u'Initialized', u'lastTransitionTime': u'2016-11-03T22:46:08Z'}, {u'status': u'True', u'lastProbeTime': None, u'type': u'Ready', u'lastTransitionTime': u'2016-11-03T22:46:09Z'}, {u'status': u'True', u'lastProbeTime': None, u'type': u'PodScheduled', u'lastTransitionTime': u'2016-11-03T22:46:08Z'}]}, u'spec': {u'dnsPolicy': u'ClusterFirst', u'securityContext': {}, u'serviceAccountName': u'default', u'serviceAccount': u'default', u'terminationGracePeriodSeconds': 30, u'restartPolicy': u'Always', u'volumes': [{u'secret': {u'defaultMode': 420, u'secretName': u'default-token-cwrug'}, u'name': u'default-token-cwrug'}], u'containers': [{u'terminationMessagePath': u'/dev/termination-log', u'name': u'redis', u'image': u'redis', u'volumeMounts': [{u'readOnly': True, u'mountPath': u'/var/run/secrets/kubernetes.io/serviceaccount', u'name': u'default-token-cwrug'}], u'imagePullPolicy': u'Always', u'ports': [{u'protocol': u'TCP', u'containerPort': 6379}], u'resources': {u'requests': {u'cpu': u'100m'}}}], u'nodeName': u'gke-cluster-1-default-pool-fbdd010c-xw99'}, u'metadata': {u'name': u'redis', u'labels': {u'microservice': u'test'}, u'namespace': u'default', u'resourceVersion': u'5473', u'creationTimestamp': u'2016-11-03T22:46:08Z', u'annotations': {u'com.datadoghq.sd/init_configs': u'[{}]', u'com.datadoghq.sd/instances': u'[{"host": "%%host%%", "port": 6379}]', u'kubernetes.io/config.seen': u'2016-11-03T22:46:08.920290648Z', u'com.datadoghq.sd/check_names': u'["redis"]', u'kubernetes.io/config.source': u'api', u'kubernetes.io/limit-ranger': u'LimitRanger plugin set: cpu request for container redis'}, u'selfLink': u'/api/v1/namespaces/default/pods/redis', u'uid': u'4fd7ffd3-a217-11e6-802c-42010a840fc5'}}: 'kubernetes.io/created-by'
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.kubernetes(kubernetes.py:337) | Unable to retrieve pod kind for pod {u'status': {u'phase': u'Pending', u'conditions': [{u'status': u'True', u'lastProbeTime': None, u'type': u'PodScheduled', u'lastTransitionTime': u'2016-11-03T21:33:30Z'}]}, u'spec': {u'dnsPolicy': u'ClusterFirst', u'securityContext': {}, u'nodeName': u'gke-cluster-1-default-pool-fbdd010c-xw99', u'hostNetwork': True, u'terminationGracePeriodSeconds': 30, u'restartPolicy': u'Always', u'volumes': [{u'hostPath': {u'path': u'/usr/share/ca-certificates'}, u'name': u'ssl-certs-host'}, {u'hostPath': {u'path': u'/var/lib/kube-proxy/kubeconfig'}, u'name': u'kubeconfig'}, {u'hostPath': {u'path': u'/var/log'}, u'name': u'varlog'}], u'containers': [{u'terminationMessagePath': u'/dev/termination-log', u'name': u'kube-proxy', u'image': u'gcr.io/google_containers/kube-proxy:604c3cbc73e98642406245f4fff461ee', u'volumeMounts': [{u'readOnly': True, u'mountPath': u'/etc/ssl/certs', u'name': u'ssl-certs-host'}, {u'mountPath': u'/var/log', u'name': u'varlog'}, {u'mountPath': u'/var/lib/kube-proxy/kubeconfig', u'name': u'kubeconfig'}], u'command': [u'/bin/sh', u'-c', u'kube-proxy --master=https://130.211.69.224 --kubeconfig=/var/lib/kube-proxy/kubeconfig --cluster-cidr=10.112.0.0/14 --resource-container="" --v=2 1>>/var/log/kube-proxy.log 2>&1'], u'imagePullPolicy': u'IfNotPresent', u'securityContext': {u'privileged': True}, u'resources': {u'requests': {u'cpu': u'100m'}}}]}, u'metadata': {u'name': u'kube-proxy-gke-cluster-1-default-pool-fbdd010c-xw99', u'labels': {u'tier': u'node', u'component': u'kube-proxy'}, u'namespace': u'kube-system', u'creationTimestamp': None, u'annotations': {u'kubernetes.io/config.hash': u'0853156d068e1093a27bf23c08ff7c31', u'kubernetes.io/config.source': u'file', u'kubernetes.io/config.seen': u'2016-11-03T21:31:54.993840051Z'}, u'selfLink': u'/api/v1/pods/namespaces/kube-proxy-gke-cluster-1-default-pool-fbdd010c-xw99/kube-system', u'uid': u'0853156d068e1093a27bf23c08ff7c31'}}: 'kubernetes.io/created-by'
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | aggregator(aggregator.py:957) | received 0 payloads since last flush
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(collector.py:451) | Check kubernetes ran in 0.12 s
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(collector.py:389) | Running check docker_daemon
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | docker.auth.auth(auth.py:189) | File doesn't exist
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/a656c81371c2c848bade60d2959518019f8dcabaeb1b78621c2292257154354a/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/a656c81371c2c848bade60d2959518019f8dcabaeb1b78621c2292257154354a/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/a656c81371c2c848bade60d2959518019f8dcabaeb1b78621c2292257154354a/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/a656c81371c2c848bade60d2959518019f8dcabaeb1b78621c2292257154354a/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/a656c81371c2c848bade60d2959518019f8dcabaeb1b78621c2292257154354a/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/042c45ec33daf95783431e8de37b6f24d386a0f463be20104c1ad73f2bd75214/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/042c45ec33daf95783431e8de37b6f24d386a0f463be20104c1ad73f2bd75214/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/042c45ec33daf95783431e8de37b6f24d386a0f463be20104c1ad73f2bd75214/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/042c45ec33daf95783431e8de37b6f24d386a0f463be20104c1ad73f2bd75214/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/042c45ec33daf95783431e8de37b6f24d386a0f463be20104c1ad73f2bd75214/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/521df2541d58ca020ac0346fd767dc15259bd99a4196996df221399eadd0af21/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/521df2541d58ca020ac0346fd767dc15259bd99a4196996df221399eadd0af21/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/521df2541d58ca020ac0346fd767dc15259bd99a4196996df221399eadd0af21/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/521df2541d58ca020ac0346fd767dc15259bd99a4196996df221399eadd0af21/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/521df2541d58ca020ac0346fd767dc15259bd99a4196996df221399eadd0af21/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/b322e60ec3d74722a159eeac2c02bc5c4640c0af3a78e237afcd2799017cb818/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/b322e60ec3d74722a159eeac2c02bc5c4640c0af3a78e237afcd2799017cb818/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/b322e60ec3d74722a159eeac2c02bc5c4640c0af3a78e237afcd2799017cb818/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/b322e60ec3d74722a159eeac2c02bc5c4640c0af3a78e237afcd2799017cb818/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/b322e60ec3d74722a159eeac2c02bc5c4640c0af3a78e237afcd2799017cb818/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/16ece4416d701c4d5d39f08a63ba3e568787c87dc4a114783a0e6d6405a1d21f/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/16ece4416d701c4d5d39f08a63ba3e568787c87dc4a114783a0e6d6405a1d21f/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/16ece4416d701c4d5d39f08a63ba3e568787c87dc4a114783a0e6d6405a1d21f/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/16ece4416d701c4d5d39f08a63ba3e568787c87dc4a114783a0e6d6405a1d21f/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/16ece4416d701c4d5d39f08a63ba3e568787c87dc4a114783a0e6d6405a1d21f/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/815641a9543533529454b69a17dea66df73fbbe7630aba513c52d5c8adf27c65/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/815641a9543533529454b69a17dea66df73fbbe7630aba513c52d5c8adf27c65/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/815641a9543533529454b69a17dea66df73fbbe7630aba513c52d5c8adf27c65/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/815641a9543533529454b69a17dea66df73fbbe7630aba513c52d5c8adf27c65/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/815641a9543533529454b69a17dea66df73fbbe7630aba513c52d5c8adf27c65/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/e70eed2acc73e955ccd712fde975e321d959b1ccc9cfac07b3d2d6faabf66f64/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/e70eed2acc73e955ccd712fde975e321d959b1ccc9cfac07b3d2d6faabf66f64/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/e70eed2acc73e955ccd712fde975e321d959b1ccc9cfac07b3d2d6faabf66f64/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/e70eed2acc73e955ccd712fde975e321d959b1ccc9cfac07b3d2d6faabf66f64/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/e70eed2acc73e955ccd712fde975e321d959b1ccc9cfac07b3d2d6faabf66f64/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/dbafb5db0ae5eb429a708b7e00077371cbbd36c46bdd6d288c1f106443b9052a/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/dbafb5db0ae5eb429a708b7e00077371cbbd36c46bdd6d288c1f106443b9052a/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/dbafb5db0ae5eb429a708b7e00077371cbbd36c46bdd6d288c1f106443b9052a/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/dbafb5db0ae5eb429a708b7e00077371cbbd36c46bdd6d288c1f106443b9052a/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/dbafb5db0ae5eb429a708b7e00077371cbbd36c46bdd6d288c1f106443b9052a/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/398b36457308b77e01e073e6b1112e0ff13584cd1cda0e79b288b9a00058ad5c/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/398b36457308b77e01e073e6b1112e0ff13584cd1cda0e79b288b9a00058ad5c/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/398b36457308b77e01e073e6b1112e0ff13584cd1cda0e79b288b9a00058ad5c/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/398b36457308b77e01e073e6b1112e0ff13584cd1cda0e79b288b9a00058ad5c/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/398b36457308b77e01e073e6b1112e0ff13584cd1cda0e79b288b9a00058ad5c/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/02e1048cc0fbaa334fa15ec293c7b54925ced6e8d097946e5d543b5c7e3e2f68/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/02e1048cc0fbaa334fa15ec293c7b54925ced6e8d097946e5d543b5c7e3e2f68/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/02e1048cc0fbaa334fa15ec293c7b54925ced6e8d097946e5d543b5c7e3e2f68/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/02e1048cc0fbaa334fa15ec293c7b54925ced6e8d097946e5d543b5c7e3e2f68/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/02e1048cc0fbaa334fa15ec293c7b54925ced6e8d097946e5d543b5c7e3e2f68/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/089c494b574abc13777ff23385c5bd55761afcdc146348343e5e8e9a0321db23/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/089c494b574abc13777ff23385c5bd55761afcdc146348343e5e8e9a0321db23/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/089c494b574abc13777ff23385c5bd55761afcdc146348343e5e8e9a0321db23/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/089c494b574abc13777ff23385c5bd55761afcdc146348343e5e8e9a0321db23/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/089c494b574abc13777ff23385c5bd55761afcdc146348343e5e8e9a0321db23/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/25e519646e63390c2739d9aa4b6c2d7f351acb1df6d18f4ee13cdf64a830b927/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/25e519646e63390c2739d9aa4b6c2d7f351acb1df6d18f4ee13cdf64a830b927/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/25e519646e63390c2739d9aa4b6c2d7f351acb1df6d18f4ee13cdf64a830b927/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/25e519646e63390c2739d9aa4b6c2d7f351acb1df6d18f4ee13cdf64a830b927/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/25e519646e63390c2739d9aa4b6c2d7f351acb1df6d18f4ee13cdf64a830b927/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/ff006fa3deba227fa61aa089f90155284f785e7afbf7294b8fd1f24f39851051/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/ff006fa3deba227fa61aa089f90155284f785e7afbf7294b8fd1f24f39851051/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/ff006fa3deba227fa61aa089f90155284f785e7afbf7294b8fd1f24f39851051/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/ff006fa3deba227fa61aa089f90155284f785e7afbf7294b8fd1f24f39851051/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/ff006fa3deba227fa61aa089f90155284f785e7afbf7294b8fd1f24f39851051/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/13b78344048ad31d3484c3b776ae57336f45606553d6511f475c0e6582a2f07b/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/13b78344048ad31d3484c3b776ae57336f45606553d6511f475c0e6582a2f07b/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/13b78344048ad31d3484c3b776ae57336f45606553d6511f475c0e6582a2f07b/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/13b78344048ad31d3484c3b776ae57336f45606553d6511f475c0e6582a2f07b/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/13b78344048ad31d3484c3b776ae57336f45606553d6511f475c0e6582a2f07b/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/c65de70508a24346f1ef879f26b2091b31433fb7d43d0e03cc768092927b5822/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/c65de70508a24346f1ef879f26b2091b31433fb7d43d0e03cc768092927b5822/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/c65de70508a24346f1ef879f26b2091b31433fb7d43d0e03cc768092927b5822/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/c65de70508a24346f1ef879f26b2091b31433fb7d43d0e03cc768092927b5822/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/c65de70508a24346f1ef879f26b2091b31433fb7d43d0e03cc768092927b5822/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/0a0595bbd5129d1324235324159e1214dca7aefa614918645fa88b6c751c5107/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/0a0595bbd5129d1324235324159e1214dca7aefa614918645fa88b6c751c5107/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/0a0595bbd5129d1324235324159e1214dca7aefa614918645fa88b6c751c5107/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/0a0595bbd5129d1324235324159e1214dca7aefa614918645fa88b6c751c5107/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/0a0595bbd5129d1324235324159e1214dca7aefa614918645fa88b6c751c5107/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/memory/6a40faaf036c9574d30ca10c2e9470151b8a9f223c554d033b5e2870ac360d0b/memory.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_limit, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:555) | Couldn't compute docker.mem.sw_in_use, some keys were missing.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu,cpuacct/6a40faaf036c9574d30ca10c2e9470151b8a9f223c554d033b5e2870ac360d0b/cpuacct.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/cpu/6a40faaf036c9574d30ca10c2e9470151b8a9f223c554d033b5e2870ac360d0b/cpu.stat
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.docker_daemon(docker_daemon.py:776) | Opening cgroup file: /host/sys/fs/cgroup/blkio/6a40faaf036c9574d30ca10c2e9470151b8a9f223c554d033b5e2870ac360d0b/blkio.throttle.io_service_bytes
2016-11-03 23:48:07 UTC | INFO | dd.collector | checks.docker_daemon(docker_daemon.py:785) | Can't open /host/sys/fs/cgroup/blkio/6a40faaf036c9574d30ca10c2e9470151b8a9f223c554d033b5e2870ac360d0b/blkio.throttle.io_service_bytes. Metrics for this container are skipped.
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | aggregator(aggregator.py:957) | received 0 payloads since last flush
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(collector.py:451) | Check docker_daemon ran in 0.05 s
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | aggregator(aggregator.py:957) | received 0 payloads since last flush
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(emitter.py:66) | http_emitter: attempting postback to http://localhost:17123
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(emitter.py:90) | payload_size=215547, compressed_size=8552, compression_ratio=25.204
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(emitter.py:105) | Payload accepted
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.check_status(check_status.py:136) | Persisting status to /opt/datadog-agent/run/CollectorStatus.pickle
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | checks.collector(collector.py:520) | Finished run #186. Collection time: 4.65s. Emit time: 0.02s
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | util(util.py:205) | Resetting watchdog for 150
2016-11-03 23:48:07 UTC | DEBUG | dd.collector | collector(agent.py:305) | Sleeping for 15 seconds

@mikekap
Contributor Author

mikekap commented Nov 4, 2016

One thing of note is that the collector doesn't pick up annotation changes when the container doesn't restart (i.e. if you just edit the pod and add annotations, dd-agent won't pick that up). Could you try restarting dd-agent and posting the logs from the collector's launch?

@stvnwrgs

stvnwrgs commented Nov 4, 2016

I did. Same behavior as before.

@hkaj
Member

hkaj commented Nov 4, 2016

Hi @stvnwrgs
Could you send a flare from this agent to support? From the logs you attached it doesn't look like service discovery is enabled.

@stvnwrgs

@hkaj Opened the ticket, but still no response... https://help.datadoghq.com/hc/en-us/requests/70927

More documentation about this would be really helpful! The docs don't describe Kubernetes as a backend.

@mgood

mgood commented Jan 21, 2017

@stvnwrgs in case you or someone else is still running into this: the feature is documented now, but it is not in a released version yet. It should be part of the 5.11 release. I had submitted a support ticket since I was confused by this too; I had just installed the latest agent, and since the feature was included in the docs I assumed it was already supported.

@hkaj
Member

hkaj commented Jan 23, 2017

@mgood yeah, sorry about that. We pulled the trigger on the documentation too early. We updated it last week to clarify that it will be available starting with 5.11.
