[kubernetes] add kubernetes.pods.running metric #2277
Conversation
@@ -5,6 +5,8 @@
import numbers
from fnmatch import fnmatch
import re
import json
we usually work with simplejson instead:
import simplejson as json
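A common way to follow this suggestion while staying portable is to prefer simplejson and fall back to the stdlib module under the same name. This is a hedged sketch of that idiom, not necessarily the exact code used in the agent:

```python
# Prefer simplejson (often faster, historically more consistent across
# Python versions); fall back to the stdlib json if it is not installed.
try:
    import simplejson as json
except ImportError:
    import json

# Both modules expose the same loads/dumps interface, so the rest of
# the check code does not need to care which one was imported.
data = json.loads('{"pods": 2}')
print(data["pods"])  # 2
```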
Force-pushed from bc90562 to 6406350.
Great first PR @masci! Could we refactor that so that we don't query it twice for the same data?
# at the moment the kubelet api reports data only for the current node,
# we expect exactly one item in the set.
for node_name in set(pods):
    _tags.append('node_name:{0}'.format(node_name))
There is no need to tag by node_name. As you mentioned, the kubelet will just return the data for the current node, and we already have the host name passed automatically so you don't have to add this tag.
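The refactor the reviewers asked for (call the kubelet API once and derive both labels and the running-pod count from the same payload) could be sketched as follows. The helper name and the payload shape are hypothetical simplifications, not the kubelet's actual response schema:

```python
from collections import Counter

def count_running_pods_by_rc(pods_payload):
    """Count running pods per replication controller from one API payload.

    pods_payload: parsed JSON from a single kubelet query (simplified shape).
    """
    counter = Counter()
    for pod in pods_payload.get("items", []):
        # Hypothetical label carrying the replication controller name.
        rc = pod.get("labels", {}).get("replicationController")
        if pod.get("status") == "Running" and rc:
            counter[rc] += 1
    return counter

# Toy payload for illustration only.
payload = {
    "items": [
        {"status": "Running", "labels": {"replicationController": "web"}},
        {"status": "Running", "labels": {"replicationController": "web"}},
        {"status": "Pending", "labels": {"replicationController": "db"}},
    ]
}
print(count_running_pods_by_rc(payload))
```

Because the grouping is computed from one parsed response, the same data never has to be fetched twice.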
Commits squashed into this push:

- …hosts
- added tests and fixtures for pods.running metric
- fixed typo
- cleaned up code
- python2.6 compat fix
- added checks in test_historate
- fixed reviewed items
- allow caller to avoid strings escaping while reading fixture files
- refactoring: call kubelet api only once for labels and running pods
- remove node_name tag, we already have host tag carrying same info
Nice optimization, you can squash your commits and 🚢 it. It would also be cool to rebase it with master to make sure the tests are passing.
Force-pushed from 600a389 to c37b244.
Using the kubelet API, query the number of pods and group them by the Replication Controller that spawned them. Report the number of pods spawned by each RC, tagging with kube_replication_controller and node_name, thus allowing sums.
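The reporting step described above, emitting one gauge point per replication controller with the tag named in the description, could look like this. The `gauge` callable stands in for the agent's metric-submission method and is a hypothetical simplification:

```python
def report_pods_running(gauge, counts_by_rc):
    """Emit one gauge point per replication controller.

    counts_by_rc: mapping of replication controller name -> running pod count.
    gauge: stand-in for the check's metric-submission callable.
    """
    for rc, count in counts_by_rc.items():
        tags = ['kube_replication_controller:{0}'.format(rc)]
        gauge('kubernetes.pods.running', count, tags=tags)

# Tiny stand-in for the agent's gauge method, for illustration.
emitted = []
report_pods_running(
    lambda name, value, tags: emitted.append((name, value, tags)),
    {'web': 2, 'db': 1},
)
print(emitted)
```

Tagging each point by controller (rather than emitting a single pre-summed total) is what allows the backend to sum across controllers or nodes later.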