recommender Error adding metric sample for container #2010
Does it cause any visible issues? This is expected if we get a sample for a pod that stopped existing in the meantime.
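To illustrate the behavior described above, here is a minimal Go sketch of the warn-and-skip pattern (this is not the actual autoscaler source; the type and function names are purely illustrative): a sample referring to a pod that is no longer tracked fails the lookup, gets logged as a warning, and is dropped without affecting the rest of the cycle.

```go
package main

import (
	"fmt"
	"log"
)

// Illustrative identifiers, roughly mirroring the shape of the VPA model's
// pod/container IDs; these are not the real autoscaler types.
type podID struct{ Namespace, PodName string }

type containerID struct {
	podID
	ContainerName string
}

type sample struct {
	Container containerID
	CPUMillis int64
}

// clusterState only tracks pods the feeder currently knows about.
type clusterState struct {
	pods map[podID]bool
}

// addSample returns a "key error" when the sample refers to a pod that is not
// (or no longer) tracked, e.g. because it was deleted in the meantime.
func (c *clusterState) addSample(s sample) error {
	if !c.pods[s.Container.podID] {
		return fmt.Errorf("KeyError: %v", s.Container)
	}
	// ...aggregate the sample into the recommendation model...
	return nil
}

func main() {
	state := &clusterState{pods: map[podID]bool{
		{Namespace: "default", PodName: "live-pod"}: true,
	}}
	samples := []sample{
		{Container: containerID{podID{"default", "live-pod"}, "app"}, CPUMillis: 100},
		{Container: containerID{podID{"default", "deleted-pod"}, "app"}, CPUMillis: 50},
	}
	for _, s := range samples {
		if err := state.addSample(s); err != nil {
			// The stale sample is only warned about and skipped; nothing else breaks.
			log.Printf("Error adding metric sample for container %v: %v", s.Container, err)
			continue
		}
	}
}
```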
kube-apiserver --version apiVersion: v1
@qist Sorry, but I do not understand what problem you are facing. Is the VPA misbehaving in any way or is it just the warning message worrying you?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I am also seeing this issue. To be more specific, I am using VPA release 0.8, and I have VPAs set up for only a tiny portion of the Deployments in our cluster. When the Recommender first starts up and initializes history from Prometheus, its logs are filled with errors like the one in the issue title.
The error logs seem to come from this code: https://github.com/kubernetes/autoscaler/blob/vpa-release-0.8/vertical-pod-autoscaler/pkg/recommender/input/cluster_feeder.go#L219-L224, and the error message for each container repeats itself one or more times, which seems to suggest that the Recommender was able to get pod history samples from Prometheus, but the containers are not initialized correctly in the ClusterState (autoscaler/vertical-pod-autoscaler/pkg/recommender/model/cluster.go, lines 211 to 213 at bfee828).
These error logs only appear when the Recommender first starts up, and do not repeat in later cycles. Has anyone else seen this problem? Functionally it is not a blocker, but it is very annoying because whenever I query the Recommender logs, these errors flood the screen before the useful information starts.
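As a hedged reading of why the burst happens only at startup, here is a short sketch (the historyProvider and initFromHistory names are hypothetical, not the real cluster_feeder API): history pulled from Prometheus usually also covers pods that no longer exist or are not targeted by any VPA, so each of those containers fails the ClusterState lookup once per historical sample during initialization and then never again in later cycles.

```go
package main

import "log"

// Hypothetical identifiers for the sketch only.
type podKey struct{ Namespace, PodName string }

// podHistory holds per-container samples fetched from Prometheus,
// simplified to a slice of CPU samples per container name.
type podHistory struct {
	Samples map[string][]float64
}

type historyProvider interface {
	GetClusterHistory() (map[podKey]podHistory, error)
}

type stubProvider struct{ history map[podKey]podHistory }

func (p stubProvider) GetClusterHistory() (map[podKey]podHistory, error) {
	return p.history, nil
}

// initFromHistory runs once at startup. Prometheus typically still holds
// samples for pods that have since been deleted (or that no VPA targets), so
// every such container logs one warning per historical sample here, and then
// the warning does not recur in normal metric-scrape cycles.
func initFromHistory(p historyProvider, tracked map[podKey]bool) {
	history, err := p.GetClusterHistory()
	if err != nil {
		log.Fatalf("Cannot get cluster history: %v", err)
	}
	for pod, h := range history {
		for container, samples := range h.Samples {
			for range samples {
				if !tracked[pod] {
					log.Printf("Error adding metric sample for container {{%s %s} %s}: KeyError",
						pod.Namespace, pod.PodName, container)
				}
			}
		}
	}
}

func main() {
	provider := stubProvider{history: map[podKey]podHistory{
		{Namespace: "default", PodName: "old-pod-123"}: {Samples: map[string][]float64{"app": {0.1, 0.2}}},
	}}
	// Only currently listed, VPA-targeted pods are tracked; "old-pod-123" is not.
	initFromHistory(provider, map[podKey]bool{})
}
```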
Yes, same here. I also just stumbled upon it, thinking I had gotten my label parameters wrong.
I am still getting "Error adding metric sample for container" in our logs.
I'm having the same error as well; it appeared after a cluster upgrade to 1.24.
{"log":"W0511 06:07:10.273944 6 cluster_feeder.go:386] Error adding metric sample for container {{default my-rec-deployment-55c8bd8657-j5fmp} POD}: KeyError: {{default my-rec-deployment-55c8bd8657-j5fmp} POD}\n","stream":"stderr","time":"2019-05-11T06:07:10.276302417Z"}