Flux using >50% of one CPU #926
I have bumped our sock shop cluster to the latest Flux (1.2.2) and all the other latest Weave agents. The mega YAML apply, which was previously broken (due to #916), now works. Alas, Flux is still using about 60% of a core; according to Explore, about half of that is spent kubectl'ing.
No; left alone, it should run every few minutes. It will be triggered by
So: either the tick has been tuned down radically, or there are constantly new images and the automation interval has been tuned down radically (or there's a bug). Is there an indication in the logs of what triggers the sync? I'd expect some messages about jobs running if it's those, or automation.
Attaching logs provided by a Flux user who experienced this issue.
(Bear in mind we may be looking at two distinct problems here, in the two instances we are looking at.) The sock-shop fluxd has no special tuning, and I can see no obvious trigger in the logs for the
Right, no messages about running jobs. It does look an awful lot like it's just applying the namespace, applying everything else, then goto 10.
A naive attempt to reproduce this locally fails: I created a fresh minikube and pointed flux at the flux-example repo, and it behaves as expected. So there's something else going on. |
The log attached above also shows the sync running every five minutes, when left to its own devices. Perhaps the CPU use there has a different cause. |
I could reproduce this on GKE with the following repo: https://github.com/stefanprodan/flux-demo. I'm using an n1-standard-2 node, and after deploying Flux the average CPU usage increased by ~50%. This is what Flux logs every minute:
If I remove the automated deployment, the CPU usage stays the same:
pprof top:
Good sleuthing @stefanprodan! (For the sake of completeness, I should record that the sock shop was running
- added the ability to stop the refresh loop
- added priority queuing to the refresh loop
Noticing multiple `kubectl apply` invocations every second. This occurs with v1.2.0 and v1.2.1.