[question] Topology dashboard overcounting allocations and reserved resources. #9800
Comments
Curious - do you happen to have any prestart tasks?
Hi @idrennanvmware. No, I'm not using any prestart tasks.
The same is happening to me across multiple clusters since the 1.0 release candidates. We also have a high number of dead allocations taking up memory and space on the Nomad server nodes until we manually GC them and restart the nodes.
Hi @caiohcl, as @manveru is suggesting, this is due to terminal allocations. These allocations don't actually reserve resources as far as the scheduler is concerned, which is how you're able to get to "151%" of reserved memory used. This is definitely a bug, and it's naturally in the one spot where I didn't write tests 😂 😭 ☠️ (see nomad/ui/tests/acceptance/topology-test.js, lines 8 to 10 at 2867e26).
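For context, a figure above 100% can happen when reserved resources are summed across every allocation reported for a node, including terminal ones. Below is a minimal sketch of the filtering the dashboard would need; it is illustrative only, and the field names (clientStatus, memoryMB) are assumptions rather than the actual Nomad UI model:

```js
// Sketch only, not the actual Nomad UI code.
// Terminal client statuses in Nomad: complete, failed, lost.
const TERMINAL_STATUSES = new Set(['complete', 'failed', 'lost']);

// Sum reserved memory from non-terminal allocations only, then express it
// as a percentage of the node's total memory.
function reservedMemoryPercent(allocations, nodeMemoryMB) {
  const reservedMB = allocations
    .filter((alloc) => !TERMINAL_STATUSES.has(alloc.clientStatus))
    .reduce((sum, alloc) => sum + alloc.memoryMB, 0);
  return (reservedMB / nodeMemoryMB) * 100;
}
```

Counting terminal allocations in that sum is what lets the total climb past 100%, e.g. the 151% reported above.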
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
Hi,
I recently updated Nomad to v1.0.1, and something caught my attention. I checked the topology dashboard, and under "Cluster Details" it shows that 151% of memory is currently in use. Another thing I noticed is that the alloc count is different from what is shown in the graph.
For example, here it shows that client 046 has 7 allocs, but the graph shows only 2 (screenshot omitted).
As far as I can tell, the Topology view is also counting "completed" allocations in the Cluster Details. I wonder if I'm missing some configuration or if this is the expected behavior?
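As a rough way to check what the dashboard is counting, the allocations the server still tracks for a node can be listed through the Nomad HTTP API and grouped by ClientStatus. The snippet below is a sketch only; NOMAD_ADDR and NODE_ID are placeholders, not values from this report:

```js
// Sketch: count a node's allocations by ClientStatus via the Nomad HTTP API.
// GET /v1/node/:node_id/allocations returns all allocations known for the
// node, including terminal ones that have not been garbage collected yet.
const NOMAD_ADDR = 'http://127.0.0.1:4646'; // placeholder
const NODE_ID = '<node-id>';                // placeholder

async function allocStatusCounts() {
  const res = await fetch(`${NOMAD_ADDR}/v1/node/${NODE_ID}/allocations`);
  const allocs = await res.json();
  const counts = {};
  for (const alloc of allocs) {
    counts[alloc.ClientStatus] = (counts[alloc.ClientStatus] || 0) + 1;
  }
  return counts; // e.g. { running: 2, complete: 5 } would explain 7 vs. 2
}

allocStatusCounts().then(console.log);
```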
Thank you!