Docker memory_hard_limit bypasses quotas #9924
Comments
Hi @henrikjohansen! This looks like a general misfeature of how we handle quota accounting here. Thanks for opening this issue -- feedback like this on ENT features is hugely valuable! (cc'ing @mikenomitch as a heads up)
Hi @tgross. There are at least 3 issues here I think? 🤔
This unfortunately still seems to be the case and is a bug we intend to fix. A config option will be added to disable it.
My plan right now is to remove Docker's memory_hard_limit outright since it is superseded by memory oversubscription (memory_max).

Roadmap
The ugly part is that since this is in driver config we don't normally validate it on the server. We'll need to add a special case to peek in for this particular field. Feedback welcome! I'd love to "rip off the bandaid" with this one, as it were, and not add more features when we could just remove the deprecated one.
@schmichael As per 1.9.x, I am going to close this issue - any ENT customer running into this issue also has Sentinel available as a makeshift band-aid.
Nomad version
Nomad v1.0.2+ent (8b533db)
Issue
It seems like quota accounting is done during job submission with respect to the resources declared in the resources stanza. Quota limits for memory can thus be negated by the job operator using the memory_hard_limit task config option :(
PoC
This quota limits the default namespace to 4096MB memory:
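A minimal sketch of such a quota specification, assuming an illustrative name, description, and CPU limit (only the 4096MB memory cap is taken from the report):

```hcl
# Illustrative quota spec; applied with `nomad quota apply` and then
# attached to the default namespace. Only the 4096MB memory limit
# comes from the issue text above.
name        = "default-quota"
description = "Cap memory for the default namespace"

limit {
  region = "global"

  region_limit {
    cpu    = 2500   # illustrative value, not part of the original report
    memory = 4096   # the 4096MB memory cap described above
  }
}
```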
This jobspec declares a limit of 256MB memory and sets memory_hard_limit to twice the quota's allowed memory (8192MB).
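A sketch of such a jobspec, assuming an illustrative job name, datacenter, and Docker image; only the 256MB resources limit and the 8192MB memory_hard_limit come from the report:

```hcl
job "quota-bypass-poc" {
  datacenters = ["dc1"]      # illustrative
  namespace   = "default"

  group "poc" {
    task "poc" {
      driver = "docker"

      config {
        image = "redis:6"    # illustrative image

        # Docker driver option that raises the real cgroup memory limit
        # well past what the resources block declares.
        memory_hard_limit = 8192
      }

      resources {
        cpu    = 100         # illustrative value
        memory = 256         # the 256MB counted against the quota
      }
    }
  }
}
```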
After you plan & run the job the quota looks like this:
This is not desired behavior - at least not for us. Yes, we can block jobs from setting memory_hard_limit using Sentinel policies, but we have use-cases where this option is needed (and you need to realize this is possible in the first place).
In reality, memory_hard_limit should count towards quota consumption just like the ordinary resources declaration.