[question] Launch docker container with hard memory limit higher than specified in task resources #2093
Seems like a duplicate of #2082 :)
@jippi If I understand correctly, #2082 is about not using resource limits for tasks at all, and therefore it is not clear how to allocate them. But I suggest that the memory limit used for allocation planning and the actual hard memory limit on the docker container should not necessarily be the same value. Plan for average consumption, launch with the worst-case limit.
I think the underlying conclusion is the same: if you don't bin-pack for the worst case, things can go south if all containers suddenly decide to max out at the same time. Then you either OOM entirely, or swap so badly you might as well be offline :)
Yeah, they can go south. But it's always a compromise. Our application is not a bank. We can afford that low-probability risk for the benefit of better utilizing the resources and paying less for them. Why be so restrictive?
I'm not core or HashiCorp-employed, so I can't speak on their behalf, but personally I would prefer Nomad to not allow me to step on my own toes at 3am because of something bad I did 3 months ago in a job spec, or something one of my colleagues or a rogue developer decided to do :)
Hey @drscre, Nomad needs to know the peak usage so it can properly bin-pack. In the future Nomad will support over-subscription, so that even though it has reserved that space on the node for your services, it could re-use it for other, lower quality-of-service jobs such as low-priority batch jobs. For now Nomad does not have this. Thanks,
For those who don't mind building Nomad from source, there is a trivial patch for Nomad 0.5.1. It adds a "memory_mb" docker driver option which, if set to non-zero, overrides the memory limit specified in task resources. https://gist.github.com/drscre/4b40668bb96081763f079085617e6056
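Going by the gist's description only (this is the custom patched option, not a stock Nomad 0.5.1 setting), usage would presumably look something like the sketch below; the image name and sizes are made up:

```hcl
task "app" {
  driver = "docker"

  config {
    image     = "example/app:latest" # placeholder image
    memory_mb = 1024                 # patched option: non-zero value overrides the hard memory limit given to Docker
  }

  resources {
    memory = 256 # MB; what the scheduler still uses for bin-packing
  }
}
```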
This is a complete killer for us. We have ~7 containers that we are trying to set up as separate tasks inside a group. Unfortunately, these containers are peaky in terms of memory usage. This means that either we: …
or …
I'd like to emphasize that we are currently running these exact containers outside of Nomad without issue. As far as I'm concerned, the resources denoted in the …
CPU resource limits are soft and can be exceeded; the process gets throttled appropriately if there is too much contention. Memory should be handled similarly, no? I.e. we can have a limit for bin packing and another hard limit to protect from memory leaks etc. Docker supports both.
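For comparison, plain Docker already exposes both knobs on the CLI; a rough example (image name and sizes are illustrative):

```sh
# --memory-reservation is the soft limit (reclaim target under memory pressure);
# --memory is the hard ceiling above which the container gets OOM-killed.
docker run -d \
  --memory-reservation=256m \
  --memory=1g \
  example/app:latest
```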
Looking forward to this as well, I'd like to decide when to shoot myself in the foot :)
Coming soon in a 0.11.X release - stay tuned.
Any update, friends? @dadgar?
Fixes #2093. Enables configuring `memory_hard_limit` in the docker config stanza for tasks. If set, this field will be passed to the container runtime as `--memory`, and the `memory` value from the task resources configuration will be passed as `--memory-reservation`, creating hard and soft memory limits for tasks using the docker task driver.
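Based on that description, a job spec using the new option should look roughly like this (image name and sizes are illustrative, not taken from the PR):

```hcl
task "app" {
  driver = "docker"

  config {
    image             = "example/app:latest" # placeholder image
    memory_hard_limit = 1024                 # MB; passed to Docker as the hard limit (--memory)
  }

  resources {
    memory = 256 # MB; used for bin-packing and passed to Docker as the soft limit (--memory-reservation)
  }
}
```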
This solution works for Linux containers, but Windows does not support MemoryReservation, so the … From what I can tell, when using Windows containers the …
I've tried passing in the …
It would be nice if these two things were separated. Docker does not prevent me from starting 6 containers with …
Edit: I spoke too soon and didn't test this enough before posting; the …
@winstonhenke Reading through https://docs.docker.com/config/containers/resource_constraints/ I get the impression these config options are Linux-specific on the Docker side. I think it's because Windows (and other OSs) don't have an equivalent to Linux's support for …
@shoenig I updated my comment, I think I spoke too soon and was misunderstanding the …
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
We are using Nomad to deploy microservices wrapped in docker containers.
But memory consumption of microservices is non-uniform.
For example, a microservice can consume on average, say, 50mb. But there is a heavy endpoint which is rarely called and consumes, say, 100mb.
We have to specify the memory limit based on the worst case. Most of the time memory is not fully utilized, and we have to pay for more AWS instances than we actually need.
Is there any way to tell Nomad "when planning the microservice allocation, assume 50mb usage, but actually launch the container with a 100mb limit / without a memory limit"?
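For reference, a minimal sketch of the situation described above, assuming the 50mb/100mb numbers from the example (the image name is a placeholder): today the single `memory` value is both the bin-packing figure and the container's hard limit, so it has to be sized for the rare 100mb peak rather than the 50mb average.

```hcl
task "microservice" {
  driver = "docker"

  config {
    image = "example/microservice:latest" # placeholder image
  }

  resources {
    memory = 100 # MB; sized for the worst case, even though average usage is ~50 MB
  }
}
```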