
[FLOC-4530] Pass the swappiness configuration value to the Docker API #2946

Open · wants to merge 7 commits into master

Conversation

@wallrj (Contributor) commented Nov 9, 2016

Fixes: https://clusterhq.atlassian.net/browse/FLOC-4530

In #2818 we added a swappiness parameter to the container API, but the implementation wasn't complete: the swappiness was not passed through to the Docker API, and it was not inspected by the ApplicationDeployer.

This branch adds that missing piece and a functional test that demonstrates that swappiness is set and discovered.

I haven't added an acceptance test for this because we generally trust that Docker will do the right thing if it's passed the appropriate flags.

You can see the swappiness configuration of a container by doing something like:

docker run --memory-swappiness=33 --rm busybox cat /sys/fs/cgroup/memory/memory.swappiness

Or by inspecting /sys/fs/cgroup on the host:

$ container_id=$(docker run --memory-swappiness=44 --detach busybox sleep 100)
$ cat /sys/fs/cgroup/memory/docker/$container_id/memory.swappiness 
44

I haven't found a good way to see whether the dockerized process actually gets swapped out with values of 0 and >0.
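
For context, a minimal sketch of what passing swappiness through to the Docker API can look like with docker-py (the library Flocker's DockerClient is built on); the wiring and values below are illustrative, not the exact change in this branch:

from docker import Client

client = Client(base_url=u"unix://var/run/docker.sock")
# mem_swappiness corresponds to --memory-swappiness on the CLI
host_config = client.create_host_config(mem_swappiness=33)
container = client.create_container(
    image=u"busybox",
    command=[u"cat", u"/sys/fs/cgroup/memory/memory.swappiness"],
    host_config=host_config,
)
client.start(container)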

@wallnerryan (Contributor) left a comment


Looks pretty good; I didn't review in a lot of detail.

The Docker reference says memory-swappiness is inherited from the parent if not set. I didn't see a default setting, so I assume evaluating None does this somehow?

"By default, if you are not using --memory-swappiness, memory swappiness value will be inherited from the parent." (https://docs.docker.com/engine/reference/run/#/swappiness-constraint)

Also, this only matters for Swisscom using the container agent, right? Once they switch over to Kubernetes they can set swappiness via pod specs.

 cpu_shares = None if cpu_shares == 0 else cpu_shares
-mem_limit = data[u"Config"][u"Memory"]
+mem_limit = data[u"HostConfig"][u"Memory"]
 mem_limit = None if mem_limit == 0 else mem_limit
Do we need to do a similar thing for swappiness here?
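
For illustration, a similar normalization might look like the sketch below, given that docker inspect reports -1 under HostConfig.MemorySwappiness when the value was never set (illustrative only, not the branch's actual code):

swappiness = data[u"HostConfig"][u"MemorySwappiness"]
# -1 means "never set"; normalize it to None, as is done for mem_limit above
swappiness = None if swappiness == -1 else swappiness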

application_name = u'site-example.com'
application = Application(
    name=application_name,
    image=DockerImage(repository=u'clusterhq/postgresql',

Why don't we use a smaller image here? Any reason we're using postgresql just to test the swappiness? It could mean longer pulls for new test nodes.

@wallrj (Contributor, Author) commented Nov 11, 2016

In #2946 (comment) @wallnerryan wrote:

Do we need to do a similar thing for swappiness here?

I've been thinking about it.

  • The intention (I think) was that Flocker would impose a default swappiness of 0 (do not swap); that's the default in the container API if a swappiness is not supplied.
  • And there's a configuration upgrade script which will set the swappiness of all existing containers to 0.
  • With this change, DockerClient will return Unit(..., swappiness=-1, ...) (the value returned by docker inspect when the swappiness has not been defined).
  • That should be OK, since Unit defines swappiness as an integer type; it doesn't enforce positive integers.
  • The deployer will turn that into an Application(..., swappiness=-1, ...), which is also allowed.
  • That means that all discovered containers will be found to have a different swappiness value than configured...
  • Which means that all containers will be restarted when the upgraded flocker-container-agent is started!

Which might very well be a problem: an unscheduled restart, with all containers restarting at once, may well overload the server.

I guess I should go back to the drawing board and make the default swappiness None 😢
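
Schematically, the restart trigger described above looks like this (a hypothetical simplified check, not Flocker's actual convergence code):

def needs_restart(configured_swappiness, discovered_swappiness):
    # Any mismatch between configured and discovered state forces a restart.
    return configured_swappiness != discovered_swappiness

# Configured default of 0 vs. the -1 reported by docker inspect:
print(needs_restart(0, -1))  # True, so every container restarts on upgrade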

@wallnerryan (Contributor) commented Nov 11, 2016

@wallrj

The intention was for the default to be 0, but I'm not sure that is the best way. It makes cust0 happy, but I guess the container agent will never be used by anyone but them in the short term.

My vote would be not to over-complicate it and just set the default to 0, but make it configurable if they want/need it. Swappiness should not be set to -1, but rather default to 0 and be placed into the HostConfig by the DockerClient. That way, in theory, the restart issue should only happen if a container's swappiness is changed out of band (which I'm not sure is possible); any other changes would need to happen in the deployment YAML.

I think strongly forcing 0 by default should work, and since this is really only impactful in the short term for cust0, I think this may be OK. Thoughts?
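
A sketch of that forced default (hypothetical helper name, assuming an omitted swappiness arrives as None):

DEFAULT_SWAPPINESS = 0

def effective_swappiness(configured):
    # Force swappiness to 0 unless explicitly configured; the result is
    # what would be written into the Docker HostConfig.
    return DEFAULT_SWAPPINESS if configured is None else configured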

@wallrj (Contributor, Author) commented Nov 14, 2016

I've suggested (as a workaround) changing the memory.swappiness value in the parent dockerd namespace.
I'll see if that is sufficient to address the problem, and come back to this if not.

# View the swappiness setting of the docker process namespace
# cat /sys/fs/cgroup/memory/docker/memory.swappiness
60

# Containers inherit the swappiness from docker namespace
# docker run --rm busybox cat /sys/fs/cgroup/memory/memory.swappiness
60

# Change the docker daemon swappiness
# echo 0 > /sys/fs/cgroup/memory/docker/memory.swappiness

# Containers now have the new swappiness value by default.
# docker run --rm busybox cat /sys/fs/cgroup/memory/memory.swappiness
0
