
Add shm_volume option #427

Merged: 9 commits from feature/shm-size into master on Dec 21, 2018
Conversation

@erthalion (Contributor):

Add the possibility to mount an extra memory-backed volume to increase the available shm. Fixes #426
@coveralls commented Dec 8, 2018:

Coverage Status

Coverage remained the same at 23.766% when pulling c2fb3b4 on feature/shm-size into ff5c63d on master.

@@ -420,6 +421,10 @@ func generatePodTemplate(
Tolerations: *tolerationsSpec,
}

if shmVolume == true {


should omit comparison to bool constant; can be simplified to `shmVolume` (from megacheck)
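The linter's point can be sketched like this; the types below are simplified stand-ins for the k8s.io/api/core/v1 structs the operator actually uses, and `addShmVolume` is a hypothetical helper, not the PR's actual function:

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes API types.
type Volume struct {
	Name string
}

type PodSpec struct {
	Volumes []Volume
}

// addShmVolume appends the shared-memory volume only when enabled.
// Note the bare `if shmVolume` instead of `if shmVolume == true`,
// which is the simplification megacheck suggests.
func addShmVolume(spec *PodSpec, shmVolume bool) {
	if shmVolume {
		spec.Volumes = append(spec.Volumes, Volume{Name: "dshm"})
	}
}

func main() {
	spec := &PodSpec{}
	addShmVolume(spec, true)
	fmt.Println(spec.Volumes[0].Name)
}
```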

@erthalion changed the title from "[WIP] Add shm_size option" to "[WIP] Add shm_volume option" on Dec 19, 2018
@erthalion changed the title from "[WIP] Add shm_volume option" to "Add shm_volume option" on Dec 20, 2018
@@ -97,6 +97,12 @@ Those are parameters grouped directly under the `spec` key in the manifest.
is taken from the `pod_priority_class_name` operator parameter, if not set
then the default priority class is taken. The priority class itself must be defined in advance.

* **shmVolume**
@sdudoladov (Member) commented Dec 20, 2018:

  1. @Jan-M just to double check: the original issue description talked about modifying the configmap, not only cluster manifests. Can it be the case that we will need to enable this feature for a bunch of PG clusters, and then it will be easier to enable it in the operator and let it take care of the rest? IMHO most clusters will not need this feature, and we can leave it at the level of the cluster manifest as it is currently implemented.

  2. is the intention of this feature to mount a tmpfs volume at `/dev/shm` without restricting the volume's size?

  3. `shmVolume: true` is probably not the most descriptive name for that option. Something like `increasePGSharedMemory` or `disableSharedMemoryLimit` sounds better IMO, because it reflects the purpose of the feature.

@erthalion (Contributor, Author) replied Dec 20, 2018:

> is the intention of this feature to mount a tmpfs volume at `/dev/shm` without restricting the volume's size?

That was my initial idea, to set `SizeLimit`, but it turns out that K8s doesn't support it yet:

kubernetes/kubernetes#43823
kubernetes/kubernetes#63126
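The resulting mount (without a size limit, given the lack of `SizeLimit` support for memory-backed emptyDir volumes at the time) would look roughly like this in a pod manifest; the pod and image names are illustrative, while the volume name `dshm` and mount path `/dev/shm` come from the diff:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-example    # illustrative
spec:
  containers:
    - name: postgres
      image: postgres:11    # illustrative
      volumeMounts:
        - name: dshm
          mountPath: /dev/shm
  volumes:
    - name: dshm
      emptyDir:
        medium: Memory      # tmpfs-backed; no sizeLimit set
```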


ShmVolumeName = "dshm"
ShmVolumePath = "/dev/shm"
MinShmSize = 64 * 1024 * 1024
A member commented:
where is `MinShmSize` used? And the value is in bytes, isn't it?

@erthalion (Contributor, Author) replied Dec 20, 2018:

It's outdated, will remove
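For reference, the constant as written in the diff is indeed in bytes; a minimal sketch:

```go
package main

import "fmt"

// MinShmSize from the diff: 64 * 1024 * 1024 bytes, i.e. 64 MiB,
// matching the default Docker /dev/shm size.
const MinShmSize = 64 * 1024 * 1024

func main() {
	fmt.Println(MinShmSize) // 67108864
}
```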

Start a database pod without limitations on shm memory. By default Docker
limits `/dev/shm` to `64M`, which may not be enough if PostgreSQL uses
parallel workers heavily. If this option is enabled, a new tmpfs volume is
mounted to the target database pod to remove this limitation.
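In a cluster manifest, the option as named at this stage of the PR would be set like this; the cluster name and the other spec fields are illustrative:

```yaml
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: acid-example-cluster   # illustrative
spec:
  teamId: acid                 # illustrative
  numberOfInstances: 2
  shmVolume: true              # mount a tmpfs volume at /dev/shm
```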
A member commented:
  1. default value?
  2. related docker-library issue

@erthalion (Contributor, Author) replied:

This reference doesn't bring any value, in my opinion.

},
})

mounts := append(podSpec.Containers[0].VolumeMounts,
A member commented:

`Containers[0]` is the postgres container. Worth a comment or a named constant, since not everyone is familiar with this implicit assumption.
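One way to make that assumption explicit, as a sketch; `postgresContainerIdx` is a hypothetical constant name, not part of the PR:

```go
package main

import "fmt"

// The operator builds the postgres container first, so index 0 of the
// pod's container list is the postgres container. Naming the index
// documents that implicit assumption at every use site.
const postgresContainerIdx = 0 // assumed helper, not in the PR

// Simplified stand-in for the Kubernetes container type.
type Container struct {
	Name string
}

func main() {
	containers := []Container{{Name: "postgres"}, {Name: "sidecar"}}
	fmt.Println(containers[postgresContainerIdx].Name)
}
```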

@@ -51,7 +51,7 @@ type PostgresSpec struct {
Tolerations []v1.Toleration `json:"tolerations,omitempty"`
Sidecars []Sidecar `json:"sidecars,omitempty"`
PodPriorityClassName string `json:"pod_priority_class_name,omitempty"`
ShmVolume bool `json:"shmVolume,omitempty"`
A member commented:

should we make this name nicer?

enableSharedMemoryVolume?

@@ -38,6 +38,7 @@ type Resources struct {
NodeReadinessLabel map[string]string `name:"node_readiness_label" default:""`
MaxInstances int32 `name:"max_instances" default:"-1"`
MinInstances int32 `name:"min_instances" default:"-1"`
ShmVolume bool `name:"shm_volume" default:"true"`
@Jan-M (Member) commented Dec 20, 2018:

I think we use the "enable" prefix in other config options?

enableSharedMemoryVolume ?

@erthalion (Contributor, Author) replied:

Well, I actually disagree. The data type itself tells you it's an enable/disable option, so adding that to the name would be redundant.

@erthalion (Contributor, Author) replied:

Ok, so the agreement is to rename it since indeed we have pretty many options following the same pattern.

@erthalion erthalion merged commit d6e6b00 into master Dec 21, 2018
@erthalion erthalion deleted the feature/shm-size branch December 21, 2018 15:22
5 participants