
[api] support per-isogroup storageclasses #98

Merged
merged 3 commits into master from schallert/storageclass on Mar 6, 2019

Conversation

schallert
Collaborator

Users running Kubernetes < 1.12, in which topology-aware volume scheduling was introduced, cannot use a single StorageClass for all of their StatefulSets: there is no guarantee that the PV for a pod will be created in the zone the pod is pinned to, which can leave pods stuck in an unschedulable state.

This change allows users to specify a StorageClass per StatefulSet (i.e. per zone). That solves the above problem, since a StorageClass can be created for each zone with attributes specifying that PVs provisioned from it will land in that zone (see https://v1-11.docs.kubernetes.io/docs/concepts/storage/storage-classes/).

Fixes #92
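
For illustration, a per-zone setup on a pre-1.12 cluster could look roughly like the sketch below: one StorageClass pinned to a zone via provisioner parameters (the `zone` parameter for `kubernetes.io/gce-pd` is described in the v1.11 docs linked above), referenced by the matching isolation group via `storageClassName`. The cluster-spec layout here is an assumption for illustration, not copied from the repo's examples.

```yaml
# Sketch only: the cluster-spec field layout is assumed, not taken verbatim
# from the operator's example manifests.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-us-central1-a
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: us-central1-a          # PVs from this class are always provisioned in this zone
---
apiVersion: operator.m3db.io/v1alpha1
kind: M3DBCluster
metadata:
  name: example-cluster
spec:
  image: quay.io/m3db/m3dbnode:latest
  replicationFactor: 3
  isolationGroups:
    - name: us-central1-a
      numInstances: 1
      storageClassName: fast-us-central1-a   # per-isolation-group StorageClass from this PR
    # ...one isolation group, each with its own zone-pinned StorageClass, per zone
```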

@m3db m3db deleted a comment from codecov bot Mar 1, 2019
@m3db m3db deleted a comment from codecov bot Mar 1, 2019
@schallert schallert requested review from nerd0 and CK-Ward March 1, 2019 22:42
@schallert schallert marked this pull request as ready for review March 1, 2019 22:42
@schallert schallert force-pushed the schallert/storageclass branch from 967f158 to 851a2d8 on March 1, 2019 22:44
spec:
  image: quay.io/m3db/m3dbnode:latest
  replicationFactor: 3
  numberOfShards: 4
Collaborator Author

Note to self: change numberOfShards to 1024 or larger.

@@ -188,6 +192,9 @@ func GenerateStatefulSet(
} else {
template := cluster.Spec.DataDirVolumeClaimTemplate.DeepCopy()
template.ObjectMeta.Name = _dataVolumeName
if sc := isolationGroup.StorageClassName; sc != "" {
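
The body of the new conditional isn't shown in the quoted hunk above; a minimal sketch of what it presumably does, using the variable names from the hunk, is below. Treat the helper and its exact behavior as an assumption rather than the PR's literal code.

```go
package m3dbnode // hypothetical package name for this sketch

import corev1 "k8s.io/api/core/v1"

// applyIsogroupStorageClass mirrors the quoted hunk: if the isolation group
// names a StorageClass, set it on the data-dir PVC template so that the
// StatefulSet's volumeClaimTemplates provision volumes from that class.
func applyIsogroupStorageClass(template *corev1.PersistentVolumeClaim, storageClassName string) {
	if sc := storageClassName; sc != "" {
		template.Spec.StorageClassName = &sc
	}
}
```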
Contributor

Is there any way to ensure that there is a storage class of that name within the template itself? Does that need to be validated?

Collaborator Author

I think for now it's better to let Kube throw the error when it tries to create the StatefulSet and doesn't find it; otherwise we'd have to add new RBAC permissions to read StorageClasses.

Contributor

Fair. I wanted to ensure that it would be clear to the user when a storage class doesn't exist.
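
For context on the RBAC point above: having the operator validate the StorageClass itself would require read access to the cluster-scoped storageclasses resource, roughly the rule sketched below (what would be needed, not something this PR adds).

```yaml
# Sketch only: the extra ClusterRole access the operator would need in order
# to look up StorageClasses and validate them itself. Not part of this PR.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: m3db-operator-storageclass-reader   # hypothetical name
rules:
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
```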

@schallert schallert merged commit 5eb9f37 into master Mar 6, 2019
@schallert schallert deleted the schallert/storageclass branch March 6, 2019 15:41