Fix Sharding Initialization with Dynamic Replica Set Configuration #1087
Conversation
**Walkthrough**

The changes in this pull request modify a Kubernetes ConfigMap template for a MongoDB sharded cluster setup. The updates introduce dynamic initialization for the replica sets of both the config server and the shard servers, replacing static commands with loops that generate member definitions from configuration values. This allows a flexible number of replicas to be defined by the user. The overall control flow remains the same, maintaining readiness checks for all components during setup.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (1)
build/charts/yorkie-cluster/charts/yorkie-mongodb/templates/sharded/configmap.yaml (1)
Line range hint `16-24`: **Enhance initialization robustness**

The current `waitUntilReady` function could be improved to handle initialization failures more gracefully and to provide better validation. Consider enhancing the initialization process with these improvements:

```diff
 function waitUntilReady {
+  local max_attempts=30
+  local attempt=1
+  local wait_time=5
   while true; do
+    if [ $attempt -gt $max_attempts ]; then
+      echo "Failed to initialize after $max_attempts attempts"
+      exit 1
+    fi
     mongosh $1 --quiet --eval "db.runCommand('ping').ok"
     if [ $? -eq 0 ]; then
+      # Validate replica set status
+      if [[ $1 == *"configsvr"* || $1 == *"shardsvr"* ]]; then
+        replStatus=$(mongosh $1 --quiet --eval "rs.status().ok")
+        if [ "$replStatus" == "1" ]; then
           break
+        fi
+      else
+        break
+      fi
     fi
-    sleep 5
+    wait_time=$(( wait_time * 2 ))
+    if [ $wait_time -gt 60 ]; then
+      wait_time=60
+    fi
+    echo "Attempt $attempt of $max_attempts failed, waiting ${wait_time}s..."
+    sleep $wait_time
+    attempt=$(( attempt + 1 ))
   done
 }
```
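As a quick illustration of the capped exponential backoff suggested above, here is a standalone sketch with the `mongosh` call stubbed out; the 5s starting delay and 60s cap mirror the suggestion, while the six-attempt loop is an arbitrary choice for demonstration:

```shell
# Standalone sketch of the capped exponential backoff (mongosh stubbed out).
# Starts at 5s, doubles each attempt, and is capped at 60s.
wait_time=5
for attempt in 1 2 3 4 5 6; do
  echo "attempt ${attempt}: would wait ${wait_time}s"
  wait_time=$(( wait_time * 2 ))
  if [ "$wait_time" -gt 60 ]; then
    wait_time=60
  fi
done
```

The delay sequence printed is 5, 10, 20, 40, 60, 60 — the cap keeps retries frequent enough to notice recovery without hammering the server.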
```yaml
mongosh $shardsvrAddr --eval 'rs.initiate({"_id":"{{ include "shardReplName" (list $.Values.name $i) }}", "members":[
{{- range $j, $e := until ($.Values.sharded.replicaCount.shardsvr | int) }}
  {"_id":{{ printf "%d" $j }},"host":"{{ printf "%s" (include "shardReplAddr" (list $.Values.name $i $j $domainSuffix)) }}",{{- if eq $j 0 }}"priority":5{{- end }} }
{{- end }}
]})'
```
🛠️ Refactor suggestion
Enhance shard server replica set configuration
The shard server replica set initialization has similar issues to the config server, plus some shard-specific concerns:
- Priority value of 5 is too high
- Missing vote configurations
- No zone/tag awareness for data locality
- No handling of even number of replicas
Consider applying this improved configuration:
```diff
 mongosh $shardsvrAddr --eval 'rs.initiate({"_id":"{{ include "shardReplName" (list $.Values.name $i) }}", "members":[
 {{- range $j, $e := until ($.Values.sharded.replicaCount.shardsvr | int) }}
-  {"_id":{{ printf "%d" $j }},"host":"{{ printf "%s" (include "shardReplAddr" (list $.Values.name $i $j $domainSuffix)) }}",{{- if eq $j 0 }}"priority":5{{- end }} }
+  {"_id":{{ printf "%d" $j }},"host":"{{ printf "%s" (include "shardReplAddr" (list $.Values.name $i $j $domainSuffix)) }}","votes":1,
+  {{- if eq $j 0 }}
+  "priority":1,
+  "tags": {"zone": "{{ $.Values.sharded.zones.primary | default "primary" }}"}
+  {{- else }}
+  "priority":0,
+  "tags": {"zone": "{{ $.Values.sharded.zones.secondary | default "secondary" }}"}
+  {{- end }} }
 {{- end }}
+{{- if eq (mod ($.Values.sharded.replicaCount.shardsvr | int) 2) 0 }}
+  ,{"_id":{{ $.Values.sharded.replicaCount.shardsvr | int }},"host":"{{ printf "%s" (include "shardReplAddr" (list $.Values.name $i ($.Values.sharded.replicaCount.shardsvr | int) $domainSuffix)) }}","arbiterOnly":true}
+{{- end }}
 ]})'
```
Consider adding the following to your values.yaml to support zone awareness:
```yaml
sharded:
  zones:
    primary: "zone1"
    secondary: "zone2"
```
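The even-replica test in the Helm suggestion above can be hard to read through the template syntax; the same check in plain shell (with a hypothetical `REPLICAS` variable standing in for `.Values.sharded.replicaCount.shardsvr`) is simply:

```shell
# Plain-shell equivalent of the Helm mod-2 check above.
# REPLICAS stands in for .Values.sharded.replicaCount.shardsvr (illustrative value).
REPLICAS=4
if [ $(( REPLICAS % 2 )) -eq 0 ]; then
  verdict="even member count: add an arbiter to keep an odd number of voters"
else
  verdict="odd member count: no arbiter needed"
fi
echo "$verdict"
```

With `REPLICAS=4` this reports that an arbiter is needed, which is exactly the branch the template's `arbiterOnly` member covers.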
```yaml
mongosh $configsvrAddr --eval 'rs.initiate({"_id":"{{ include "configReplName" (list $.Values.name) }}", "members":[
{{- range $i, $e := until ($.Values.sharded.replicaCount.configsvr | int) }}
  {"_id":{{ printf "%d" $i }},"host":"{{ printf "%s" (include "configReplAddr" (list $.Values.name $i $domainSuffix)) }}",{{- if eq $i 0 }}"priority":5{{- end }} }
{{- end }}
]})'
```
🛠️ Refactor suggestion
Review replica set configuration parameters
The config server replica set initialization has several areas for improvement:
- The priority value of 5 is unusually high. MongoDB typically uses priority values between 0 and 1.
- The configuration lacks vote settings which are important for election mechanics.
- There's no handling for even number of replicas which could lead to split-brain scenarios.
Consider applying this improved configuration:
```diff
 mongosh $configsvrAddr --eval 'rs.initiate({"_id":"{{ include "configReplName" (list $.Values.name) }}", "members":[
 {{- range $i, $e := until ($.Values.sharded.replicaCount.configsvr | int) }}
-  {"_id":{{ printf "%d" $i }},"host":"{{ printf "%s" (include "configReplAddr" (list $.Values.name $i $domainSuffix)) }}",{{- if eq $i 0 }}"priority":5{{- end }} }
+  {"_id":{{ printf "%d" $i }},"host":"{{ printf "%s" (include "configReplAddr" (list $.Values.name $i $domainSuffix)) }}","votes":1,{{- if eq $i 0 }}"priority":1{{- else }}"priority":0{{- end }} }
 {{- end }}
+{{- if eq (mod ($.Values.sharded.replicaCount.configsvr | int) 2) 0 }}
+  ,{"_id":{{ $.Values.sharded.replicaCount.configsvr | int }},"host":"{{ printf "%s" (include "configReplAddr" (list $.Values.name ($.Values.sharded.replicaCount.configsvr | int) $domainSuffix)) }}","arbiterOnly":true}
+{{- end }}
 ]})'
```
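The split-brain concern stems from MongoDB's strict-majority election rule: a primary needs floor(voters/2)+1 votes, so an even voter count buys no extra fault tolerance over the next-lower odd count. A quick arithmetic check makes this visible:

```shell
# Majority needed for a replica set election is floor(voters/2)+1.
# Note that voters=4 needs the same majority (3) as voters=5, which is
# why the suggestion adds an arbiter when the member count is even.
for voters in 3 4 5; do
  echo "voters=${voters} -> majority=$(( voters / 2 + 1 ))"
done
```

With 4 voters the set tolerates only one node loss (same as 3 voters) while adding a fifth voter or an arbiter restores a clean odd quorum.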
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
mongosh $configsvrAddr --eval 'rs.initiate({"_id":"{{ include "configReplName" (list $.Values.name) }}", "members":[
{{- range $i, $e := until ($.Values.sharded.replicaCount.configsvr | int) }}
  {"_id":{{ printf "%d" $i }},"host":"{{ printf "%s" (include "configReplAddr" (list $.Values.name $i $domainSuffix)) }}","votes":1,{{- if eq $i 0 }}"priority":1{{- else }}"priority":0{{- end }} }
{{- end }}
{{- if eq (mod ($.Values.sharded.replicaCount.configsvr | int) 2) 0 }}
  ,{"_id":{{ $.Values.sharded.replicaCount.configsvr | int }},"host":"{{ printf "%s" (include "configReplAddr" (list $.Values.name ($.Values.sharded.replicaCount.configsvr | int) $domainSuffix)) }}","arbiterOnly":true}
{{- end }}
]})'
```
Thanks for your contribution.
What this PR does / why we need it:
When MongoDB sharding is enabled in the settings, the ConfigMap changes as shown below.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation:
Checklist:
Summary by CodeRabbit
New Features
Bug Fixes