From fb94d9953496ed53deedd04b2dfbb77d7b969d92 Mon Sep 17 00:00:00 2001
From: Richard Artoul
Date: Fri, 28 Sep 2018 11:26:12 -0400
Subject: [PATCH 1/4] Update bootstrapping operational guide

---
 docs/operational_guide/bootstrapping.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/operational_guide/bootstrapping.md b/docs/operational_guide/bootstrapping.md
index dbf690613e..d275a197ec 100644
--- a/docs/operational_guide/bootstrapping.md
+++ b/docs/operational_guide/bootstrapping.md
@@ -71,7 +71,7 @@ In this case, the `peers` bootstrapper running on node A will not be able to ful
 └─────────────────────────┘ └───────────────────────┘ └──────────────────────┘
 ```
 
-Note that a bootstrap consistency level of majority is the default value, but can be modified by changing the value of the key "m3db.client.bootstrap-consistency-level" in [etcd](https://coreos.com/etcd/) to one of: "none", "one", "unstrict_majority" (attempt to read from majority, but settle for less if any errors occur), "majority" (strict majority), and "all". For example, if an entire cluster with a replication factor of 3 was restarted simultaneously, all the nodes would get stuck in an infinite loop trying to peer bootstrap from each other and not achieving majority until an operator modified this value.
+Note that a bootstrap consistency level of majority is the default value, but can be modified by changing the value of the key `m3db.client.bootstrap-consistency-level` in [etcd](https://coreos.com/etcd/) to one of: `none`, `one`, `unstrict_majority` (attempt to read from majority, but settle for less if any errors occur), `majority` (strict majority), and `all`. For example, if an entire cluster with a replication factor of 3 was restarted simultaneously, all the nodes would get stuck in an infinite loop trying to peer bootstrap from each other and not achieving majority until an operator modified this value. Note that this can happen even if all the shards were in the `Available` state because M3DB nodes will reject all read requests for a shard until they have bootstrapped that shard (which has to happen everytime the node is restarted.)
 
 **Note**: Any bootstrappers configuration that does not include the `peers` bootstrapper will be unable to handle dynamic placement changes of any kind.
 

From d9886331463747cdebb991beeca1c95cdbda0317 Mon Sep 17 00:00:00 2001
From: Richard Artoul
Date: Fri, 28 Sep 2018 11:30:51 -0400
Subject: [PATCH 2/4] add block quote

---
 docs/operational_guide/bootstrapping.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/operational_guide/bootstrapping.md b/docs/operational_guide/bootstrapping.md
index d275a197ec..5bfc93793a 100644
--- a/docs/operational_guide/bootstrapping.md
+++ b/docs/operational_guide/bootstrapping.md
@@ -71,7 +71,7 @@ In this case, the `peers` bootstrapper running on node A will not be able to ful
 └─────────────────────────┘ └───────────────────────┘ └──────────────────────┘
 ```
 
-Note that a bootstrap consistency level of majority is the default value, but can be modified by changing the value of the key `m3db.client.bootstrap-consistency-level` in [etcd](https://coreos.com/etcd/) to one of: `none`, `one`, `unstrict_majority` (attempt to read from majority, but settle for less if any errors occur), `majority` (strict majority), and `all`. For example, if an entire cluster with a replication factor of 3 was restarted simultaneously, all the nodes would get stuck in an infinite loop trying to peer bootstrap from each other and not achieving majority until an operator modified this value. Note that this can happen even if all the shards were in the `Available` state because M3DB nodes will reject all read requests for a shard until they have bootstrapped that shard (which has to happen everytime the node is restarted.)
+Note that a bootstrap consistency level of `majority` is the default value, but can be modified by changing the value of the key `m3db.client.bootstrap-consistency-level` in [etcd](https://coreos.com/etcd/) to one of: `none`, `one`, `unstrict_majority` (attempt to read from majority, but settle for less if any errors occur), `majority` (strict majority), and `all`. For example, if an entire cluster with a replication factor of 3 was restarted simultaneously, all the nodes would get stuck in an infinite loop trying to peer bootstrap from each other and not achieving majority until an operator modified this value. Note that this can happen even if all the shards were in the `Available` state because M3DB nodes will reject all read requests for a shard until they have bootstrapped that shard (which has to happen everytime the node is restarted.)
 
 **Note**: Any bootstrappers configuration that does not include the `peers` bootstrapper will be unable to handle dynamic placement changes of any kind.
 

From 06c183dd1a94078974245ee2d7dc9094f71323c4 Mon Sep 17 00:00:00 2001
From: Richard Artoul
Date: Fri, 28 Sep 2018 11:31:17 -0400
Subject: [PATCH 3/4] remove period

---
 docs/operational_guide/bootstrapping.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/operational_guide/bootstrapping.md b/docs/operational_guide/bootstrapping.md
index 5bfc93793a..58019f4f94 100644
--- a/docs/operational_guide/bootstrapping.md
+++ b/docs/operational_guide/bootstrapping.md
@@ -71,7 +71,7 @@ In this case, the `peers` bootstrapper running on node A will not be able to ful
 └─────────────────────────┘ └───────────────────────┘ └──────────────────────┘
 ```
 
-Note that a bootstrap consistency level of `majority` is the default value, but can be modified by changing the value of the key `m3db.client.bootstrap-consistency-level` in [etcd](https://coreos.com/etcd/) to one of: `none`, `one`, `unstrict_majority` (attempt to read from majority, but settle for less if any errors occur), `majority` (strict majority), and `all`. For example, if an entire cluster with a replication factor of 3 was restarted simultaneously, all the nodes would get stuck in an infinite loop trying to peer bootstrap from each other and not achieving majority until an operator modified this value. Note that this can happen even if all the shards were in the `Available` state because M3DB nodes will reject all read requests for a shard until they have bootstrapped that shard (which has to happen everytime the node is restarted.)
+Note that a bootstrap consistency level of `majority` is the default value, but can be modified by changing the value of the key `m3db.client.bootstrap-consistency-level` in [etcd](https://coreos.com/etcd/) to one of: `none`, `one`, `unstrict_majority` (attempt to read from majority, but settle for less if any errors occur), `majority` (strict majority), and `all`. For example, if an entire cluster with a replication factor of 3 was restarted simultaneously, all the nodes would get stuck in an infinite loop trying to peer bootstrap from each other and not achieving majority until an operator modified this value. Note that this can happen even if all the shards were in the `Available` state because M3DB nodes will reject all read requests for a shard until they have bootstrapped that shard (which has to happen everytime the node is restarted)
 
 **Note**: Any bootstrappers configuration that does not include the `peers` bootstrapper will be unable to handle dynamic placement changes of any kind.
 

From b1cc6119ad902f249b88e7d9c0d33910664e29d1 Mon Sep 17 00:00:00 2001
From: Richard Artoul
Date: Fri, 28 Sep 2018 11:34:46 -0400
Subject: [PATCH 4/4] Add period to end of sentence

---
 docs/operational_guide/bootstrapping.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/operational_guide/bootstrapping.md b/docs/operational_guide/bootstrapping.md
index 58019f4f94..7e138fb05a 100644
--- a/docs/operational_guide/bootstrapping.md
+++ b/docs/operational_guide/bootstrapping.md
@@ -71,7 +71,7 @@ In this case, the `peers` bootstrapper running on node A will not be able to ful
 └─────────────────────────┘ └───────────────────────┘ └──────────────────────┘
 ```
 
-Note that a bootstrap consistency level of `majority` is the default value, but can be modified by changing the value of the key `m3db.client.bootstrap-consistency-level` in [etcd](https://coreos.com/etcd/) to one of: `none`, `one`, `unstrict_majority` (attempt to read from majority, but settle for less if any errors occur), `majority` (strict majority), and `all`. For example, if an entire cluster with a replication factor of 3 was restarted simultaneously, all the nodes would get stuck in an infinite loop trying to peer bootstrap from each other and not achieving majority until an operator modified this value. Note that this can happen even if all the shards were in the `Available` state because M3DB nodes will reject all read requests for a shard until they have bootstrapped that shard (which has to happen everytime the node is restarted)
+Note that a bootstrap consistency level of `majority` is the default value, but can be modified by changing the value of the key `m3db.client.bootstrap-consistency-level` in [etcd](https://coreos.com/etcd/) to one of: `none`, `one`, `unstrict_majority` (attempt to read from majority, but settle for less if any errors occur), `majority` (strict majority), and `all`. For example, if an entire cluster with a replication factor of 3 was restarted simultaneously, all the nodes would get stuck in an infinite loop trying to peer bootstrap from each other and not achieving majority until an operator modified this value. Note that this can happen even if all the shards were in the `Available` state because M3DB nodes will reject all read requests for a shard until they have bootstrapped that shard (which has to happen everytime the node is restarted).
 
 **Note**: Any bootstrappers configuration that does not include the `peers` bootstrapper will be unable to handle dynamic placement changes of any kind.
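
For reference, the key these patches document can be written with any etcd client. Below is a minimal sketch in Go using the official etcd v3 client, assuming etcd is reachable at `localhost:2379` without auth or TLS and that the value lives as a plain string at exactly `m3db.client.bootstrap-consistency-level`; M3's KV layer may namespace or encode keys differently in a real deployment, so verify the key path and encoding against your cluster before relying on this.

```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Assumption: etcd is reachable at localhost:2379 without auth/TLS.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("connecting to etcd: %v", err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Relax the bootstrap consistency level so a simultaneously restarted
	// cluster can peer-bootstrap without waiting for majority. Valid values
	// per the docs above: none, one, unstrict_majority, majority, all.
	// Assumption: the value is stored as a plain string; M3's KV layer may
	// wrap values in its own encoding, so confirm before using this as-is.
	key := "m3db.client.bootstrap-consistency-level"
	if _, err := cli.Put(ctx, key, "unstrict_majority"); err != nil {
		log.Fatalf("updating %s: %v", key, err)
	}

	log.Printf("%s set to unstrict_majority", key)
}
```

Since the relaxed level trades read consistency for availability, an operator would typically set the value back to `majority` once all nodes have finished bootstrapping.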