From cb858793afc6d65c1fb08c58d53ff7b364a238c1 Mon Sep 17 00:00:00 2001
From: Benjamin Raskin
Date: Mon, 16 Nov 2020 11:17:21 -0500
Subject: [PATCH] [docs] Small changes to website/docs (#2900)

---
 site/content/how_to/other/tsdb.md                         | 4 ++--
 site/content/how_to/use_as_tsdb.md                        | 4 ++--
 site/content/m3db/architecture/commitlogs.md              | 2 +-
 site/content/operational_guide/namespace_configuration.md | 8 ++++----
 site/static/about/index.html                              | 2 +-
 site/static/index.html                                    | 8 ++++----
 6 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/site/content/how_to/other/tsdb.md b/site/content/how_to/other/tsdb.md
index 0ca766b2e1..b02b93d4bf 100644
--- a/site/content/how_to/other/tsdb.md
+++ b/site/content/how_to/other/tsdb.md
@@ -50,8 +50,8 @@ This would allow users to issue queries that answer questions like:
 TODO(rartoul): Discuss the ability to perform limited amounts of aggregation queries here as well.
 TODO(rartoul): Discuss ID / tags mutability.
-#### Data Points
-Each time series in M3DB stores data as a stream of data points in the form of tuples. Timestamp resolution can be as granular as individual nanoseconds.
+#### Datapoints
+Each time series in M3DB stores data as a stream of datapoints in the form of tuples. Timestamp resolution can be as granular as individual nanoseconds.
 
 The value portion of the tuple is a Protobuf message that matches the configured namespace schema, which requires that all values in the current time series must also match this schema. This limitation may be lifted in the future.
 
 #### Schema Modeling
diff --git a/site/content/how_to/use_as_tsdb.md b/site/content/how_to/use_as_tsdb.md
index 53a5da02be..7de53aa09f 100644
--- a/site/content/how_to/use_as_tsdb.md
+++ b/site/content/how_to/use_as_tsdb.md
@@ -39,9 +39,9 @@ TODO(rartoul): Discuss the ability to perform limited amounts of aggregation que
 TODO(rartoul): Discuss ID / tags mutability.
-### Data Points
+### Datapoints
 
-Each time series in M3DB stores data as a stream of data points in the form of `<timestamp, value>` tuples. Timestamp resolution can be as granular as individual nanoseconds.
+Each time series in M3DB stores data as a stream of datapoints in the form of `<timestamp, value>` tuples. Timestamp resolution can be as granular as individual nanoseconds.
 
 The `value` portion of the tuple is a Protobuf message that matches the configured namespace schema, which requires that all values in the current time series must also match this schema. This limitation may be lifted in the future.
 
diff --git a/site/content/m3db/architecture/commitlogs.md b/site/content/m3db/architecture/commitlogs.md
index a7ba4c068a..c100ed9315 100644
--- a/site/content/m3db/architecture/commitlogs.md
+++ b/site/content/m3db/architecture/commitlogs.md
@@ -55,7 +55,7 @@ CommitLogMetadata {
 Commit log files are compacted via the snapshotting process which (if enabled at the namespace level) will snapshot all data in memory into compressed files which have the same structure as the [fileset files](/docs/m3db/architecture/storage) but are stored in a different location. Once these snapshot files are created, then all the commit log files whose data are captured by the snapshot files can be deleted. This can result in significant disk savings for M3DB nodes running with large block sizes and high write volume where the size of the (uncompressed) commit logs can quickly get out of hand.
 
-In addition, since the snapshot files are already compressed, bootstrapping from them is much faster than bootstrapping from raw commit log files because the individual data points don't need to be decoded and then M3TSZ encoded. The M3DB node just needs to read the raw bytes off disk and load them into memory.
+In addition, since the snapshot files are already compressed, bootstrapping from them is much faster than bootstrapping from raw commit log files because the individual datapoints don't need to be decoded and then M3TSZ encoded. The M3DB node just needs to read the raw bytes off disk and load them into memory.
 
 ### Cleanup
 
diff --git a/site/content/operational_guide/namespace_configuration.md b/site/content/operational_guide/namespace_configuration.md
index 7d49fed8f5..3e84abd1e2 100644
--- a/site/content/operational_guide/namespace_configuration.md
+++ b/site/content/operational_guide/namespace_configuration.md
@@ -201,15 +201,15 @@ Should match the databases [blocksize](#blocksize) for optimal memory usage.
 Can be modified without creating a new namespace: `no`
 
 ### aggregationOptions
-Options for the Coordinator to use to make decisions around how to aggregate data points.
+Options for the Coordinator to use to make decisions around how to aggregate datapoints.
 
 Can be modified without creating a new namespace: `yes`
 
 #### aggregations
-One or more set of instructions on how data points should be aggregated within the namespace.
+One or more sets of instructions on how datapoints should be aggregated within the namespace.
 
 ##### aggregated
-Whether data points are aggregated.
+Whether datapoints are aggregated.
 
 ##### attributes
 If aggregated is true, specifies how to aggregate data.
@@ -221,4 +221,4 @@ The time range to aggregate data across.
 Options related to downsampling data
 
 ###### _all_
-Whether to send data points to this namespace. If false, the coordinator will not auto-aggregate incoming data points and data points must be sent the namespace via rules. Defaults to true.
\ No newline at end of file
+Whether to send datapoints to this namespace. If false, the coordinator will not auto-aggregate incoming datapoints and datapoints must be sent to the namespace via rules. Defaults to true.
\ No newline at end of file
diff --git a/site/static/about/index.html b/site/static/about/index.html
index ce6d8255f8..72de081cdd 100644
--- a/site/static/about/index.html
+++ b/site/static/about/index.html
@@ -154,7 +154,7 @@

 Ingestion & streaming aggregation
 
-Ingest and streaming aggregation of metrics with dynamic configuration
+Ingest and aggregate a stream of metrics with dynamic configuration

diff --git a/site/static/index.html b/site/static/index.html
index f2094d82c9..903a498c19 100644
--- a/site/static/index.html
+++ b/site/static/index.html
@@ -181,7 +181,7 @@

-"At a scale of 1.5 million data points ingested per second, it started getting very expensive to monitor our metrics and we had to turn down our replication factor (RF) to 2 on Cassandra. With M3DB, we were able to bring RF back to 3 while also cutting down significantly on hardware / storage costs."
+"At a scale of 1.5 million datapoints ingested per second, it started getting very expensive to monitor our metrics and we had to turn down our replication factor (RF) to 2 on Cassandra. With M3DB, we were able to bring RF back to 3 while also cutting down significantly on hardware / storage costs."

 person-Prateek
@@ -355,7 +355,7 @@

-"When querying millions or billions of metrics, you want something flexible and sublinear in speed as the query’s become longer and longer the more distinct values you have. This led us to the creation of M3DB’s inverted index."
+"When querying millions or billions of metrics, you want something flexible and sublinear in speed as the queries become longer and longer the more distinct values you have. This led us to the creation of M3DB’s inverted index."

 person-rob_skillington
@@ -364,7 +364,7 @@

Rob Skillington

-CTO and Co-Founder of Chraonosphere, Former Tech Lead at Uber
+CTO and Co-Founder of Chronosphere, Former Tech Lead at Uber

@@ -485,7 +485,7 @@

-"At a scale of 1.5 million data points ingested per second, it started getting very expensive to monitor our metrics and we had to turn down our replication factor (RF) to 2 on Cassandra. With M3DB, we were able to bring RF back to 3 while also cutting down significantly on hardware / storage costs."
+"At a scale of 1.5 million datapoints ingested per second, it started getting very expensive to monitor our metrics and we had to turn down our replication factor (RF) to 2 on Cassandra. With M3DB, we were able to bring RF back to 3 while also cutting down significantly on hardware / storage costs."

person-/Prateek
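
The aggregationOptions fields documented in the namespace_configuration.md hunk above nest inside a namespace definition roughly as sketched below. This is an illustrative fragment based only on the headings in that file (aggregations, aggregated, attributes, downsampleOptions, all); the `resolution` value and exact nesting are assumptions, not verified defaults.

```yaml
# Illustrative sketch only: how the documented fields relate to one another.
aggregationOptions:
  aggregations:
    - aggregated: true         # whether datapoints are aggregated
      attributes:
        resolution: 1m         # the time range to aggregate data across (example value)
        downsampleOptions:
          all: true            # send datapoints to this namespace (documented default)
```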