From cbc0c37cc798708364993803e371758d6db19502 Mon Sep 17 00:00:00 2001
From: Carlos Neto
Date: Thu, 20 Jul 2023 22:02:14 -0300
Subject: [PATCH] * [DOC] Added 'Search' nodes description in 'Creating a
 cluster' page * [DOC] fix the tabulation of searchable snapshots snippet
 example

Signed-off-by: Carlos Neto
---
 .../snapshots/searchable_snapshot.md | 10 +++++-----
 _tuning-your-cluster/index.md        |  1 +
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md b/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md
index 0ddeae20d36..9d708a72b78 100644
--- a/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md
+++ b/_tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot.md
@@ -18,14 +18,14 @@ The searchable snapshot feature incorporates techniques like caching frequently
 
 To configure the searchable snapshots feature, create a node in your opensearch.yml file and define the node role as `search`:
 
- ```bash
- node.name: snapshots-node
- node.roles: [ search ]
- ```
+```yaml
+node.name: snapshots-node
+node.roles: [ search ]
+```
 
 If you're running Docker, you can create a node with the `search` node role by adding the line `- node.roles=search` to your `docker-compose.yml` file:
 
-```bash
+```yaml
 version: '3'
 services:
   opensearch-node1:
diff --git a/_tuning-your-cluster/index.md b/_tuning-your-cluster/index.md
index 8172cfff18f..07e13f67307 100644
--- a/_tuning-your-cluster/index.md
+++ b/_tuning-your-cluster/index.md
@@ -34,6 +34,7 @@ Cluster manager eligible | Elects one node among them as the cluster manager nod
 Data | Stores and searches data. Performs all data-related operations (indexing, searching, aggregating) on local shards. These are the worker nodes of your cluster and need more disk space than any other node type. | As you add data nodes, keep them balanced between zones. For example, if you have three zones, add data nodes in multiples of three, one for each zone. We recommend using storage and RAM-heavy nodes.
 Ingest | Pre-processes data before storing it in the cluster. Runs an ingest pipeline that transforms your data before adding it to an index. | If you plan to ingest a lot of data and run complex ingest pipelines, we recommend you use dedicated ingest nodes. You can also optionally offload your indexing from the data nodes so that your data nodes are used exclusively for searching and aggregating.
 Coordinating | Delegates client requests to the shards on the data nodes, collects and aggregates the results into one final result, and sends this result back to the client. | A couple of dedicated coordinating-only nodes is appropriate to prevent bottlenecks for search-heavy workloads. We recommend using CPUs with as many cores as you can.
+Search | Provides access to [searchable snapshots]({{site.url}}{{site.baseurl}}/tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot/). Incorporates techniques like caching frequently used data segments and evicting the least used ones in order to access `remote_snapshot` indexes (stored in a remote long-term storage source, for example, AWS S3 or GCS). | The indexes allocated to search nodes act as a cache for snapshots, so we recommend dedicated nodes with more compute (CPU and memory) than storage capacity (hard disk).
 Dynamic | Delegates a specific node for custom work, such as machine learning (ML) tasks, preventing the consumption of resources from data nodes and therefore not affecting any OpenSearch functionality.
 
 By default, each node is a cluster-manager-eligible, data, ingest, and coordinating node. Deciding on the number of nodes, assigning node types, and choosing the hardware for each node type depends on your use case. You must take into account factors like the amount of time you want to hold on to your data, the average size of your documents, your typical workload (indexing, searches, aggregations), your expected price-performance ratio, your risk tolerance, and so on.
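
For reference, below is a minimal `docker-compose.yml` sketch of a dedicated search node along the lines of the Docker instructions touched by this patch. Only the `- node.roles=search` line comes from the documentation above; the service name, image tag, and the remaining cluster settings are illustrative assumptions.

```yaml
# Sketch of a dedicated search node service in docker-compose.yml.
# Only `node.roles=search` is taken from the documentation change above;
# the service name, image tag, and other settings are illustrative.
version: '3'
services:
  opensearch-search-node:
    image: opensearchproject/opensearch:latest
    environment:
      - cluster.name=opensearch-cluster
      - node.name=opensearch-search-node
      - node.roles=search
      - discovery.seed_hosts=opensearch-node1
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
```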