diff --git a/docs/api/saved-objects/resolve_import_errors.asciidoc b/docs/api/saved-objects/resolve_import_errors.asciidoc index 7a57e03875e35..162e9589e4f9e 100644 --- a/docs/api/saved-objects/resolve_import_errors.asciidoc +++ b/docs/api/saved-objects/resolve_import_errors.asciidoc @@ -25,7 +25,7 @@ To resolve errors, you can: ==== Path parameters `space_id`:: - (Optional, string) An identifier for the <>. When `space_id` is unspecfied in the URL, the default space is used. + (Optional, string) An identifier for the <>. When `space_id` is unspecified in the URL, the default space is used. [[saved-objects-api-resolve-import-errors-query-params]] ==== Query parameters diff --git a/docs/api/spaces-management/resolve_copy_saved_objects_conflicts.asciidoc b/docs/api/spaces-management/resolve_copy_saved_objects_conflicts.asciidoc index d79df2c085b19..9d26f9656d3f6 100644 --- a/docs/api/spaces-management/resolve_copy_saved_objects_conflicts.asciidoc +++ b/docs/api/spaces-management/resolve_copy_saved_objects_conflicts.asciidoc @@ -68,7 +68,7 @@ Execute the <>, w `id`:::: (Required, string) The saved object ID. `overwrite`:::: - (Required, boolean) When set to `true`, the saved object from the source space (desigated by the <>) overwrites the conflicting object in the destination space. When set to `false`, this does nothing. + (Required, boolean) When set to `true`, the saved object from the source space (designated by the <>) overwrites the conflicting object in the destination space. When set to `false`, this does nothing. `destinationId`:::: (Optional, string) Specifies the destination ID that the copied object should have, if different from the current ID. 
`ignoreMissingReferences`::: diff --git a/docs/api/upgrade-assistant/default-field.asciidoc b/docs/api/upgrade-assistant/default-field.asciidoc index 8bdcd359d5668..bbe44d894963b 100644 --- a/docs/api/upgrade-assistant/default-field.asciidoc +++ b/docs/api/upgrade-assistant/default-field.asciidoc @@ -26,7 +26,7 @@ GET /api/upgrade_assistant/add_query_default_field/myIndex // KIBANA <1> A required array of {es} field types that generate the list of fields. -<2> An optional array of additional field names, dot-deliminated. +<2> An optional array of additional field names, dot-delimited. To add the `index.query.default_field` index setting to the specified index, {kib} generates an array of all fields from the index mapping. The fields contain the types specified in `fieldTypes`. {kib} appends any other fields specified in `otherFields` to the array of default fields. diff --git a/docs/apm/service-maps.asciidoc b/docs/apm/service-maps.asciidoc index f76b9976dd1d2..8a2beef22b6bd 100644 --- a/docs/apm/service-maps.asciidoc +++ b/docs/apm/service-maps.asciidoc @@ -84,7 +84,7 @@ image:apm/images/red-service.png[APM red service]:: Max anomaly score **≥75**. [role="screenshot"] image::apm/images/apm-service-map-anomaly.png[Example view of anomaly scores on service maps in the APM app] -If an anomaly has been detected, click *view anomalies* to view the anomaly detection metric viewier in the Machine learning app. +If an anomaly has been detected, click *view anomalies* to view the anomaly detection metric viewer in the Machine learning app. This time series analysis will display additional details on the severity and time of the detected anomalies. To learn how to create a machine learning job, see <>. 
diff --git a/docs/developer/architecture/kibana-platform-plugin-api.asciidoc b/docs/developer/architecture/kibana-platform-plugin-api.asciidoc index 2005a90bb87bb..9cf60cda76f75 100644 --- a/docs/developer/architecture/kibana-platform-plugin-api.asciidoc +++ b/docs/developer/architecture/kibana-platform-plugin-api.asciidoc @@ -221,7 +221,7 @@ These are the contracts exposed by the core services for each lifecycle: [cols=",,",options="header",] |=== |lifecycle |server contract|browser contract -|_contructor_ +|_constructor_ |{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.plugininitializercontext.md[PluginInitializerContext] |{kib-repo}blob/{branch}/docs/development/core/public/kibana-plugin-core-public.plugininitializercontext.md[PluginInitializerContext] diff --git a/docs/developer/best-practices/typescript.asciidoc b/docs/developer/best-practices/typescript.asciidoc index 2631ee717c3d5..92b6818a09865 100644 --- a/docs/developer/best-practices/typescript.asciidoc +++ b/docs/developer/best-practices/typescript.asciidoc @@ -51,7 +51,7 @@ Additionally, in order to migrate into project refs, you also need to make sure ], "references": [ { "path": "../../core/tsconfig.json" }, - // add references to other TypeScript projects your plugin dependes on + // add references to other TypeScript projects your plugin depends on ] } ---- diff --git a/docs/developer/contributing/development-ci-metrics.asciidoc b/docs/developer/contributing/development-ci-metrics.asciidoc index 3a133e64ea528..2905bd72a501f 100644 --- a/docs/developer/contributing/development-ci-metrics.asciidoc +++ b/docs/developer/contributing/development-ci-metrics.asciidoc @@ -137,4 +137,4 @@ If you only want to run the build once you can run: node scripts/build_kibana_platform_plugins --validate-limits --focus {pluginId} ----------- -This command needs to apply production optimizations to get the right sizes, which means that the optimizer will take significantly longer to run 
and on most developmer machines will consume all of your machines resources for 20 minutes or more. If you'd like to multi-task while this is running you might need to limit the number of workers using the `--max-workers` flag. \ No newline at end of file +This command needs to apply production optimizations to get the right sizes, which means that the optimizer will take significantly longer to run and on most developer machines will consume all of your machine's resources for 20 minutes or more. If you'd like to multi-task while this is running you might need to limit the number of workers using the `--max-workers` flag. \ No newline at end of file diff --git a/docs/developer/contributing/development-documentation.asciidoc b/docs/developer/contributing/development-documentation.asciidoc index 7137d5bad051c..801d0527cc2b7 100644 --- a/docs/developer/contributing/development-documentation.asciidoc +++ b/docs/developer/contributing/development-documentation.asciidoc @@ -31,7 +31,7 @@ node scripts/docs.js --open REST APIs should be documented using the following recommended formats: -* https://raw.githubusercontent.com/elastic/docs/master/shared/api-ref-ex.asciidoc[API doc templaate] +* https://raw.githubusercontent.com/elastic/docs/master/shared/api-ref-ex.asciidoc[API doc template] * https://raw.githubusercontent.com/elastic/docs/master/shared/api-definitions-ex.asciidoc[API object definition template] [discrete] diff --git a/docs/developer/contributing/interpreting-ci-failures.asciidoc b/docs/developer/contributing/interpreting-ci-failures.asciidoc index ffbe448d79a44..eead720f03c60 100644 --- a/docs/developer/contributing/interpreting-ci-failures.asciidoc +++ b/docs/developer/contributing/interpreting-ci-failures.asciidoc @@ -22,7 +22,7 @@ image::images/job_view.png[Jenkins job view showing a test failure] 1. *Git Changes:* the list of commits that were in this build which weren't in the previous build. 
For Pull Requests this list is calculated by comparing against the most recent Pull Request which was tested, it is not limited to build for this specific Pull Request, so it's not very useful. 2. *Test Results:* A link to the test results screen, and shortcuts to the failed tests. Functional tests capture and store the log output from each specific test, and make it visible at these links. For other test runners only the error message is visible and log output must be tracked down in the *Pipeline Steps*. 3. *Google Cloud Storage (GCS) Upload Report:* Link to the screen which lists out the artifacts uploaded to GCS during this job execution. -4. *Pipeline Steps:*: A breakdown of the pipline that was executed, along with individual log output for each step in the pipeline. +4. *Pipeline Steps:* A breakdown of the pipeline that was executed, along with individual log output for each step in the pipeline. [discrete] === Viewing ciGroup/test Logs diff --git a/docs/setup/connect-to-elasticsearch.asciidoc b/docs/setup/connect-to-elasticsearch.asciidoc index 1d698e9087937..9e1ee62f093fe 100644 --- a/docs/setup/connect-to-elasticsearch.asciidoc +++ b/docs/setup/connect-to-elasticsearch.asciidoc @@ -55,7 +55,7 @@ https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} Client docu If you are running {kib} on our hosted {es} Service, click *View deployment details* on the *Integrations* view -to verify your {es} endpoint and Cloud ID, and create API keys for integestion. +to verify your {es} endpoint and Cloud ID, and create API keys for integration. 
[float] === Add sample data diff --git a/docs/user/production-considerations/task-manager-production-considerations.asciidoc b/docs/user/production-considerations/task-manager-production-considerations.asciidoc index 672c310f138e9..28c5f6e4f14c8 100644 --- a/docs/user/production-considerations/task-manager-production-considerations.asciidoc +++ b/docs/user/production-considerations/task-manager-production-considerations.asciidoc @@ -101,7 +101,7 @@ Scaling {kib} instances horizontally requires a higher degree of coordination, w A recommended strategy is to follow these steps: 1. Produce a <> as a guide to provisioning as many {kib} instances as needed. Include any growth in tasks that you predict experiencing in the near future, and a buffer to better address ad-hoc tasks. -2. After provisioning a deployment, assess whether the provisioned {kib} instances achieve the required throughput by evaluating the <> as described in <>. +2. After provisioning a deployment, assess whether the provisioned {kib} instances achieve the required throughput by evaluating the <> as described in <>. 3. If the throughput is insufficient, and {kib} instances exhibit low resource usage, incrementally scale vertically while <> the impact of these changes. 4. If the throughput is insufficient, and {kib} instances are exhibiting high resource usage, incrementally scale horizontally by provisioning new {kib} instances and reassess. 
diff --git a/docs/user/production-considerations/task-manager-troubleshooting.asciidoc b/docs/user/production-considerations/task-manager-troubleshooting.asciidoc index a22d46902f54c..606dd3c8a24ee 100644 --- a/docs/user/production-considerations/task-manager-troubleshooting.asciidoc +++ b/docs/user/production-considerations/task-manager-troubleshooting.asciidoc @@ -412,7 +412,7 @@ This assessment is based on the following: * Comparing the `last_successful_poll` to the `timestamp` (value of `2021-02-16T11:38:10.077Z`) at the root, where you can see the last polling cycle took place 1 second before the monitoring stats were exposed by the health monitoring API. * Comparing the `last_polling_delay` to the `timestamp` (value of `2021-02-16T11:38:10.077Z`) at the root, where you can see the last polling cycle delay took place 2 days ago, suggesting {kib} instances are not conflicting often. -* The `p50` of the `duration` shows that at least 50% of polling cycles take, at most, 13 millisconds to complete. +* The `p50` of the `duration` shows that at least 50% of polling cycles take, at most, 13 milliseconds to complete. * Evaluating the `result_frequency_percent_as_number`: ** 80% of the polling cycles completed without claiming any tasks (suggesting that there aren't any overdue tasks). ** 20% completed with Task Manager claiming tasks that were then executed. @@ -508,7 +508,7 @@ For details on achieving higher throughput by adjusting your scaling strategy, s Tasks run for too long, overrunning their schedule *Diagnosis*: -The <> theory analyzed a hypothetical scenario where both drift and load were unusually high. +The <> theory analyzed a hypothetical scenario where both drift and load were unusually high. 
Suppose an alternate scenario, where `drift` is high, but `load` is not, such as the following: @@ -688,7 +688,7 @@ Keep in mind that these stats give you a glimpse at a moment in time, and even t [[task-manager-health-evaluate-the-workload]] ===== Evaluate the Workload -Predicting the required throughput a deplyment might need to support Task Manager is difficult, as features can schedule an unpredictable number of tasks at a variety of scheduled cadences. +Predicting the required throughput a deployment might need to support Task Manager is difficult, as features can schedule an unpredictable number of tasks at a variety of scheduled cadences. <> provides statistics that make it easier to monitor the adequacy of the existing throughput. By evaluating the workload, the required throughput can be estimated, which is used when following the Task Manager <>.