This repository has been archived by the owner on Mar 31, 2024. It is now read-only.

Fixed some typos (elastic#125802) (elastic#126733)
Co-authored-by: Kibana Machine <[email protected]>
Co-authored-by: Kaarina Tungseth <[email protected]>
(cherry picked from commit fc6897c)

Co-authored-by: Tobias Stadler <[email protected]>
kibanamachine and tobiasstadler authored Mar 2, 2022
1 parent ac7da14 commit 626e08d
Showing 12 changed files with 14 additions and 14 deletions.
2 changes: 1 addition & 1 deletion docs/api/saved-objects/resolve_import_errors.asciidoc
@@ -25,7 +25,7 @@ To resolve errors, you can:
==== Path parameters

`space_id`::
-(Optional, string) An identifier for the <<xpack-spaces,space>>. When `space_id` is unspecfied in the URL, the default space is used.
+(Optional, string) An identifier for the <<xpack-spaces,space>>. When `space_id` is unspecified in the URL, the default space is used.

[[saved-objects-api-resolve-import-errors-query-params]]
==== Query parameters
@@ -68,7 +68,7 @@ Execute the <<spaces-api-copy-saved-objects,copy saved objects to space API>>, w
`id`::::
(Required, string) The saved object ID.
`overwrite`::::
-(Required, boolean) When set to `true`, the saved object from the source space (desigated by the <<spaces-api-resolve-copy-saved-objects-conflicts-path-params, `space_id` path parameter>>) overwrites the conflicting object in the destination space. When set to `false`, this does nothing.
+(Required, boolean) When set to `true`, the saved object from the source space (designated by the <<spaces-api-resolve-copy-saved-objects-conflicts-path-params, `space_id` path parameter>>) overwrites the conflicting object in the destination space. When set to `false`, this does nothing.
`destinationId`::::
(Optional, string) Specifies the destination ID that the copied object should have, if different from the current ID.
`ignoreMissingReferences`:::
2 changes: 1 addition & 1 deletion docs/api/upgrade-assistant/default-field.asciidoc
@@ -26,7 +26,7 @@ GET /api/upgrade_assistant/add_query_default_field/myIndex
// KIBANA

<1> A required array of {es} field types that generate the list of fields.
-<2> An optional array of additional field names, dot-deliminated.
+<2> An optional array of additional field names, dot-delimited.

To add the `index.query.default_field` index setting to the specified index, {kib} generates an array of all fields from the index mapping.
The fields contain the types specified in `fieldTypes`. {kib} appends any other fields specified in `otherFields` to the array of default fields.
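For context, the request these callouts annotate takes roughly the following shape (a sketch reconstructed from the parameter descriptions above; the index name and field values are illustrative):

```
GET /api/upgrade_assistant/add_query_default_field/myIndex
{
  "fieldTypes": ["text", "keyword"],
  "otherFields": ["myField.subField"]
}
```

Here `fieldTypes` drives the generated field list and `otherFields` appends extra dot-delimited field names, as described above.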
2 changes: 1 addition & 1 deletion docs/apm/service-maps.asciidoc
@@ -84,7 +84,7 @@ image:apm/images/red-service.png[APM red service]:: Max anomaly score **≥75**.
[role="screenshot"]
image::apm/images/apm-service-map-anomaly.png[Example view of anomaly scores on service maps in the APM app]

-If an anomaly has been detected, click *view anomalies* to view the anomaly detection metric viewier in the Machine learning app.
+If an anomaly has been detected, click *view anomalies* to view the anomaly detection metric viewer in the Machine learning app.
This time series analysis will display additional details on the severity and time of the detected anomalies.

To learn how to create a machine learning job, see <<machine-learning-integration,machine learning integration>>.
@@ -221,7 +221,7 @@ These are the contracts exposed by the core services for each lifecycle:
[cols=",,",options="header",]
|===
|lifecycle |server contract|browser contract
-|_contructor_
+|_constructor_
|{kib-repo}blob/{branch}/docs/development/core/server/kibana-plugin-core-server.plugininitializercontext.md[PluginInitializerContext]
|{kib-repo}blob/{branch}/docs/development/core/public/kibana-plugin-core-public.plugininitializercontext.md[PluginInitializerContext]

2 changes: 1 addition & 1 deletion docs/developer/best-practices/typescript.asciidoc
@@ -51,7 +51,7 @@ Additionally, in order to migrate into project refs, you also need to make sure
],
"references": [
{ "path": "../../core/tsconfig.json" },
-      // add references to other TypeScript projects your plugin dependes on
+      // add references to other TypeScript projects your plugin depends on
]
}
----
@@ -137,4 +137,4 @@ If you only want to run the build once you can run:
node scripts/build_kibana_platform_plugins --validate-limits --focus {pluginId}
-----------

-This command needs to apply production optimizations to get the right sizes, which means that the optimizer will take significantly longer to run and on most developmer machines will consume all of your machines resources for 20 minutes or more. If you'd like to multi-task while this is running you might need to limit the number of workers using the `--max-workers` flag.
+This command needs to apply production optimizations to get the right sizes, which means that the optimizer will take significantly longer to run and on most developer machines will consume all of your machines resources for 20 minutes or more. If you'd like to multi-task while this is running you might need to limit the number of workers using the `--max-workers` flag.
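For example, a capped run might look like the following (a sketch: `myPluginId` is a placeholder plugin id, and 2 workers is an arbitrary choice):

```shell
# Build a single plugin with production optimizations while limiting the
# optimizer to 2 worker processes so the machine stays responsive.
# "myPluginId" is hypothetical -- substitute your plugin's id.
node scripts/build_kibana_platform_plugins --validate-limits --focus myPluginId --max-workers 2
```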
@@ -31,7 +31,7 @@ node scripts/docs.js --open

REST APIs should be documented using the following recommended formats:

-* https://raw.githubusercontent.com/elastic/docs/master/shared/api-ref-ex.asciidoc[API doc templaate]
+* https://raw.githubusercontent.com/elastic/docs/master/shared/api-ref-ex.asciidoc[API doc template]
* https://raw.githubusercontent.com/elastic/docs/master/shared/api-definitions-ex.asciidoc[API object definition template]

[discrete]
@@ -22,7 +22,7 @@ image::images/job_view.png[Jenkins job view showing a test failure]
1. *Git Changes:* the list of commits that were in this build which weren't in the previous build. For Pull Requests this list is calculated by comparing against the most recent Pull Request which was tested, it is not limited to build for this specific Pull Request, so it's not very useful.
2. *Test Results:* A link to the test results screen, and shortcuts to the failed tests. Functional tests capture and store the log output from each specific test, and make it visible at these links. For other test runners only the error message is visible and log output must be tracked down in the *Pipeline Steps*.
3. *Google Cloud Storage (GCS) Upload Report:* Link to the screen which lists out the artifacts uploaded to GCS during this job execution.
-4. *Pipeline Steps:*: A breakdown of the pipline that was executed, along with individual log output for each step in the pipeline.
+4. *Pipeline Steps:*: A breakdown of the pipeline that was executed, along with individual log output for each step in the pipeline.

[discrete]
=== Viewing ciGroup/test Logs
2 changes: 1 addition & 1 deletion docs/setup/connect-to-elasticsearch.asciidoc
@@ -55,7 +55,7 @@ https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} Client docu

If you are running {kib} on our hosted {es} Service,
click *View deployment details* on the *Integrations* view
-to verify your {es} endpoint and Cloud ID, and create API keys for integestion.
+to verify your {es} endpoint and Cloud ID, and create API keys for integration.

[float]
=== Add sample data
@@ -101,7 +101,7 @@ Scaling {kib} instances horizontally requires a higher degree of coordination, w
A recommended strategy is to follow these steps:

1. Produce a <<task-manager-rough-throughput-estimation,rough throughput estimate>> as a guide to provisioning as many {kib} instances as needed. Include any growth in tasks that you predict experiencing in the near future, and a buffer to better address ad-hoc tasks.
-2. After provisioning a deployment, assess whether the provisioned {kib} instances achieve the required throughput by evaluating the <<task-manager-health-monitoring>> as described in <<task-manager-theory-insufficient-throughput, Insufficient throughtput to handle the scheduled workload>>.
+2. After provisioning a deployment, assess whether the provisioned {kib} instances achieve the required throughput by evaluating the <<task-manager-health-monitoring>> as described in <<task-manager-theory-insufficient-throughput, Insufficient throughput to handle the scheduled workload>>.
3. If the throughput is insufficient, and {kib} instances exhibit low resource usage, incrementally scale vertically while <<kibana-page,monitoring>> the impact of these changes.
4. If the throughput is insufficient, and {kib} instances are exhibiting high resource usage, incrementally scale horizontally by provisioning new {kib} instances and reassess.

@@ -412,7 +412,7 @@ This assessment is based on the following:

* Comparing the `last_successful_poll` to the `timestamp` (value of `2021-02-16T11:38:10.077Z`) at the root, where you can see the last polling cycle took place 1 second before the monitoring stats were exposed by the health monitoring API.
* Comparing the `last_polling_delay` to the `timestamp` (value of `2021-02-16T11:38:10.077Z`) at the root, where you can see the last polling cycle delay took place 2 days ago, suggesting {kib} instances are not conflicting often.
-* The `p50` of the `duration` shows that at least 50% of polling cycles take, at most, 13 millisconds to complete.
+* The `p50` of the `duration` shows that at least 50% of polling cycles take, at most, 13 milliseconds to complete.
* Evaluating the `result_frequency_percent_as_number`:
** 80% of the polling cycles completed without claiming any tasks (suggesting that there aren't any overdue tasks).
** 20% completed with Task Manager claiming tasks that were then executed.
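The stats evaluated above are exposed by the Task Manager health monitoring API; one way to inspect the polling-cycle figures is a sketch like the following (the host, credentials, and `jq` filter path are assumptions, so adjust them for your deployment and version):

```shell
# Fetch Task Manager health stats from a local Kibana and extract the
# polling-cycle duration percentiles (p50 etc.).
# localhost:5601, elastic:changeme, and the jq path are illustrative.
curl -s -u elastic:changeme "http://localhost:5601/api/task_manager/_health" \
  | jq '.stats.runtime.value.polling.duration'
```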
@@ -508,7 +508,7 @@ For details on achieving higher throughput by adjusting your scaling strategy, s
Tasks run for too long, overrunning their schedule

*Diagnosis*:
-The <<task-manager-theory-insufficient-throughput,Insufficient throughtput to handle the scheduled workload>> theory analyzed a hypothetical scenario where both drift and load were unusually high.
+The <<task-manager-theory-insufficient-throughput,Insufficient throughput to handle the scheduled workload>> theory analyzed a hypothetical scenario where both drift and load were unusually high.

Suppose an alternate scenario, where `drift` is high, but `load` is not, such as the following:

@@ -688,7 +688,7 @@ Keep in mind that these stats give you a glimpse at a moment in time, and even t
[[task-manager-health-evaluate-the-workload]]
===== Evaluate the Workload

-Predicting the required throughput a deplyment might need to support Task Manager is difficult, as features can schedule an unpredictable number of tasks at a variety of scheduled cadences.
+Predicting the required throughput a deployment might need to support Task Manager is difficult, as features can schedule an unpredictable number of tasks at a variety of scheduled cadences.

<<task-manager-health-monitoring>> provides statistics that make it easier to monitor the adequacy of the existing throughput.
By evaluating the workload, the required throughput can be estimated, which is used when following the Task Manager <<task-manager-scaling-guidance>>.
