[DOCS] Fix typos (elastic#83895) (elastic#83973)
(cherry picked from commit e3deacf)

Co-authored-by: Tobias Stadler <[email protected]>
jrodewig and tobiasstadler authored Feb 15, 2022
1 parent 664d98b commit c142ef1
Showing 16 changed files with 16 additions and 16 deletions.
@@ -9,7 +9,7 @@ The following variables are available in all watcher contexts.
 The id of the watch.

 `ctx['id']` (`String`, read-only)::
-The server generated unique identifer for the run watch.
+The server generated unique identifier for the run watch.

 `ctx['metadata']` (`Map`, read-only)::
 Metadata can be added to the top level of the watch definition. This
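For context, the section touched here documents the variables available in watcher script contexts. A minimal sketch of a watch whose condition script reads `ctx.metadata` (the watch id, interval, and metadata values are hypothetical):

[source,console]
----
PUT _watcher/watch/example_watch
{
  "trigger": { "schedule": { "interval": "10m" } },
  "metadata": { "max_errors": 5 },
  "condition": {
    "script": {
      "source": "return ctx.metadata.max_errors > 0"
    }
  },
  "actions": {}
}
----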
2 changes: 1 addition & 1 deletion docs/plugins/repository.asciidoc
@@ -6,7 +6,7 @@ functionality in Elasticsearch by adding repositories backed by the cloud or
 by distributed file systems:

 [discrete]
-==== Offical repository plugins
+==== Official repository plugins

 NOTE: Support for S3, GCS and Azure repositories is now bundled in {es} by
 default.
@@ -366,7 +366,7 @@ The regex above is easier to understand as:
 [discrete]
 === Definition

-The `pattern` anlayzer consists of:
+The `pattern` analyzer consists of:

 Tokenizer::
 * <<analysis-pattern-tokenizer,Pattern Tokenizer>>
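The section being fixed lists the building blocks of the built-in `pattern` analyzer. A sketch of recreating it as a `custom` analyzer from those parts (index, tokenizer, and analyzer names are hypothetical):

[source,console]
----
PUT /pattern_example
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "split_on_non_word": {
          "type": "pattern",
          "pattern": "\\W+"
        }
      },
      "analyzer": {
        "rebuilt_pattern": {
          "tokenizer": "split_on_non_word",
          "filter": [ "lowercase" ]
        }
      }
    }
  }
}
----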
@@ -44,7 +44,7 @@ The filter produces the following tokens.

 The API response contains the position and offsets of each output token. Note
 the `predicate_token_filter` filter does not change the tokens' original
-positions or offets.
+positions or offsets.

 .*Response*
 [%collapsible]
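For reference, a sketch of exercising `predicate_token_filter` through the `_analyze` API; the script shown (keeping only tokens longer than three characters) is a hypothetical example, not the one from the page being edited:

[source,console]
----
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [
    {
      "type": "predicate_token_filter",
      "script": {
        "source": "token.term.length() > 3"
      }
    }
  ],
  "text": "the fox jumps over the lazy dog"
}
----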
2 changes: 1 addition & 1 deletion docs/reference/cat/trainedmodel.asciidoc
@@ -72,7 +72,7 @@ The estimated heap size to keep the trained model in memory.

 `id`:::
 (Default)
-Idetifier for the trained model.
+Identifier for the trained model.

 `ingest.count`, `ic`, `ingestCount`:::
 The total number of documents that are processed by the model.
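The `id` and `ingest.count` columns described in this section can be requested explicitly via the cat API, e.g. (a sketch):

[source,console]
----
GET _cat/ml/trained_models?v=true&h=id,ingest.count
----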
2 changes: 1 addition & 1 deletion docs/reference/cluster/stats.asciidoc
@@ -1096,7 +1096,7 @@ Total size of all file stores across all selected nodes.

 `total_in_bytes`::
 (integer)
-Total size, in bytes, of all file stores across all seleced nodes.
+Total size, in bytes, of all file stores across all selected nodes.

 `free`::
 (<<byte-units, byte units>>)
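The `total_in_bytes` value documented here is reported under the file-store section of the cluster stats response; a sketch of retrieving just that section (`filter_path` narrows the response, `human` adds human-readable variants):

[source,console]
----
GET /_cluster/stats?human&filter_path=nodes.fs
----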
2 changes: 1 addition & 1 deletion docs/reference/commands/keystore.asciidoc
@@ -218,7 +218,7 @@ password.
 [[show-keystore-value]]
 ==== Show settings in the keystore

-To display the value of a setting in the keystorem use the `show` command:
+To display the value of a setting in the keystore use the `show` command:

 [source,sh]
 ----------------------------------------------------------------
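The source block above is truncated by the diff view; for reference, the `show` command takes the setting name as its argument, along the lines of this sketch (the setting name is hypothetical):

[source,sh]
----
bin/elasticsearch-keystore show the.setting.name.to.show
----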
2 changes: 1 addition & 1 deletion docs/reference/graph/explore.asciidoc
@@ -84,7 +84,7 @@ graph as vertices. For example:
 field::: Identifies a field in the documents of interest.
 include::: Identifies the terms of interest that form the starting points
 from which you want to spider out. You do not have to specify a seed query
-if you specify an include clause. The include clause implicitly querys for
+if you specify an include clause. The include clause implicitly queries for
 documents that contain any of the listed terms listed.
 In addition to specifying a simple array of strings, you can also pass
 objects with `term` and `boost` values to boost matches on particular terms.
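A sketch of the `field`/`include` clause this passage describes, using the object form with `term` and `boost` (the index, field, and term values are hypothetical):

[source,console]
----
POST clicklogs/_graph/explore
{
  "vertices": [
    {
      "field": "product",
      "include": [ { "term": "1854873", "boost": 2 } ]
    }
  ]
}
----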
2 changes: 1 addition & 1 deletion docs/reference/how-to/recipes/scoring.asciidoc
@@ -192,7 +192,7 @@ While both options would return similar scores, there are trade-offs:
 <<query-dsl-script-score-query,script_score>> provides a lot of flexibility,
 enabling you to combine the text relevance score with static signals as you
 prefer. On the other hand, the <<rank-feature,`rank_feature` query>> only
-exposes a couple ways to incorporate static signails into the score. However,
+exposes a couple ways to incorporate static signals into the score. However,
 it relies on the <<rank-feature,`rank_feature`>> and
 <<rank-features,`rank_features`>> fields, which index values in a special way
 that allows the <<query-dsl-rank-feature-query,`rank_feature` query>> to skip
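A sketch of the `script_score` side of this trade-off, combining the text relevance score with a static signal (the index and field names are hypothetical; `saturation` is one of the functions available in the `script_score` context):

[source,console]
----
GET /my-index/_search
{
  "query": {
    "script_score": {
      "query": { "match": { "body": "elasticsearch" } },
      "script": {
        "source": "_score * saturation(doc['pagerank'].value, 10)"
      }
    }
  }
}
----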
@@ -13,7 +13,7 @@ TIP: {ess-skip-section}
 ====
 *Details* +
 In previous versions of {es}, in order to register a snapshot repository
-backed by Amazon S3, Google Cloud Storge (GCS) or Microsoft Azure Blob
+backed by Amazon S3, Google Cloud Storage (GCS) or Microsoft Azure Blob
 Storage, you first had to install the corresponding Elasticsearch plugin,
 for example `repository-s3`. These plugins are now included in {es} by
 default.
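With these plugins bundled, registering such a repository needs no prior install step; a sketch (the repository and bucket names are hypothetical):

[source,console]
----
PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket"
  }
}
----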
@@ -12,7 +12,7 @@
 *Details* +
 To reduce the dependency of the JDBC driver onto Elasticsearch classes, the JDBC driver returns geometry data
 as strings using the WKT (well-known text) format instead of classes from the `org.elasticsearch.geometry`.
-Users can choose the geometry library desired to convert the string represantion into a full-blown objects
+Users can choose the geometry library desired to convert the string representation into a full-blown objects
 either such as the `elasticsearch-geo` library (which returned the object `org.elasticsearch.geo` as before),
 jts or spatial4j.
@@ -330,7 +330,7 @@ formatting is based on the {kib} settings.
 The peak number of bytes of memory ever used by the model.
 ====

-==== _Data delay has occured_
+==== _Data delay has occurred_

 `context.message`::
 A preconstructed message for the rule.
2 changes: 1 addition & 1 deletion docs/reference/ml/ml-shared.asciidoc
@@ -995,7 +995,7 @@ Tokenize with special tokens. The tokens typically included in MPNet-style token
 end::inference-config-nlp-tokenization-mpnet-with-special-tokens[]

 tag::inference-config-nlp-vocabulary[]
-The configuration for retreiving the vocabulary of the model. The vocabulary is
+The configuration for retrieving the vocabulary of the model. The vocabulary is
 then used at inference time. This information is usually provided automatically
 by storing vocabulary in a known, internally managed index.
 end::inference-config-nlp-vocabulary[]
2 changes: 1 addition & 1 deletion docs/reference/modules/discovery/bootstrapping.asciidoc
@@ -75,7 +75,7 @@ configuration. If each node name is a fully-qualified domain name such as
 `master-a.example.com` then you must use fully-qualified domain names in the
 `cluster.initial_master_nodes` list too; conversely if your node names are bare
 hostnames (without the `.example.com` suffix) then you must use bare hostnames
-in the `cluster.initial_master_nodes` list. If you use a mix of fully-qualifed
+in the `cluster.initial_master_nodes` list. If you use a mix of fully-qualified
 and bare hostnames, or there is some other mismatch between `node.name` and
 `cluster.initial_master_nodes`, then the cluster will not form successfully and
 you will see log messages like the following.
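A sketch of a consistent `elasticsearch.yml` configuration, using fully-qualified names for both `node.name` and the `cluster.initial_master_nodes` list (host names as in the surrounding text):

[source,yaml]
----
node.name: master-a.example.com
cluster.initial_master_nodes:
  - master-a.example.com
  - master-b.example.com
----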
2 changes: 1 addition & 1 deletion docs/reference/snapshot-restore/apis/put-repo-api.asciidoc
@@ -91,7 +91,7 @@ Repository type.
 Other repository types are available through official plugins:
-`hfds`:: {plugins}/repository-hdfs.html[Hadoop Distributed File System (HDFS) repository]
+`hdfs`:: {plugins}/repository-hdfs.html[Hadoop Distributed File System (HDFS) repository]
 ====

 [[put-snapshot-repo-api-settings-param]]
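A sketch of registering an HDFS repository through the API documented here, once the `repository-hdfs` plugin is installed (the repository name, `uri`, and `path` values are hypothetical):

[source,console]
----
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "repositories/my_hdfs_repository"
  }
}
----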
2 changes: 1 addition & 1 deletion docs/reference/sql/limitations.asciidoc
@@ -4,7 +4,7 @@

 [discrete]
 [[large-parsing-trees]]
-=== Large queries may throw `ParsingExpection`
+=== Large queries may throw `ParsingException`

 Extremely large queries can consume too much memory during the parsing phase, in which case the {es-sql} engine will
 abort parsing and throw an error. In such cases, consider reducing the query to a smaller size by potentially
