[DOCS] Adds transforms to Elasticsearch book (elastic#46846) (elastic…
lcawl authored Sep 25, 2019
1 parent 93fcd23 commit d5f396f
Showing 12 changed files with 79 additions and 75 deletions.
6 changes: 5 additions & 1 deletion docs/reference/data-rollup-transform.asciidoc
@@ -9,8 +9,12 @@
* <<xpack-rollup,Rolling up your historical data>>
+
include::rollup/index.asciidoc[tag=rollup-intro]
* {stack-ov}/ml-dataframes.html[Transforming your data]
* <<transforms,Transforming your data>>
+
include::transform/index.asciidoc[tag=transform-intro]

--

include::rollup/index.asciidoc[]

include::transform/index.asciidoc[]
4 changes: 2 additions & 2 deletions docs/reference/transform/api-quickref.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[df-api-quickref]]
== API quick reference
[[transform-api-quickref]]
=== API quick reference

All {transform} endpoints have the following base:

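For reference, the base referred to here appears to be `_data_frame/transforms`, the path used by the `DELETE _data_frame/transforms/index` call quoted later in this commit; a minimal sketch of two calls against it (the transform ID is hypothetical):

[source,console]
----
GET _data_frame/transforms                       <1>
POST _data_frame/transforms/my-transform/_start  <2>
----
<1> Lists the configured transforms.
<2> Starts a transform; `my-transform` is a placeholder ID.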
2 changes: 0 additions & 2 deletions docs/reference/transform/apis/index.asciidoc
@@ -3,8 +3,6 @@
[[transform-apis]]
== {transform-cap} APIs

See also {stack-ov}/ml-dataframes.html[{transforms-cap}].

* <<put-transform>>
* <<update-transform>>
* <<delete-transform>>
3 changes: 1 addition & 2 deletions docs/reference/transform/apis/put-transform.asciidoc
@@ -37,8 +37,7 @@ entities are defined by the set of `group_by` fields in the `pivot` object. You
can also think of the destination index as a two-dimensional tabular data
structure (known as a {dataframe}). The ID for each document in the
{dataframe} is generated from a hash of the entity, so there is a unique row
per entity. For more information, see
{stack-ov}/ml-dataframes.html[{transforms-cap}].
per entity. For more information, see <<transforms>>.

When the {transform} is created, a series of validations occur to
ensure its success. For example, there is a check for the existence of the
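A minimal sketch of the request shape this section describes (a `source`, a `dest`, and a `pivot` containing `group_by` and `aggregations`), using placeholder index and field names rather than anything from this commit:

[source,console]
----
PUT _data_frame/transforms/example-transform
{
  "source": { "index": "my-source-index" },
  "dest":   { "index": "my-dest-index" },
  "pivot": {
    "group_by": {
      "user_id": { "terms": { "field": "user_id" } }                 <1>
    },
    "aggregations": {
      "session_count": { "value_count": { "field": "session_id" } }
    }
  }
}
----
<1> The `group_by` fields define the entity; each unique entity becomes one document in the destination index, with its ID derived from a hash of the entity as described above.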
4 changes: 2 additions & 2 deletions docs/reference/transform/checkpoints.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[ml-transform-checkpoints]]
== How {transform} checkpoints work
[[transform-checkpoints]]
=== How {transform} checkpoints work
++++
<titleabbrev>How checkpoints work</titleabbrev>
++++
docs/reference/transform/ecommerce-tutorial.asciidoc
@@ -1,11 +1,11 @@
[role="xpack"]
[testenv="basic"]
[[ecommerce-dataframes]]
=== Transforming the eCommerce sample data
[[ecommerce-transforms]]
=== Tutorial: Transforming the eCommerce sample data

beta[]

<<ml-dataframes,{transforms-cap}>> enable you to retrieve information
<<transforms,{transforms-cap}>> enable you to retrieve information
from an {es} index, transform it, and store it in another index. Let's use the
{kibana-ref}/add-sample-data.html[{kib} sample data] to demonstrate how you can
pivot and summarize your data with {transforms}.
@@ -23,7 +23,9 @@ You also need `read` and `view_index_metadata` index privileges on the source
index and `read`, `create_index`, and `index` privileges on the destination
index.

For more information, see <<security-privileges>> and <<built-in-roles>>.
For more information, see
{stack-ov}/security-privileges.html[Security privileges] and
{stack-ov}/built-in-roles.html[Built-in roles].
--

. Choose your _source index_.
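As a rough sketch of the privileges listed above, a role granting them might be created like this; the cluster privilege name, role name, and index names are assumptions, not taken from the commit:

[source,console]
----
PUT _security/role/transforms_tutorial
{
  "cluster": [ "manage_data_frame_transforms" ],      <1>
  "indices": [
    {
      "names": [ "kibana_sample_data_ecommerce" ],    <2>
      "privileges": [ "read", "view_index_metadata" ]
    },
    {
      "names": [ "ecommerce-customer-summary" ],      <3>
      "privileges": [ "read", "create_index", "index" ]
    }
  ]
}
----
<1> Assumed cluster privilege for managing transforms in this release.
<2> Source index privileges, as listed above.
<3> Destination index privileges, as listed above.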
docs/reference/transform/examples.asciidoc
@@ -1,7 +1,7 @@
[role="xpack"]
[testenv="basic"]
[[dataframe-examples]]
== {transform-cap} examples
[[transform-examples]]
=== {transform-cap} examples
++++
<titleabbrev>Examples</titleabbrev>
++++
@@ -12,17 +12,14 @@ These examples demonstrate how to use {transforms} to derive useful
insights from your data. All the examples use one of the
{kibana-ref}/add-sample-data.html[{kib} sample datasets]. For a more detailed,
step-by-step example, see
<<ecommerce-dataframes,Transforming your data with {dataframes}>>.
<<ecommerce-transforms>>.

* <<ecommerce-dataframes>>
* <<example-best-customers>>
* <<example-airline>>
* <<example-clientips>>

include::ecommerce-example.asciidoc[]

[[example-best-customers]]
=== Finding your best customers
==== Finding your best customers

In this example, we use the eCommerce orders sample dataset to find the customers
who spent the most in our hypothetical webshop. Let's transform the data such
@@ -106,7 +103,7 @@ navigate data from a customer centric perspective. In some cases, it can even
make creating visualizations much simpler.

[[example-airline]]
=== Finding air carriers with the most delays
==== Finding air carriers with the most delays

In this example, we use the Flights sample dataset to find out which air carrier
had the most delays. First, we filter the source data such that it excludes all
@@ -193,7 +190,7 @@ or flight stats for any of the featured destination or origin airports.


[[example-clientips]]
=== Finding suspicious client IPs by using scripted metrics
==== Finding suspicious client IPs by using scripted metrics

With {transforms}, you can use
{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[scripted
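For the best customers example above (grouping eCommerce orders by customer and totalling what each customer spent), a pivot along these lines could be previewed before creating the transform; the index and field names are assumed from the Kibana eCommerce sample data and may differ:

[source,console]
----
POST _data_frame/transforms/_preview
{
  "source": { "index": "kibana_sample_data_ecommerce" },
  "pivot": {
    "group_by": {
      "customer_id": { "terms": { "field": "customer_id" } }
    },
    "aggregations": {
      "order_count": { "value_count": { "field": "order_id" } },
      "total_spent": { "sum": { "field": "taxful_total_price" } }   <1>
    }
  }
}
----
<1> Summing the order totals per customer surfaces the customers who spent the most in the hypothetical webshop.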
32 changes: 16 additions & 16 deletions docs/reference/transform/index.asciidoc
@@ -1,27 +1,27 @@
[role="xpack"]
[[ml-dataframes]]
= Transforming data

[partintro]
--
[[transforms]]
== Transforming data

// tag::transform-intro[]
{transforms-cap} enable you to convert existing {es} indices into summarized
indices, which provide opportunities for new insights and analytics. For example,
you can use {transforms} to pivot your data into entity-centric indices that
summarize the behavior of users or sessions or other entities in your data.
indices, which provide opportunities for new insights and analytics.
// end::transform-intro[]
For example, you can use {transforms} to pivot your data into entity-centric
indices that summarize the behavior of users or sessions or other entities in
your data.

* <<ml-transform-overview>>
* <<ml-transforms-usage>>
* <<df-api-quickref>>
* <<dataframe-examples>>
* <<dataframe-troubleshooting>>
* <<dataframe-limitations>>
--
* <<transform-overview>>
* <<transform-usage>>
* <<transform-api-quickref>>
* <<transform-examples>>
* <<transform-troubleshooting>>
* <<transform-limitations>>

include::overview.asciidoc[]
include::usage.asciidoc[]
include::checkpoints.asciidoc[]
include::api-quickref.asciidoc[]
include::dataframe-examples.asciidoc[]
include::ecommerce-tutorial.asciidoc[]
include::examples.asciidoc[]
include::troubleshooting.asciidoc[]
include::limitations.asciidoc[]
60 changes: 30 additions & 30 deletions docs/reference/transform/limitations.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[dataframe-limitations]]
== {transform-cap} limitations
[[transform-limitations]]
=== {transform-cap} limitations
[subs="attributes"]
++++
<titleabbrev>Limitations</titleabbrev>
@@ -12,8 +12,8 @@ The following limitations and known problems apply to the 7.4 release of
the Elastic {dataframe} feature:

[float]
[[df-compatibility-limitations]]
=== Beta {transforms} do not have guaranteed backwards or forwards compatibility
[[transform-compatibility-limitations]]
==== Beta {transforms} do not have guaranteed backwards or forwards compatibility

Whilst {transforms} are beta, it is not guaranteed that a
{transform} created in a previous version of the {stack} will be able
@@ -25,8 +25,8 @@ destination index. This is a normal {es} index and is not affected by the beta
status.

[float]
[[df-ui-limitation]]
=== {dataframe-cap} UI will not work during a rolling upgrade from 7.2
[[transform-ui-limitation]]
==== {dataframe-cap} UI will not work during a rolling upgrade from 7.2

If your cluster contains mixed version nodes, for example during a rolling
upgrade from 7.2 to a newer version, and {transforms} have been
@@ -35,22 +35,22 @@ have been upgraded to the newer version before using the {dataframe} UI.


[float]
[[df-datatype-limitations]]
=== {dataframe-cap} data type limitation
[[transform-datatype-limitations]]
==== {dataframe-cap} data type limitation

{dataframes-cap} do not (yet) support fields containing arrays – in the UI or
the API. If you try to create one, the UI will fail to show the source index
table.

[float]
[[df-ccs-limitations]]
=== {ccs-cap} is not supported
[[transform-ccs-limitations]]
==== {ccs-cap} is not supported

{ccs-cap} is not supported for {transforms}.

[float]
[[df-kibana-limitations]]
=== Up to 1,000 {transforms} are supported
[[transform-kibana-limitations]]
==== Up to 1,000 {transforms} are supported

A single cluster will support up to 1,000 {transforms}.
When using the
@@ -59,8 +59,8 @@ When using the
enumerate through the full list.

[float]
[[df-aggresponse-limitations]]
=== Aggregation responses may be incompatible with destination index mappings
[[transform-aggresponse-limitations]]
==== Aggregation responses may be incompatible with destination index mappings

When a {transform} is first started, it will deduce the mappings
required for the destination index. This process is based on the field types of
@@ -77,8 +77,8 @@ workaround, you may define custom mappings prior to starting the
{ref}/indices-templates.html[define an index template].
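The workaround described above (defining the destination index mappings yourself before starting the transform) might look like the following sketch; the index and field names are hypothetical:

[source,console]
----
PUT my-dest-index
{
  "mappings": {
    "properties": {
      "total_spent": { "type": "double" },   <1>
      "order_count": { "type": "long" }
    }
  }
}
----
<1> Explicitly map any field whose automatically deduced type would not accommodate the aggregation response.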

[float]
[[df-batch-limitations]]
=== Batch {transforms} may not account for changed documents
[[transform-batch-limitations]]
==== Batch {transforms} may not account for changed documents

A batch {transform} uses a
{ref}/search-aggregations-bucket-composite-aggregation.html[composite aggregation]
@@ -88,8 +88,8 @@ do not yet support a search context, therefore if the source data is changed
results may not include these changes.

[float]
[[df-consistency-limitations]]
=== {cdataframe-cap} consistency does not account for deleted or updated documents
[[transform-consistency-limitations]]
==== {cdataframe-cap} consistency does not account for deleted or updated documents

While the process for {transforms} allows the continual recalculation
of the {transform} as new data is being ingested, it does also have
@@ -114,16 +114,16 @@ updated when viewing the {dataframe} destination index.


[float]
[[df-deletion-limitations]]
=== Deleting a {transform} does not delete the {dataframe} destination index or {kib} index pattern
[[transform-deletion-limitations]]
==== Deleting a {transform} does not delete the {dataframe} destination index or {kib} index pattern

When deleting a {transform} using `DELETE _data_frame/transforms/index`
neither the {dataframe} destination index nor the {kib} index pattern, should
one have been created, are deleted. These objects must be deleted separately.
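A sketch of the clean-up sequence this implies; the transform ID and destination index name are hypothetical, and a Kibana index pattern would still need to be removed in Kibana itself:

[source,console]
----
DELETE _data_frame/transforms/my-transform   <1>
DELETE my-dest-index                         <2>
----
<1> Removes only the transform configuration.
<2> The destination index must be deleted separately if it is no longer needed.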

[float]
[[df-aggregation-page-limitations]]
=== Handling dynamic adjustment of aggregation page size
[[transform-aggregation-page-limitations]]
==== Handling dynamic adjustment of aggregation page size

During the development of {transforms}, control was favoured over
performance. In the design considerations, it is preferred for the
@@ -153,8 +153,8 @@ requested has been reduced to its minimum, then the {transform} will
be set to a failed state.

[float]
[[df-dynamic-adjustments-limitations]]
=== Handling dynamic adjustments for many terms
[[transform-dynamic-adjustments-limitations]]
==== Handling dynamic adjustments for many terms

For each checkpoint, entities are identified that have changed since the last
time the check was performed. This list of changed entities is supplied as a
@@ -176,8 +176,8 @@ Using smaller values for `max_page_search_size` may result in a longer duration
for the {transform} checkpoint to complete.

[float]
[[df-scheduling-limitations]]
=== {cdataframe-cap} scheduling limitations
[[transform-scheduling-limitations]]
==== {cdataframe-cap} scheduling limitations

A {cdataframe} periodically checks for changes to source data. The functionality
of the scheduler is currently limited to a basic periodic timer which can be
@@ -188,8 +188,8 @@ search/index operations has other users in your cluster. Also note that retries
occur at `frequency` interval.
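The `frequency` setting mentioned above is part of the transform configuration; a hedged sketch of a continuous transform that checks its source every five minutes (the `sync` block and all names are assumptions, not taken from this commit):

[source,console]
----
PUT _data_frame/transforms/example-continuous-transform
{
  "source": { "index": "my-source-index" },
  "dest":   { "index": "my-dest-index" },
  "frequency": "5m",                                              <1>
  "sync": { "time": { "field": "timestamp", "delay": "60s" } },   <2>
  "pivot": {
    "group_by": { "user_id": { "terms": { "field": "user_id" } } },
    "aggregations": { "event_count": { "value_count": { "field": "event_id" } } }
  }
}
----
<1> How often the transform checks for changes to the source data; as noted above, retries also happen at this interval.
<2> Assumed continuous-mode configuration keyed on a date field in the source data.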

[float]
[[df-failed-limitations]]
=== Handling of failed {transforms}
[[transform-failed-limitations]]
==== Handling of failed {transforms}

Failed {transforms} remain as a persistent task and should be handled
appropriately, either by deleting it or by resolving the root cause of the
@@ -199,8 +199,8 @@ When using the API to delete a failed {transform}, first stop it using
`_stop?force=true`, then delete it.
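A sketch of the stop-then-delete sequence described above, with a hypothetical transform ID:

[source,console]
----
POST _data_frame/transforms/my-failed-transform/_stop?force=true   <1>
DELETE _data_frame/transforms/my-failed-transform
----
<1> Force-stops the failed transform so that it can then be deleted.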

[float]
[[df-availability-limitations]]
=== {cdataframes-cap} may give incorrect results if documents are not yet available to search
[[transform-availability-limitations]]
==== {cdataframes-cap} may give incorrect results if documents are not yet available to search

After a document is indexed, there is a very small delay until it is available
to search.
4 changes: 2 additions & 2 deletions docs/reference/transform/overview.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[ml-transform-overview]]
== {transform-cap} overview
[[transform-overview]]
=== {transform-cap} overview
++++
<titleabbrev>Overview</titleabbrev>
++++
10 changes: 7 additions & 3 deletions docs/reference/transform/troubleshooting.asciidoc
@@ -1,15 +1,19 @@
[role="xpack"]
[testenv="basic"]
[[dataframe-troubleshooting]]
== Troubleshooting {transforms}
[[transform-troubleshooting]]
=== Troubleshooting {transforms}
[subs="attributes"]
++++
<titleabbrev>Troubleshooting</titleabbrev>
++++

Use the information in this section to troubleshoot common problems.

include::{stack-repo-dir}/help.asciidoc[tag=get-help]
For issues that you cannot fix yourself … we’re here to help.
If you are an existing Elastic customer with a support contract, please create
a ticket in the
https://support.elastic.co/customers/s/login/[Elastic Support portal].
Or post in the https://discuss.elastic.co/[Elastic forum].

If you encounter problems with your {transforms}, you can gather more
information from the following files and APIs:
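Two API calls that are typically useful when gathering this kind of information, assuming the 7.4 endpoints and a hypothetical transform ID:

[source,console]
----
GET _data_frame/transforms/my-transform          <1>
GET _data_frame/transforms/my-transform/_stats   <2>
----
<1> Returns the transform configuration.
<2> Returns the transform's state, progress, and checkpoint information.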
4 changes: 2 additions & 2 deletions docs/reference/transform/usage.asciidoc
@@ -1,7 +1,7 @@
[role="xpack"]
[testenv="basic"]
[[ml-transforms-usage]]
== When to use {transforms}
[[transform-usage]]
=== When to use {transforms}

{es} aggregations are a powerful and flexible feature that enable you to
summarize and retrieve complex insights about your data. You can summarize
