Merge branch 'master' into fix-exit-full-screen-button
kibanamachine authored Apr 13, 2021
2 parents 6269d41 + ba091c0 commit e401d6f
Showing 315 changed files with 9,886 additions and 5,348 deletions.
14 changes: 9 additions & 5 deletions .bazelrc.common
@@ -10,12 +10,13 @@
build --experimental_guard_against_concurrent_changes
run --experimental_guard_against_concurrent_changes
test --experimental_guard_against_concurrent_changes
query --experimental_guard_against_concurrent_changes

## Cache action outputs on disk so they persist across output_base and bazel shutdown (eg. changing branches)
build --disk_cache=~/.bazel-cache/disk-cache
common --disk_cache=~/.bazel-cache/disk-cache

## Bazel repo cache settings
build --repository_cache=~/.bazel-cache/repository-cache
common --repository_cache=~/.bazel-cache/repository-cache

# Bazel will create symlinks from the workspace directory to output artifacts.
# Build results will be placed in a directory called "bazel-bin"
@@ -35,13 +36,16 @@ build --experimental_inprocess_symlink_creation
# Incompatible flags to run with
build --incompatible_no_implicit_file_export
build --incompatible_restrict_string_escapes
query --incompatible_no_implicit_file_export
query --incompatible_restrict_string_escapes

# Log configs
## different from default
common --color=yes
build --show_task_finish
build --noshow_progress
common --noshow_progress
common --show_task_finish
build --noshow_loading_progress
query --noshow_loading_progress
build --show_result=0

# Specifies desired output mode for running tests.
@@ -82,7 +86,7 @@ test:debug --test_output=streamed --test_strategy=exclusive --test_timeout=9999
run:debug --define=VERBOSE_LOGS=1 -- --node_options=--inspect-brk
# The following option will change the build output of certain rules such as terser and may not be desirable in all cases
# It will also output both the repo cache and action cache to a folder inside the repo
build:debug --compilation_mode=dbg --show_result=1
build:debug --compilation_mode=dbg --show_result=0 --noshow_loading_progress --noshow_progress --show_task_finish

# Turn off legacy external runfiles
# This prevents accidentally depending on this feature, which Bazel will remove.
3 changes: 0 additions & 3 deletions WORKSPACE.bazel
@@ -52,9 +52,6 @@ node_repositories(
# NOTE: FORCE_COLOR env var forces colors on non tty mode
yarn_install(
name = "npm",
environment = {
"FORCE_COLOR": "True",
},
package_json = "//:package.json",
yarn_lock = "//:yarn.lock",
data = [
5 changes: 4 additions & 1 deletion docs/developer/getting-started/index.asciidoc
@@ -66,7 +66,8 @@ yarn kbn bootstrap --force-install

(You can also run `yarn kbn` to see the other available commands. For
more info about this tool, see
{kib-repo}tree/{branch}/packages/kbn-pm[{kib-repo}tree/{branch}/packages/kbn-pm].)
{kib-repo}tree/{branch}/packages/kbn-pm[{kib-repo}tree/{branch}/packages/kbn-pm]. If you want more
information about how to actively develop packages, please read <<monorepo-packages>>.)

When switching branches which use different versions of npm packages you
may need to run:
@@ -169,3 +170,5 @@ include::debugging.asciidoc[leveloffset=+1]
include::building-kibana.asciidoc[leveloffset=+1]

include::development-plugin-resources.asciidoc[leveloffset=+1]

include::monorepo-packages.asciidoc[leveloffset=+1]
66 changes: 66 additions & 0 deletions docs/developer/getting-started/monorepo-packages.asciidoc
@@ -0,0 +1,66 @@
[[monorepo-packages]]
== {kib} Monorepo Packages

Currently {kib} works as a monorepo composed of a core, plugins, and packages.
The latter are located in a folder called `packages`; they are pieces of software that
compose a set of features that can be isolated and reused across the entire repository.
They are also meant to be importable just like any other `node_module`.

Previously we relied solely on `@kbn/pm` to manage the development tools of those packages, but we are
now in the middle of migrating those responsibilities into Bazel. Every package that has already been migrated
contains a `BUILD.bazel` file in its root folder, and different `build` and `watching` strategies should be used for it.

Remember that any time you need to make sure the monorepo is ready to be used, just run:

[source,bash]
----
yarn kbn bootstrap
----

[discrete]
=== Building Non-Bazel Packages

Non-Bazel packages can be built independently with

[source,bash]
----
yarn kbn run build -i PACKAGE_NAME
----
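
For example, assuming a hypothetical non-Bazel package named `@kbn/foo` (substitute the actual package
name from the `packages` folder), the command would look like:

[source,bash]
----
yarn kbn run build -i @kbn/foo
----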

[discrete]
=== Watching Non-Bazel Packages

Non-Bazel packages can be watched independently with

[source,bash]
----
yarn kbn watch -i PACKAGE_NAME
----

[discrete]
=== Building Bazel Packages

Bazel packages are built as a whole for now. You can use:

[source,bash]
----
yarn kbn build-bazel
----

[discrete]
=== Watching Bazel Packages

Bazel packages are watched as a whole for now. You can use:

[source,bash]
----
yarn kbn watch-bazel
----


[discrete]
=== List of Already Migrated Packages to Bazel

- @elastic/datemath


5 changes: 5 additions & 0 deletions docs/management/index-patterns.asciidoc
@@ -125,6 +125,11 @@ pattern:
*:logstash-*
```

You can use exclusions to exclude indices that might contain mapping errors.
To match indices starting with `logstash-`, and exclude those starting with `logstash-old` from
all clusters having a name starting with `cluster_`, you can use `cluster_*:logstash-*,cluster*:-logstash-old*`.
To exclude a cluster, use `cluster_*:logstash-*,cluster_one:-*`.
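
Shown as complete index patterns (using the same cluster names assumed above), these two examples are:

```
cluster_*:logstash-*,cluster*:-logstash-old*
cluster_*:logstash-*,cluster_one:-*
```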

Once an index pattern is configured using the {ccs} syntax, all searches and
aggregations using that index pattern in {kib} take advantage of {ccs}.

11 changes: 11 additions & 0 deletions docs/settings/dev-settings.asciidoc
@@ -29,3 +29,14 @@ They are enabled by default.
| Set to `true` to enable the <<xpack-profiler,{searchprofiler}>>. Defaults to `true`.

|===

[float]
[[painless_lab-settings]]
==== Painless Lab settings

[cols="2*<"]
|===
| `xpack.painless_lab.enabled`
| When set to `true`, enables the <<painlesslab, Painless Lab>>. Defaults to `true`.

|===
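
As a minimal sketch (the `config/kibana.yml` path is an assumption; adjust it to your installation), you
could disable Painless Lab by adding the setting to your {kib} configuration and restarting {kib}:

[source,bash]
----
# Append the override to the Kibana configuration file (path assumed)
echo 'xpack.painless_lab.enabled: false' >> config/kibana.yml
----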
4 changes: 2 additions & 2 deletions docs/user/alerting/defining-rules.asciidoc
@@ -28,8 +28,8 @@ Name:: The name of the rule. While this name does not have to be unique, th
Tags:: A list of tag names that can be applied to a rule. Tags can help you organize and find rules, because tags appear in the rule listing in the management UI which is searchable by tag.
Check every:: This value determines how frequently the rule conditions below are checked. Note that the timing of background rule checks are not guaranteed, particularly for intervals of less than 10 seconds. See <<alerting-production-considerations, Alerting production considerations>> for more information.
Notify:: This value limits how often actions are repeated when an alert remains active across rule checks. See <<alerting-concepts-suppressing-duplicate-notifications>> for more information. +
- **Only on status change**: Actions are not repeated when an alert remains active across checks. Actions run only when the rule status changes.
- **Every time rule is active**: Actions are repeated when an alert remains active across checks.
- **Only on status change**: Actions are not repeated when an alert remains active across checks. Actions run only when the alert status changes.
- **Every time alert is active**: Actions are repeated when an alert remains active across checks.
- **On a custom action interval**: Actions are suppressed for the throttle interval, but repeat when an alert remains active across checks for a duration longer than the throttle interval.


2 changes: 1 addition & 1 deletion docs/user/monitoring/kibana-alerts.asciidoc
@@ -84,7 +84,7 @@ by running checks on a schedule time of 1 minute with a re-notify interval of 6
This alert is triggered if a large (primary) shard size is found on any of the
specified index patterns. The trigger condition is met if an index's shard size is
55gb or higher in the last 5 minutes. The alert is grouped across all indices that match
the default patter of `*` by running checks on a schedule time of 1 minute with a re-notify
the default pattern of `*` by running checks on a schedule time of 1 minute with a re-notify
interval of 12 hours.

[discrete]
Binary file modified docs/user/security/api-keys/images/api-keys.png (mode changed 100755 → 100644)
57 changes: 15 additions & 42 deletions docs/user/security/api-keys/index.asciidoc
@@ -4,7 +4,7 @@


API keys enable you to create secondary credentials so that you can send
requests on behalf of the user. Secondary credentials have
requests on behalf of a user. Secondary credentials have
the same or lower access rights.

For example, if you extract data from an {es} cluster on a daily
@@ -14,8 +14,7 @@ and then put the API credentials into a cron job.
Or, you might create API keys to automate ingestion of new data from
remote sources, without a live user interaction.

You can create API keys from the {kib} Console. To view and invalidate
API keys, open the main menu, then click *Stack Management > API Keys*.
To manage API keys, open the main menu, then click *Stack Management > API Keys*.

[role="screenshot"]
image:user/security/api-keys/images/api-keys.png["API Keys UI"]
@@ -46,58 +45,32 @@ cluster privileges to use API keys in {kib}. To manage roles, open the main menu
[float]
[[create-api-key]]
=== Create an API key
You can {ref}/security-api-create-api-key.html[create an API key] from
the {kib} Console. This example shows how to create an API key
to authenticate to a <<api, Kibana API>>.

[source,js]
POST /_security/api_key
{
"name": "kibana_api_key"
}

This creates an API key with the
name `kibana_api_key`. API key
names must be globally unique.
An expiration date is optional and follows
{ref}/common-options.html#time-units[{es} time unit format].
When an expiration is not provided, the API key does not expire.

The response should look something like this:

[source,js]
{
"id" : "XFcbCnIBnbwqt2o79G4q",
"name" : "kibana_api_key",
"api_key" : "FD6P5UA4QCWlZZQhYF3YGw"
}

Now, you can use the API key to request {kib} roles. You'll need to send a request with a
`Authorization` header with a value having the prefix `ApiKey` followed by the credentials,
where credentials is the base64 encoding of `id` and `api_key` joined by a colon. For example:

[source,js]

To create an API key, open the main menu, then click *Stack Management > API Keys > Create API key*.

[role="screenshot"]
image:user/security/api-keys/images/create-api-key.png["Create API Key UI"]

Once created, you can copy the API key (Base64 encoded) and use it to send requests to {es} on your behalf. For example:

[source,bash]
curl --location --request GET 'http://localhost:5601/api/security/role' \
--header 'Content-Type: application/json;charset=UTF-8' \
--header 'kbn-xsrf: true' \
--header 'Authorization: ApiKey aVZlLUMzSUJuYndxdDJvN0k1bU46aGxlYUpNS2lTa2FKeVZua1FnY1VEdw=='

[float]
[[view-api-keys]]
=== View and invalidate API keys
The *API Keys* feature in Kibana lists your API keys, including the name, date created,
and expiration date. If an API key expires, its status changes from `Active` to `Expired`.
=== View and delete API keys

The *API Keys* feature in Kibana lists your API keys, including the name, date created, and status. If an API key expires, its status changes from `Active` to `Expired`.

If you have `manage_security` or `manage_api_key` permissions,
you can view the API keys of all users, and see which API key was
created by which user in which realm.
If you have only the `manage_own_api_key` permission, you see only a list of your own keys.

You can invalidate API keys individually or in bulk.
Invalidated keys are deleted in batch after seven days.

[role="screenshot"]
image:user/security/api-keys/images/api-key-invalidate.png["API Keys invalidate"]
You can delete API keys individually or in bulk.

You cannot modify an API key. If you need additional privileges,
you must create a new key with the desired configuration and invalidate the old key.
1 change: 1 addition & 0 deletions packages/elastic-datemath/BUILD.bazel
@@ -40,6 +40,7 @@ ts_config(

ts_project(
name = "tsc",
args = ['--pretty'],
srcs = SRCS,
deps = DEPS,
declaration = True,