Expose the high performance HTTP server embedded in Elasticsearch directly to the public, safely blocking any attempt to delete or modify your data.
In other words... no more proxies! Yay Ponies!
For the list of other supported Elasticsearch versions, see the releases tab. If you need a build for a specific ES version, just open an issue!
ES_VERSION=2.4.1
bin/plugin install https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin/releases/download/v1.11.0_es-v$ES_VERSION/elasticsearch-readonlyrest-v1.11.0_es-v$ES_VERSION.zip
Append one of these snippets to conf/elasticsearch.yml, depending on your use case:
Remember to enable SSL whenever you use HTTP basic auth or API keys so your credentials can't be stolen.
readonlyrest:
    enable: true
    ssl:
      enable: true
      keystore_file: "/elasticsearch/plugins/readonlyrest/keystore.jks"
      keystore_pass: readonlyrest
      key_pass: readonlyrest
readonlyrest:
    enable: true
    response_if_req_forbidden: Sorry, your request is forbidden.
    access_control_rules:

    - name: Accept all requests from localhost
      type: allow
      hosts: [127.0.0.1]

    - name: Just certain indices, and read only
      type: allow
      actions: ["indices:data/read/*"]
      indices: ["<no-index>", "product_catalogue-*"] # index aliases are taken into account!
The `<no-index>` value is for matching those generic requests that don't actually involve an index (e.g. get cluster state). More about this in the wiki.
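Both `actions` and `indices` accept wildcard patterns. Here is a minimal sketch of how such patterns behave, using Python's `fnmatch` purely for illustration (the plugin's real matcher runs inside the JVM, but the glob semantics are similar):

```python
from fnmatch import fnmatch

# Hypothetical illustration of the wildcard patterns used in the rules above.
print(fnmatch("indices:data/read/search", "indices:data/read/*"))  # read action: allowed
print(fnmatch("indices:data/write/bulk", "indices:data/read/*"))   # write action: blocked
print(fnmatch("product_catalogue-2016", "product_catalogue-*"))    # index pattern: matches
```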
# remember to set the right CORS origin (or disable it, if you're brave). See https://github.com/elastic/kibana/issues/6719
http.cors.enabled: true
http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
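To sanity-check which origins that `http.cors.allow-origin` pattern accepts, you can try it against a few `Origin` header values. A sketch using Python's `re` as a stand-in (Elasticsearch compiles the pattern as a Java regex, but the syntax is close enough here; the anchoring below is an assumption for illustration):

```python
import re

# Approximation of the /https?:\/\/localhost(:[0-9]+)?/ pattern above.
ALLOW_ORIGIN = re.compile(r"^https?://localhost(:[0-9]+)?$")

def origin_allowed(origin: str) -> bool:
    """Return True if the Origin header value matches the CORS pattern."""
    return ALLOW_ORIGIN.match(origin) is not None

print(origin_allowed("http://localhost:5601"))   # Kibana's default port
print(origin_allowed("https://localhost"))       # no port is fine too
print(origin_allowed("http://evil.example.com")) # rejected
```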
readonlyrest:
    enable: true
    response_if_req_forbidden: Forbidden by ReadonlyREST ES plugin

    access_control_rules:

    - name: "Logstash can write and create its own indices"
      # auth_key is good for testing, but replace it with `auth_key_sha1`!
      auth_key: logstash:logstash
      type: allow
      actions: ["indices:data/read/*","indices:data/write/*","indices:admin/template/*","indices:admin/create"]
      indices: ["logstash-*", "<no-index>"]

    - name: Kibana Server (we trust this server side component, full access granted via HTTP authentication)
      # auth_key is good for testing, but replace it with `auth_key_sha1`!
      auth_key: admin:passwd3
      type: allow

    - name: Developer (reads only logstash indices, but can create new charts/dashboards)
      # auth_key is good for testing, but replace it with `auth_key_sha1`!
      auth_key: dev:dev
      type: allow
      kibana_access: ro+
      indices: ["<no-index>", ".kibana*", "logstash*", "default"]
Now activate authentication in the Kibana server: let the Kibana daemon connect to Elasticsearch in privileged mode.

- Edit the Kibana configuration file `kibana.yml` and add the following:

    elasticsearch.username: "admin"
    elasticsearch.password: "passwd3"

This is secure because users connecting from their browsers will still be asked to log in separately.
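When a browser user logs in, what travels over the wire is just the `auth_key` credentials, base64-encoded into an HTTP Basic `Authorization` header (which is exactly why SSL matters). A minimal sketch, using the `dev:dev` pair from the example ACL above:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value matching an auth_key."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

# This is what the browser computes after the login prompt:
print(basic_auth_header("dev", "dev"))  # → Basic ZGV2OmRldg==
```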
Now activate authentication in Logstash (follow the docs, it's very similar to Kibana!).
readonlyrest:
    enable: true
    response_if_req_forbidden: Forbidden by ReadonlyREST ES plugin

    access_control_rules:

    - name: Accept requests from users in group team1 on index1
      type: allow
      groups: ["team1"]
      uri_re: ^/index1/.*

    - name: Accept requests from users in group team2 on index2
      type: allow
      groups: ["team2"]
      uri_re: ^/index2/.*

    - name: Accept requests from users in groups team1 or team2 on index3
      type: allow
      groups: ["team1", "team2"]
      uri_re: ^/index3/.*

    users:

    - username: alice
      auth_key: alice:p455phrase
      groups: ["team1"]

    - username: bob
      auth_key: bob:s3cr37
      groups: ["team2", "team4"]

    - username: claire
      auth_key_sha1: 2bc37a406bd743e2b7a4cb33efc0c52bc2cb03f0 #claire:p455key
      groups: ["team1", "team5"]
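The groups-plus-regex logic above can be sketched as a tiny model. This is a hypothetical illustration only (the user and rule tables mirror the example; the real plugin evaluates its blocks with its own ordered semantics in the JVM):

```python
import re

# Mini-model of the groups + uri_re blocks above (illustration, not the plugin).
USERS = {
    "alice": {"team1"},
    "bob": {"team2", "team4"},
}
RULES = [
    ({"team1"}, re.compile(r"^/index1/.*")),
    ({"team2"}, re.compile(r"^/index2/.*")),
    ({"team1", "team2"}, re.compile(r"^/index3/.*")),
]

def is_allowed(user: str, path: str) -> bool:
    """Allow if any rule's group set intersects the user's groups AND its regex matches."""
    groups = USERS.get(user, set())
    return any(groups & rule_groups and pattern.match(path)
               for rule_groups, pattern in RULES)

print(is_allowed("alice", "/index1/_search"))  # True: team1 on index1
print(is_allowed("alice", "/index2/_search"))  # False: index2 is team2 only
print(is_allowed("bob", "/index3/_search"))    # True: index3 allows team1 or team2
```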
For other use cases and finer-grained access control, have a look at the full list of supported rules.
Before going to production, read this.
When you want to restrict access to certain indices, prevent the user from overriding the index specified in the URL by adding this setting to conf/elasticsearch.yml:

rest.action.multi.allow_explicit_index: false

The default value is true; when set to false, Elasticsearch rejects multi-document requests (e.g. mget, bulk) that specify an explicit index in the request body.
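To see what this setting guards against: a multi-get sent to one index can still name a different index inside its body. A sketch of such a body (the index names are hypothetical):

```python
import json

# A _mget body that names an explicit index, overriding whatever index appears
# in the URL path. With rest.action.multi.allow_explicit_index: false,
# Elasticsearch rejects this request instead of reading from "secret-index".
body = json.dumps({"docs": [{"_index": "secret-index", "_id": "1"}]})
print(body)
```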
A plain text auth_key is great for testing, but remember to replace it with auth_key_sha1!
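To produce a value for auth_key_sha1, hash the whole user:password string with SHA1 and hex-encode it, as the claire example above suggests. A sketch:

```python
import hashlib

def auth_key_sha1(username: str, password: str) -> str:
    """Hex-encoded SHA1 of "user:password", the format auth_key_sha1 expects."""
    return hashlib.sha1(f"{username}:{password}".encode()).hexdigest()

digest = auth_key_sha1("dev", "dev")
print(digest)  # 40 lowercase hex characters; paste this into auth_key_sha1
```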
Other security plugins replace the high performance, Netty-based, embedded REST API of Elasticsearch with Tomcat, Jetty or other cumbersome XML-based JEE madness.
This plugin instead is just a lightweight pure-Java filtering layer. Even the SSL layer is provided as an extra Netty transport handler.
Some suggest spinning up a new HTTP proxy (Varnish, Nginx, HAProxy) between ES and clients to prevent malicious access. This is a bad idea for two reasons:
- You're introducing more complexity in your architecture.
- Reasoning about security at the HTTP level is risky, flaky and less granular than controlling access at the internal Elasticsearch protocol level.
The only clean way to do access control is AFTER Elasticsearch has parsed the queries.
Just set a few rules with this plugin and confidently open it up to the external world.
Build your ACL from simple building blocks (rules), i.e.:

- `hosts`: a list of origin IP addresses or subnets
- `api_keys`: a list of API keys passed in via the `X-Api-Key` header
- `methods`: a list of HTTP methods
- `accept_x-forwarded-for_header`: interpret the `X-Forwarded-For` header as origin host (useful for AWS ELB and other reverse proxies)
- `auth_key_sha1`: HTTP Basic auth (credentials stored as hashed strings)
- `uri_re`: match the URI path as a regex
- `indices`: index names (aliases and wildcards work)
- `actions`: a list of ES actions (e.g. "cluster:*", "indices:data/write/*", "indices:data/read*")
- `kibana_access`: captures the read-only, read-only + new visualizations/dashboards, and read-write use cases of Kibana
This project was incepted in this StackOverflow thread.
Thanks to Ivan Brusic for publishing this guide.