Improve Kibana deployment docs #5347
Allowing writes from Kibana instances when using tribe nodes should also be addressed, please, as the bug trail I followed (#3114) ended here. The .kibana index needs to be writable to allow features like saving dashboards. I'm currently solving this by intercepting the .kibana endpoint with a proxy and sending that directly to a cluster, bypassing the tribe node.
This sounds very helpful. Especially the HA stuff! @CVTJNII So even when I …
@LucaWintergerst That is correct. However, if you set up a proxy to redirect the Kibana index to a dedicated "write" cluster, then not only will saving dashboards work, but no special steps around creating the index will be required either. I did this by intercepting /.kibana with HAProxy. One key thing to note is to set the tribe nodes as a backup in your Kibana backend config so that Kibana doesn't flap if something goes wrong with the write clients, since it can still read some of its index data through the tribe nodes.
@CVTJNII I managed to partially solve this problem by using puppet. Every time I add a new instance, an Elasticsearch index with the specific mapping is created and a Docker container with Kibana is started. The only thing we are still struggling with is the shortened URL feature.
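For anyone trying to reproduce that setup by hand, a minimal sketch of the two pieces outside of puppet is below (hostnames, ports, and the image tag are placeholders, the specific .kibana mapping mentioned above isn't reproduced, and the official kibana image is assumed to honor the ELASTICSEARCH_URL environment variable):

```
# Pre-create the Kibana index on the write cluster, since a tribe node cannot create it
curl -XPUT 'http://es-cluster-a:9200/.kibana'

# Start Kibana in a Docker container, pointed at an endpoint that can write to that index
docker run -d --name kibana -p 5601:5601 \
  -e ELASTICSEARCH_URL='http://es-proxy:9200' \
  kibana:4.5
```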
I created the .kibana index by connecting Kibana to a single cluster A (like @LucaWintergerst said). The .kibana index isn't created on the tribe node, so if cluster A goes down, Kibana stops working.
I think your best bet at the moment would be to:
Result: if cluster A goes down, you can still load Kibana dashboards from cluster B until A is fixed again. I would discourage you from giving write access to cluster B's Kibana index, as you'd then have to snapshot the index the other way around; so during your downtime you can't create new dashboards. Let me know if there are any better solutions. Keep in mind that this is a fairly simple solution; I can think of something better, but it would be a whole lot more complicated to set up.
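As a rough sketch of that snapshot approach, assuming Elasticsearch's built-in snapshot/restore API with a shared filesystem repository (the repository name, path, and hostnames are placeholders, not from this thread):

```
# Register a snapshot repository on cluster A (the same location must be reachable from cluster B)
curl -XPUT 'http://cluster-a:9200/_snapshot/kibana_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/es-backups/kibana" }
}'

# Snapshot only the Kibana index
curl -XPUT 'http://cluster-a:9200/_snapshot/kibana_backup/kibana-1?wait_for_completion=true' -d '{
  "indices": ".kibana"
}'

# Restore it on cluster B; the repository must also be registered there,
# and .kibana must be closed or deleted on B before restoring
curl -XPOST 'http://cluster-b:9200/_snapshot/kibana_backup/kibana-1/_restore' -d '{
  "indices": ".kibana"
}'
```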
So what I'm currently doing is:
Redirecting both reads and writes to the A cluster client nodes allows Kibana to set up its index normally; no special handling is required with this setup. We currently have the snapshot method @LucaWintergerst mentioned planned via curator, but it's not set up yet. If you're doing this with HAProxy it will look something like this:
This is my config, where es-client.domain is the A cluster's client nodes (the hostnames are sanitized) and the tribe nodes are configured (configuration not shown) for multiple clusters.
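The config itself isn't shown above, but a minimal HAProxy sketch of the routing described would look roughly like this (backend names, the tribe hostname, and ports are illustrative, with es-client.domain standing in for the A cluster client nodes):

```
frontend elasticsearch
    bind *:9200
    mode http
    # Send the Kibana index straight to the write cluster's client nodes
    acl is_kibana_index path_beg /.kibana
    use_backend cluster_a_clients if is_kibana_index
    default_backend tribe_nodes

backend cluster_a_clients
    mode http
    server es-client-1 es-client.domain:9200 check
    # Tribe nodes as backup, so Kibana can still read its index if the write clients go down
    server tribe-1 tribe.domain:9200 check backup

backend tribe_nodes
    mode http
    server tribe-1 tribe.domain:9200 check
```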
For Kibana minor version upgrades, here is the rough procedure that should work and result in no Kibana downtime for a manual installation of Kibana:
The procedure will be slightly different for package installs. @LeeDr Could you double-check the steps and let me know if this works, for instance for 4.3.x to 4.5.x upgrades? We really should document this procedure in the Kibana docs, perhaps here? Also, do you have steps for a package-based upgrade process? cc: @epixa
Note that for step 6 above, if doing this on the same server (likely), the new Kibana will have to be configured to run on a different port than the existing instance to avoid conflicts. That means there needs to be a proxy in front that can be repointed to the new Kibana port to ensure no downtime. If not, users will have to be instructed to access Kibana on a different port during the upgrade, and then step 8 would be to restart the new Kibana so it runs on the original port (i.e. downtime?).
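For example, the new instance could come up on an alternate port and the proxy switched over once it is healthy. A sketch assuming a 4.2+-style kibana.yml and a proxy like the HAProxy setup above (the port and hostname are arbitrary):

```
# kibana.yml for the new (upgraded) instance running alongside the old one
server.port: 5602
elasticsearch.url: "http://es-proxy.domain:9200"
```

Once the new instance is up, the proxy backend can be repointed from the old port to 5602, so users never have to change the URL they use.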
Closing out this issue because it doesn't have any recent activity. If you still feel this issue needs to be addressed, feel free to open it back up.
Right now we only talk about HA between Kibana and multiple ES nodes using the client node on the Kibana side. We should expand the docs to talk about the following scenarios: