# Distributed Setups

## High Availability

High availability functionality is built directly into Icinga DB and
can be deployed without additional third-party components.

![Icinga DB HA](images/icingadb-ha.png)

First, you need an Icinga 2 high availability setup with two master nodes, such as the one described
[here](https://icinga.com/docs/icinga-2/latest/doc/06-distributed-monitoring#high-availability-master-with-agents).

Each of the master nodes must have the Icinga DB feature enabled and
its own dedicated Redis server set up for it, so that each node writes the monitoring data separately.
The setup steps per node are no different from a single-node setup and can be found in the
[Icinga 2 installation documentation](https://icinga.com/docs/icinga-2/latest/doc/02-installation).
Each Redis server will always have the complete data available as long as
its corresponding Icinga 2 master is running and writing to its Redis.
This is because the Icinga 2 master nodes synchronize their data and events with each other as long as
they are connected, and each takes over the full configuration in split-brain scenarios.
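
On each master, this boils down to enabling the Icinga 2 `icingadb` feature and pointing it at the node's
local Redis server. The following is only a rough sketch with example values (a local Redis on port `6380`);
the exact attributes and defaults are described in the installation documentation linked above.

```bash
# On each Icinga 2 master node (example values, adjust to your setup):
icinga2 feature enable icingadb

# The feature's Redis connection is configured in
# /etc/icinga2/features-available/icingadb.conf, for example:
#   object IcingaDB "icingadb" {
#     host = "127.0.0.1"   # this node's own Redis server
#     port = 6380
#   }

systemctl restart icinga2
```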

For each Redis server, you need to set up a dedicated Icinga DB instance that connects to it,
but the Icinga DB instances must write to the same database, which can of course be replicated or clustered.
The steps from the standard
[Icinga DB installation documentation](https://icinga.com/docs/icinga-db/latest/doc/02-installation)
can simply be followed. However, as mentioned, the database only needs to be set up once.
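
As a sketch of what this means for each instance's configuration: the `database` section points to the one
shared database, while the `redis` section points to the node's own Redis server. All host names and
credentials below are placeholders, and only a subset of the available settings is shown; see the
installation documentation for the full reference.

```yaml
# /etc/icingadb/config.yml on the first master
# (the second master is analogous, only the Redis host may differ)
database:
  type: mysql                # or pgsql
  host: db.example.com       # the shared database (standalone, replicated or clustered)
  database: icingadb
  user: icingadb
  password: CHANGEME

redis:
  host: 127.0.0.1            # this node's own dedicated Redis server
  port: 6380
```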

All in all, an Icinga DB HA environment involves setting up two Icinga 2 master nodes, two Redis servers,
two Icinga DB instances and a database.

Please read the note about the [environment ID](#environment-id),
which is common to all Icinga DB components and generated by Icinga 2's Icinga DB feature.

There is only one active Icinga DB instance at a time,
which is responsible for performing database operations in the following areas:

* Synchronizing configuration, also across Icinga 2 restarts.
* Performing configuration runtime updates made via the Icinga 2 API.
* Updating recent host and service states.
* Flagging hosts and services that are overdue for updates.
* Deleting history items that have exceeded their configured retention time.

However, both Icinga DB instances write all events relevant to the history of hosts and services to the database.
This way, no data is lost if an Icinga 2 master is unavailable for a period of time or if
the masters are running in split-brain mode.

Which Icinga DB instance is active is decided by the database.
The instance that can perform a particular database operation first is considered responsible.
In the case of concurrent operations, simply put, only one wins via a locking mechanism.
Of course, this is only true if the environment is healthy.
An Icinga DB instance does not try to take responsibility if its corresponding Redis server is unavailable or
Icinga 2 is not writing data to Redis.
If Icinga 2 or Redis becomes unavailable for more than 60 seconds,
Icinga DB releases responsibility so that the other instance can take over.
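
If you want to see which instance currently holds responsibility, the database itself can be queried.
The query below assumes the `icingadb_instance` table with its `responsible` and `heartbeat` columns from
the Icinga DB schema; treat it as an illustration and adjust it to your schema version if necessary.

```sql
-- Show all registered Icinga DB instances, whether they are currently
-- responsible and when they last sent a heartbeat (milliseconds since epoch).
SELECT HEX(endpoint_id) AS endpoint,
       responsible,
       heartbeat
FROM icingadb_instance;
```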

## Multiple Environments

Icinga DB supports synchronization of monitoring data from multiple different Icinga environments into
a single database. This allows Icinga DB Web to provide a centralized view of the data.
Although everything is prepared in Icinga DB, there is no full support in Icinga DB Web yet.
As soon as it is ready, the documentation will be adapted and the feature will be explained in more detail.

![Icinga DB Envs](images/icingadb-envs.png)
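
Each environment is identified by its [environment ID](#environment-id), and the synchronized data in the
database is associated with it. As a heavily simplified illustration (assuming the `environment` table of
the Icinga DB schema), the environments that have written to a database can be listed like this:

```sql
-- List all Icinga environments that are present in this database.
SELECT HEX(id) AS environment_id, name
FROM environment;
```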

## Environment ID

!!! important

    Icinga 2 generates a unique environment ID from its CA certificate when it is first started with the
    Icinga DB feature enabled. The ID is written to the file `/var/lib/icinga2/icingadb.env`.
    It is strongly recommended not to change this ID afterwards, as all data would be resynchronized and
    the old data would remain in the database, resulting in duplicates. As long as the file remains,
    Icinga 2 will not regenerate the environment ID, even if the CA changes, in order to avoid duplicate data.
    Special care should be taken if you add or redeploy the master node(s):
    if the CA changes as a result or over time, a new environment ID would be generated.
    For high-availability setups, it is a good idea to enable the Icinga DB feature on the secondary master after
    you have successfully connected from/to the primary master so that the certificates are set up properly.
    The secondary master will then generate the same environment ID since it is working with the same CA certificate.
    In any case, make sure that the file `/var/lib/icinga2/icingadb.env` does not change over time and
    is the same on all Icinga 2 master nodes per environment.
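
One simple way to verify this is to compare the file on all master nodes, for example (the host names below
are placeholders for your own masters):

```bash
# Both commands must print the same environment ID; if they differ,
# the masters are not working with the same CA certificate.
ssh master1.example.com cat /var/lib/icinga2/icingadb.env
ssh master2.example.com cat /var/lib/icinga2/icingadb.env
```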