Address invalid monitoring configuration that prevents Elasticsearch from starting #47249

Closed
jakelandis wants to merge 1 commit

Conversation

@jakelandis (Contributor) commented Sep 27, 2019

This implementation is sufficient because

  • It generically handles cluster state application across all of
    the settings by catching the exception and logging it (see the
    sketch below).
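A minimal sketch of that catch-and-log approach (illustrative names only: `initExporters`, `getSettings`, and the surrounding wiring are assumed here, not taken verbatim from this PR):

```java
// Sketch: re-initialize the exporters whenever any monitoring exporter
// setting changes, and log validation failures instead of rethrowing
// them, so they never reach the cluster state applier.
clusterService.getClusterSettings().addSettingsUpdateConsumer(settings -> {
    try {
        initExporters(settings); // may throw SettingsException for invalid config
    } catch (Exception e) {
        // The invalid configuration is still persisted to cluster state;
        // this log message is the only signal to the user.
        logger.warn("failed to update monitoring exporters", e);
    }
}, getSettings());
```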

This implementation is not ideal because

  • It allows invalid configuration to be persisted to cluster state,
    with only a message to the log.
  • It does not notify the user via the REST API that the configuration
    may be incorrect.

To notify the user via the REST API that the configuration is incorrect,
and to prevent persisting the config to cluster state, one would need to
implement a validator via
`clusterService.getClusterSettings().addAffixUpdateConsumer(HOST_SETTING, consumer, validator)`,
as in the sketch below.
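A rough sketch of that alternative (assuming `HOST_SETTING` is the affix setting backing `xpack.monitoring.exporters.*.host`; the consumer and validator bodies are placeholders):

```java
// Sketch: the validator runs before the update is accepted, so throwing
// here rejects the request at the REST layer instead of persisting the
// bad value to cluster state.
clusterService.getClusterSettings().addAffixUpdateConsumer(
    HOST_SETTING,
    (exporterName, hosts) -> {
        // consumer: apply the new hosts for this exporter
    },
    (exporterName, hosts) -> {
        // validator: only sees one exporter's hosts at a time, not the
        // full set of settings being applied alongside it
        if (hosts.isEmpty()) {
            throw new SettingsException(
                "missing required setting [host] for exporter [" + exporterName + "]");
        }
    });
```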

However, this is not done because

  • It requires calling initExporters to surface any exceptions.
    • Calling initExporters is not feasible because
      • It would require a lot of work on cluster update (even if we
        refactored out the validation bits).
      • We don't have easy access to the set of settings that are currently
        being set, just easy access to the single setting. This is an affix
        setting with other highly correlated settings needed to determine
        correctness. The validator sees settings one by one, not the full
        set of settings being set.
      • This validation also runs on node startup, so if by some means
        [1] invalid config got into cluster state, the exception would be
        thrown to the cluster state applier, not the REST layer.
  • `HOST_SETTING` is not unique in its behavior here. For example,
    `xpack.monitoring.exporters.foo.use_ingest` will exhibit the same
    behavior if `foo` has not been defined, as in the request below.
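For illustration, a cluster settings request along these lines (hypothetical, not taken from the linked issue) would hit that same failure path, since no other settings define a `foo` exporter:

```
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.exporters.foo.use_ingest": false
  }
}
```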

Fixes #47125

EDIT: removed an incorrect statement

[1] Not sure if this is a bug or a feature... but when monitoring
(and I assume any other xpack plugin) is not enabled, you can still set the
settings, but the validators will not run, allowing cluster state to be applied
without any settings validation. This likely isn't a problem until something
gets enabled that uses that unvalidated cluster state (and the state is incorrect).

```
GET _xpack
...
    "monitoring" : {
      "available" : true,
      "enabled" : false
    }
```
@jakelandis (Contributor Author) commented

closing in favor of #47246

@jakelandis closed this Sep 27, 2019