This puts the cluster into a spin from which there is no simple escape:
[2019-09-25T09:35:09,587][WARN ][o.e.c.s.ClusterSettings ] [node-2] failed to apply settings
org.elasticsearch.common.settings.SettingsException: [xpack.monitoring.exporters.cloud_monitoring.host] invalid host: [https://myhost:9243/]
at org.elasticsearch.xpack.monitoring.exporter.http.HttpExporter.createHosts(HttpExporter.java:400) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.http.HttpExporter.createRestClient(HttpExporter.java:294) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.http.HttpExporter.<init>(HttpExporter.java:219) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.http.HttpExporter.<init>(HttpExporter.java:206) ~[?:?]
at org.elasticsearch.xpack.monitoring.Monitoring.lambda$createComponents$1(Monitoring.java:134) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.Exporters.initExporters(Exporters.java:136) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.Exporters.setExportersSetting(Exporters.java:71) ~[?:?]
at org.elasticsearch.common.settings.Setting$2.apply(Setting.java:659) ~[elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.common.settings.Setting$2.apply(Setting.java:632) ~[elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.common.settings.AbstractScopedSettings$SettingUpdater.lambda$updater$0(AbstractScopedSettings.java:610) ~[elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.common.settings.AbstractScopedSettings.applySettings(AbstractScopedSettings.java:191) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:460) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:418) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:165) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-7.3.0.jar:7.3.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-7.3.0.jar:7.3.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:835) [?:?]
Caused by: java.lang.IllegalArgumentException: HttpHosts do not use paths [/]. see setRequestConfigCallback for proxies. value: [https://myhost:9243/]
at org.elasticsearch.xpack.monitoring.exporter.http.HttpHostBuilder.<init>(HttpHostBuilder.java:157) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.http.HttpHostBuilder.builder(HttpHostBuilder.java:98) ~[?:?]
at org.elasticsearch.xpack.monitoring.exporter.http.HttpExporter.createHosts(HttpExporter.java:398) ~[?:?]
... 19 more
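For reference, the rejection comes from the trailing slash: `https://myhost:9243/` parses with a path of `/`, and exporter hosts may not carry a path component. Below is a simplified, self-contained sketch of that check using `java.net.URI`; it is an illustration of the rule, not the actual `HttpHostBuilder` code.

```java
import java.net.URI;

public class HostCheck {
    // Simplified illustration: an exporter host may specify scheme, host,
    // and port, but any path component (even a bare "/") is rejected.
    static void validateHost(String value) {
        URI uri = URI.create(value);
        String path = uri.getPath();
        if (path != null && !path.isEmpty()) {
            throw new IllegalArgumentException(
                "HttpHosts do not use paths [" + path + "]. value: [" + value + "]");
        }
    }

    public static void main(String[] args) {
        validateHost("https://myhost:9243"); // accepted: no path component
        try {
            validateHost("https://myhost:9243/"); // path is "/"
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```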
As of #47090 we now assert that no exceptions are thrown during cluster state application, but we are missing a test that reproduces the problem we see here, as well as a fix.
* It generically solves the problem across all of the settings by catching the exception during cluster state application and logging it.

This implementation is not ideal because:
* It allows invalid configuration to be persisted to cluster state, with only a message in the log.
* It does not notify the user via the REST API that the configuration may be incorrect.
To notify the user via the REST API that the configuration is incorrect, and to prevent persisting the config to cluster state, one would need to implement a validator via
`clusterService.getClusterSettings().addAffixUpdateConsumer(HOST_SETTING, consumer, validator)`.
However, this is not done because:
* It requires calling initExporters to find any exceptions that may be thrown.
* Calling initExporters is not feasible because:
  * It would require a lot of work on every cluster state update (even if we refactored out the validation bits).
  * We don't have easy access to the full set of settings currently being applied, only to the single setting. This is an affix setting whose correctness depends on other highly correlated settings, but the validator sees settings one by one, not the full set being applied.
* On node startup this validation is also run, so if by some means [1] invalid config got into cluster state, the exception would be thrown in the cluster state applier, not surfaced at the REST layer.
* HOST_SETTING is not unique in its behavior here. For example, `xpack.monitoring.exporters.foo.use_ingest` will exhibit the same behavior if the exporter `foo` has not been defined.
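The one-by-one limitation above can be made concrete with a self-contained sketch (hypothetical key names and helper, not the Elasticsearch Settings API): whether `xpack.monitoring.exporters.foo.use_ingest` is valid depends on a sibling key under the same exporter name, so a validator handed a single key in isolation cannot decide it.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AffixValidation {
    static final String PREFIX = "xpack.monitoring.exporters.";

    // Hypothetical whole-set validator: an exporter name is "defined" once
    // its ".type" key is present; dependent keys like ".use_ingest" are only
    // valid for defined names. This needs the full set of settings, which a
    // per-setting validator never sees.
    static void validateAll(Map<String, String> settings) {
        Set<String> defined = new HashSet<>();
        for (String key : settings.keySet()) {
            if (key.startsWith(PREFIX) && key.endsWith(".type")) {
                defined.add(key.substring(PREFIX.length(), key.length() - ".type".length()));
            }
        }
        for (String key : settings.keySet()) {
            if (key.startsWith(PREFIX) && key.endsWith(".use_ingest")) {
                String name = key.substring(PREFIX.length(), key.length() - ".use_ingest".length());
                if (!defined.contains(name)) {
                    throw new IllegalArgumentException(
                        "[" + key + "] refers to undefined exporter [" + name + "]");
                }
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> ok = new HashMap<>();
        ok.put(PREFIX + "foo.type", "http");
        ok.put(PREFIX + "foo.use_ingest", "true");
        validateAll(ok); // passes: "foo" is defined

        Map<String, String> bad = new HashMap<>();
        bad.put(PREFIX + "foo.use_ingest", "true");
        try {
            validateAll(bad); // "foo" is never defined
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```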
[1] Not sure if this is a bug or a feature, but when monitoring (and, I assume, any other xpack plugin) is not enabled, you can still set the settings, but the validators will not run, allowing cluster state to be applied without any settings validation. This likely isn't a problem until something that uses that un-validated cluster state gets enabled (and the state is incorrect).
```
GET _xpack
...
"monitoring" : {
  "available" : true,
  "enabled" : false
}
```
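The footnote's behavior can be sketched with a minimal, hypothetical registry (not the Elasticsearch implementation): if a validator is only wired up when the owning feature is enabled, values set while it is disabled are persisted unchecked.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class SettingsRegistry {
    private final Map<String, Consumer<String>> validators = new HashMap<>();
    private final Map<String, String> state = new HashMap<>();

    // Hypothetical wiring: a feature registers its validator only when enabled.
    void registerValidator(String key, Consumer<String> validator, boolean featureEnabled) {
        if (featureEnabled) {
            validators.put(key, validator);
        }
    }

    void apply(String key, String value) {
        Consumer<String> v = validators.get(key);
        if (v != null) {
            v.accept(value); // may throw and reject the update
        }
        state.put(key, value); // with no validator, invalid values persist
    }

    String get(String key) {
        return state.get(key);
    }

    public static void main(String[] args) {
        Consumer<String> noPaths = value -> {
            if (value.endsWith("/")) {
                throw new IllegalArgumentException("invalid host: [" + value + "]");
            }
        };

        // Feature disabled: the invalid value is persisted without complaint.
        SettingsRegistry disabled = new SettingsRegistry();
        disabled.registerValidator("host", noPaths, false);
        disabled.apply("host", "https://myhost:9243/");
        System.out.println("persisted: " + disabled.get("host"));
    }
}
```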
Fixes elastic#47125. The fundamental issue with #47038 is that an exception is thrown when applying an invalid setting, as shown above.