When running elastic-agent on OpenShift, I get the error `State changed to FAILED: Missed two check-ins - type: 'ERROR' - sub_type: 'FAILED'`:
```
2022-01-26T17:43:32.620Z ERROR status/reporter.go:236 Elastic Agent status changed to: 'error'
2022-01-26T17:43:32.620Z ERROR log/reporter.go:36 2022-01-26T17:43:32Z - message: Application: metricbeat--7.16.2--36643631373035623733363936343635[82f15d60-c134-4bda-a375-495a54fa512c]: State changed to FAILED: Missed two check-ins - type: 'ERROR' - sub_type: 'FAILED'
2022-01-26T17:43:32.706Z INFO application/periodic.go:101 No configuration change
2022-01-26T17:43:33.654Z INFO stateresolver/stateresolver.go:48 New State ID is D6-LaMf0
2022-01-26T17:43:33.654Z INFO stateresolver/stateresolver.go:49 Converging state requires execution of 3 step(s)
2022-01-26T17:43:41.869Z INFO operation/operator.go:284 operation 'operation-install' skipped for metricbeat.7.16.2
2022-01-26T17:43:41.869Z INFO operation/operator.go:284 operation 'operation-start' skipped for metricbeat.7.16.2
2022-01-26T17:43:42.623Z WARN status/reporter.go:236 Elastic Agent status changed to: 'degraded'
2022-01-26T17:43:42.623Z INFO log/reporter.go:40 2022-01-26T17:43:42Z - message: Application: metricbeat--7.16.2--36643631373035623733363936343635[82f15d60-c134-4bda-a375-495a54fa512c]: State changed to RESTARTING: - type: 'STATE' - sub_type: 'STARTING'
2022-01-26T17:43:42.624Z INFO log/reporter.go:40 2022-01-26T17:43:42Z - message: Application: metricbeat--7.16.2--36643631373035623733363936343635[82f15d60-c134-4bda-a375-495a54fa512c]: State changed to STARTING: Starting - type: 'STATE' - sub_type: 'STARTING'
2022-01-26T17:43:42.624Z INFO log/reporter.go:40 2022-01-26T17:43:42Z - message: Application: metricbeat--7.16.2--36643631373035623733363936343635[82f15d60-c134-4bda-a375-495a54fa512c]: State changed to RESTARTING: Restarting - type: 'STATE' - sub_type: 'STARTING'
```
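For anyone reproducing this, the per-application state can also be inspected from inside the pod with `elastic-agent status`. A minimal sketch, assuming the pods carry the `app: elastic-agent` label from Elastic's reference manifests and run in `kube-system` (adjust both to your deployment):

```sh
# Pick one elastic-agent pod (label/namespace are assumptions; adjust as needed).
POD=$(oc get pods -n kube-system -l app=elastic-agent -o name | head -n 1)

# Print the agent state and the state of each supervised application
# (metricbeat, filebeat, ...), which mirrors the FAILED status in the logs above.
oc exec -n kube-system "$POD" -- elastic-agent status
```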
The only errors I see in the metricbeat logs:
```
2022-01-26T17:55:24.433Z ERROR module/wrapper.go:259 Error fetching data for metricset beat.stats: error making http request: Get "http://unix/stats": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
2022-01-26T17:55:24.433Z ERROR module/wrapper.go:259 Error fetching data for metricset beat.state: error making http request: Get "http://unix/state": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
2022-01-26T17:54:06.328Z ERROR module/wrapper.go:259 Error fetching data for metricset http.json: error making http request: Get "http://unix/stats": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2022-01-26T17:55:24.567Z ERROR module/wrapper.go:259 Error fetching data for metricset beat.stats: error making http request: Get "http://unix/stats": dial unix /usr/share/elastic-agent/state/data/tmp/default/metricbeat/metricbeat.sock: connect: connection refused
2022-01-26T17:55:24.574Z ERROR module/wrapper.go:259 Error fetching data for metricset http.json: error making http request: Get "http://unix/stats": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2022-01-26T17:55:24.575Z ERROR module/wrapper.go:259 Error fetching data for metricset beat.state: error making http request: Get "http://unix/state": dial unix /usr/share/elastic-agent/state/data/tmp/default/metricbeat/metricbeat.sock: connect: connection refused
2022-01-26T17:55:24.575Z ERROR module/wrapper.go:259 Error fetching data for metricset http.json: error making http request: Get "http://unix/stats": dial unix /usr/share/elastic-agent/state/data/tmp/default/metricbeat/metricbeat.sock: connect: connection refused
```
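The `http://unix/stats` requests in these errors are self-monitoring calls over the unix socket named in the last lines. Whether that socket accepts connections at all can be checked directly from inside the pod; a sketch, with the socket path taken verbatim from the logs (this assumes `curl` is available in the container image):

```sh
# Query metricbeat's monitoring endpoint over its unix socket.
# The socket path comes from the "connection refused" errors above.
curl --unix-socket /usr/share/elastic-agent/state/data/tmp/default/metricbeat/metricbeat.sock \
  http://unix/stats
```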
In my configuration I do not explicitly enable `http.json`, `beat.state`, or `beat.stats`; I believe they come from the agent's self-monitoring, which is enabled.
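For reference, this is roughly what the monitoring section looks like in a standalone `elastic-agent.yml` (a sketch of the standard settings, not my exact file):

```yaml
# Agent self-monitoring: this is what runs the beat.state/beat.stats and
# http.json metricsets against each supervised beat's unix socket.
agent.monitoring:
  enabled: true
  use_output: default
  logs: true
  metrics: true
```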
For confirmed bugs, please report:

OpenShift version:

```
$ oc --kubeconfig=${INSTALL_DIR}/auth/kubeconfig get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.15    True        False         57m     Cluster version is 4.9.15
```
Discuss Forum URL:
Steps to Reproduce:
After increasing the memory limits for elastic-agent and making sure that the node where elastic-agent is running has enough resources, I don't see those errors anymore.
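For anyone hitting the same thing, the change amounts to raising `resources.limits.memory` on the elastic-agent container in the DaemonSet manifest. A sketch of that excerpt (the values are illustrative, not a recommendation):

```yaml
# Excerpt from the elastic-agent DaemonSet pod spec.
containers:
  - name: elastic-agent
    resources:
      limits:
        memory: 1Gi      # illustrative; raise until the missed check-ins stop
      requests:
        cpu: 200m
        memory: 512Mi
```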