Currently, the user is required to set the Fleet Server host in Fleet Settings before installing Fleet Server. If they do not, the agent will get a policy with no valid hosts and will no longer receive updates. I'm worried that not all users will read the instructions carefully. It'd be nice if we designed Fleet Server to be more resilient and able to recover in this scenario.
The agent already has the ability to check whether a Fleet Server host is valid by calling a status endpoint. If the status endpoint does not return 200, the agent does not accept the policy and reports an unhealthy status.
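For illustration, here is a minimal Go sketch of that kind of host check, assuming a `/api/status` endpoint that answers 200 when the host is healthy. The package name, endpoint path, and timeout are assumptions for the sketch, not the actual agent code.

```go
// Package fleetbootstrap is a hypothetical sketch, not real agent code.
package fleetbootstrap

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// checkFleetServerHost returns nil if the host answers its status endpoint
// with HTTP 200 within the timeout, and an error otherwise.
func checkFleetServerHost(ctx context.Context, host string) error {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, host+"/api/status", nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("fleet server %s returned status %d", host, resp.StatusCode)
	}
	return nil
}
```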
Can we do the same during bootstrapping so the user can set the Fleet Server host after installing Fleet Server? If the host is valid, the agent finishes bootstrapping and checks in successfully. If not, it keeps checking Elasticsearch on a regular interval until it is. This allows the user to fill in the Fleet Server host later. We can also set the agent status to unhealthy to indicate that setup is not complete.
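A rough sketch of what that bootstrap loop could look like, continuing the hypothetical package above; `fetchPolicyHosts` and `reportStatus` are placeholders standing in for the agent's real policy and status plumbing, not existing APIs.

```go
// fetchPolicyHosts is a placeholder: the real agent would re-read the Fleet
// Server hosts from the policy (e.g. by querying Elasticsearch).
func fetchPolicyHosts(ctx context.Context) []string {
	return nil
}

// reportStatus is a placeholder: the real agent would surface this through
// its status reporting.
func reportStatus(state, message string) {
	fmt.Printf("agent status: %s (%s)\n", state, message)
}

// waitForValidFleetHost polls on a regular interval until a Fleet Server host
// in the policy answers its status endpoint, so the user can fill the host in
// after installation and the agent recovers on its own.
func waitForValidFleetHost(ctx context.Context, interval time.Duration) (string, error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		for _, host := range fetchPolicyHosts(ctx) {
			if err := checkFleetServerHost(ctx, host); err == nil {
				return host, nil // bootstrap can finish; agent checks in normally
			}
		}

		// No valid host yet: report unhealthy so the user can see that setup
		// is not complete, then try again on the next tick.
		reportStatus("unhealthy", "waiting for a valid Fleet Server host")

		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-ticker.C:
		}
	}
}
```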
I think you are speaking specifically of an Elastic Agent not being able to talk to its local Fleet Server (running on the same box) when the fleet.host in the policy is wrong?

If that is the case, then this is already solved. The Elastic Agent knows when it is running Fleet Server on its own host and will talk to it through localhost only, so in that case the Elastic Agent does not care what is set in Kibana. It only causes an issue on Elastic Agents that are not running Fleet Server.
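Roughly, the behaviour described here amounts to something like the following (an assumed shape of the logic, not the actual agent code; 8220 is Fleet Server's default port):

```go
// resolveFleetHost illustrates the described behaviour: an agent that runs
// Fleet Server itself ignores the hosts configured in Kibana and talks to it
// over localhost; only agents without a local Fleet Server use the policy hosts.
func resolveFleetHost(runningLocalFleetServer bool, policyHosts []string) string {
	if runningLocalFleetServer {
		return "https://localhost:8220"
	}
	if len(policyHosts) > 0 {
		return policyHosts[0]
	}
	return "" // no usable host; the agent cannot check in
}
```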