This is probably out of scope for Zincati proper, but you should be able to build on top of #540 once that's in. If this is in a cluster context, the lock-based strategy could also make sense. (Though for a cluster, it might be safer to have multiple Tang servers instead.)
Indeed, if there is a need for more complex gating of the finalization/reboot step, the proper way is to point the fleet_lock strategy towards a service that is aware of all the invariants that need to be checked/guaranteed (in this case, Tang liveness).
That isn't limited to cluster scenarios, it is also valid for single nodes (and the logic can be served on localhost if the underlying infra cannot host it).
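To make the suggestion concrete, here is a minimal sketch (not an official implementation) of such a service: an HTTP backend in the style of Zincati's fleet_lock protocol that grants the reboot slot only when the Tang `/adv` endpoint is reachable. The `TANG_ADV_URL`, port, and timeout values are placeholders; the `/v1/pre-reboot` and `/v1/steady-state` paths follow the FleetLock protocol that the fleet_lock strategy speaks, but check the Zincati documentation for the exact contract before relying on this.

```python
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical Tang advertisement URL; adjust for your deployment.
TANG_ADV_URL = "http://tang.example.com/adv"


def tang_is_alive(url: str = TANG_ADV_URL, timeout: float = 2.0) -> bool:
    """Return True if the Tang /adv endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


class FleetLockHandler(BaseHTTPRequestHandler):
    """Sketch of a FleetLock-style backend gating reboots on Tang liveness."""

    def do_POST(self):
        if self.path == "/v1/pre-reboot":
            # Hand out the reboot lock only when Tang is reachable.
            code = 200 if tang_is_alive() else 503
        elif self.path == "/v1/steady-state":
            # Releasing the slot after a successful boot is always fine.
            code = 200
        else:
            code = 404
        self.send_response(code)
        self.end_headers()


# To serve on localhost (so Zincati's base_url can point at it), run:
#   HTTPServer(("127.0.0.1", 3333), FleetLockHandler).serve_forever()
```

A real backend would also need to enforce the lock semantics (one slot per group, identity from the request body); this sketch only shows where the Tang check fits.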
Overall, please be aware that what you are trying to do is not really a sound design: the Tang server can go down after the reboot is triggered but before the node boots again, bringing you back to the same situation.
Also, the node can reboot at any time for reasons unrelated to Zincati, and the Tang server could be down then too.
If you are concerned about this scenario, consider making the Tang service highly-available (HA) or severing the dependency.
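For reference, pointing Zincati at such an external lock service only takes the standard fleet_lock configuration. The drop-in path, port, and base_url below are illustrative; see the Zincati updates-strategy documentation for the authoritative options:

```toml
# /etc/zincati/config.d/55-updates-strategy.toml (illustrative path)
[updates]
strategy = "fleet_lock"

[updates.fleet_lock]
# Point at whatever service checks your invariants (e.g. Tang liveness);
# localhost works if the logic runs on the node itself.
base_url = "http://127.0.0.1:3333"
```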
Feature Request
Desired Feature

Confirm the OS is able to hit the /adv endpoint of a specified Tang server before proceeding with an update.

Example Usage
Other Information
Currently, if my Tang server is offline when Zincati kicks in, the server will remain offline until the Tang server is back online.
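For context, the liveness check being requested amounts to a simple probe of the advertisement endpoint, which can be sketched with curl (the hostname is a placeholder):

```shell
# Probe the Tang advertisement endpoint; --fail makes curl exit non-zero
# on HTTP errors, and --max-time bounds how long we wait.
if curl --silent --fail --max-time 5 http://tang.example.com/adv > /dev/null; then
    echo "Tang is up"
else
    echo "Tang is down"
fi
```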