Google Cloud Storage: many context cancelled errors leading to runtime sealing #4915
Comments
Could you share more of the log before this line:
It would be good to know what's happening before the context is canceled.
Pingaling @sethvargo
Whoops, I didn't realize the logs before were relevant. Here's a better dump:
Let me know if logs from other boxes in the same cluster would help as well.
Hey @fastest963, thanks for reporting this. I'm looking into it, but I think it might not be unique to Google Cloud Storage. Have you used any other HA-supported backends? Do they exhibit this same error?
@sethvargo I haven't tried any other backends, no. I just assumed it was GCS-related.
Move cancelation to after cleanup (fixes GH-4915)
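The idea behind that fix, going by its title, is to run teardown before cancelling the shared context. A minimal sketch of the pattern, with hypothetical names rather than the actual Vault code:

```go
package sketch

import "context"

// stepDown sketches the "cancel after cleanup" ordering: teardown work that
// still needs storage access runs while ctx is live, and the context is only
// cancelled afterwards. All names here are hypothetical.
func stepDown(ctx context.Context, cancel context.CancelFunc, cleanup func(context.Context) error) error {
	// Run cleanup first so storage writes made during teardown (such as
	// clearing the leader advertisement) are not rejected with
	// "context canceled".
	err := cleanup(ctx)

	// Cancel only once the cleanup writes have finished.
	cancel()
	return err
}
```

With the order reversed (cancel first, then clean up), the final storage writes race against an already-cancelled context, which would be consistent with the "clearing leader advertisement failed" errors mentioned later in this thread.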
We updated to v0.10.4 but just noticed this happened again. It hasn't happened except for this one occurrence, so it seems to be much better than before. Here are the logs; let me know if there's anything else I can provide:
2018-08-02T20:22:57.459Z [INFO ] core: acquired lock, enabling active operation
2018-08-02T20:22:59.035Z [INFO ] core: post-unseal setup starting
2018-08-02T20:22:59.384Z [INFO ] core: loaded wrapping token key
2018-08-02T20:22:59.384Z [INFO ] core: successfully setup plugin catalog: plugin-directory=
2018-08-02T20:23:00.032Z [INFO ] core: successfully mounted backend: type=kv path=secret/
2018-08-02T20:23:00.034Z [INFO ] core: successfully mounted backend: type=system path=sys/
2018-08-02T20:23:00.037Z [INFO ] core: successfully mounted backend: type=identity path=identity/
2018-08-02T20:23:00.038Z [INFO ] core: successfully mounted backend: type=pki path=pki/
2018-08-02T20:23:00.040Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2018-08-02T20:23:01.698Z [INFO ] core: restoring leases
2018-08-02T20:23:01.699Z [INFO ] rollback: starting rollback manager
2018-08-02T20:23:02.834Z [INFO ] identity: entities restored
2018-08-02T20:23:02.973Z [INFO ] identity: groups restored
2018-08-02T20:23:03.173Z [INFO ] core: post-unseal setup complete
2018-08-02T20:23:03.173Z [WARN ] core: leadership lost, stopping active operation
2018-08-02T20:23:03.174Z [INFO ] core: core/startClusterListener: starting listener: listener_address=0.0.0.0:8201
2018-08-02T20:23:03.174Z [INFO ] core: core/startClusterListener: serving cluster requests: cluster_listen_address=[::]:8201
2018-08-02T20:23:03.182Z [INFO ] core: pre-seal teardown starting
2018-08-02T20:23:03.183Z [INFO ] core: stopping cluster listeners
2018-08-02T20:23:03.183Z [INFO ] core: shutting down forwarding rpc listeners
2018-08-02T20:23:03.184Z [INFO ] core: forwarding rpc listeners stopped
2018-08-02T20:23:03.185Z [WARN ] expiration: context cancled while restoring leases, stopping lease loading
2018-08-02T20:23:03.675Z [INFO ] core: rpc listeners successfully shut down
2018-08-02T20:23:03.675Z [INFO ] core: cluster listeners successfully shut down
2018-08-02T20:23:03.675Z [INFO ] rollback: stopping rollback manager
2018-08-02T20:23:03.982Z [INFO ] core: pre-seal teardown complete
2018-08-02T20:23:14.693Z [INFO ] core: acquired lock, enabling active operation
2018-08-02T20:23:16.137Z [INFO ] core: post-unseal setup starting
2018-08-02T20:23:16.346Z [INFO ] core: loaded wrapping token key
2018-08-02T20:23:16.347Z [INFO ] core: successfully setup plugin catalog: plugin-directory=
2018-08-02T20:23:16.755Z [INFO ] core: successfully mounted backend: type=kv path=secret/
2018-08-02T20:23:16.755Z [INFO ] core: successfully mounted backend: type=system path=sys/
2018-08-02T20:23:16.757Z [INFO ] core: successfully mounted backend: type=identity path=identity/
2018-08-02T20:23:16.757Z [INFO ] core: successfully mounted backend: type=pki path=pki/
2018-08-02T20:23:16.757Z [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2018-08-02T20:23:17.960Z [INFO ] core: restoring leases
2018-08-02T20:23:17.961Z [INFO ] rollback: starting rollback manager
2018-08-02T20:23:18.743Z [INFO ] identity: entities restored
2018-08-02T20:23:18.933Z [INFO ] identity: groups restored
2018-08-02T20:23:19.133Z [INFO ] core: post-unseal setup complete
2018-08-02T20:23:19.134Z [INFO ] core: core/startClusterListener: starting listener: listener_address=0.0.0.0:8201
2018-08-02T20:23:19.134Z [INFO ] core: core/startClusterListener: serving cluster requests: cluster_listen_address=[::]:8201
2018-08-02T20:23:19.813Z [WARN ] core: leadership lost, stopping active operation
2018-08-02T20:23:19.813Z [INFO ] core: pre-seal teardown starting
2018-08-02T20:23:19.813Z [INFO ] core: stopping cluster listeners
2018-08-02T20:23:19.813Z [INFO ] core: shutting down forwarding rpc listeners
2018-08-02T20:23:19.813Z [INFO ] core: forwarding rpc listeners stopped
2018-08-02T20:23:19.856Z [ERROR] expiration: error restoring leases: error="failed to read lease entry: failed to read value for "sys/expire/id/auth/approle/login/614c329cb7826a
2018-08-02T20:23:19.858Z [ERROR] core: shutting down
2018-08-02T20:23:20.134Z [INFO ] core: rpc listeners successfully shut down
2018-08-02T20:23:20.134Z [INFO ] core: cluster listeners successfully shut down
2018-08-02T20:23:20.134Z [INFO ] rollback: stopping rollback manager
2018-08-02T20:23:20.135Z [INFO ] core: pre-seal teardown complete
2018-08-02T20:23:20.481Z [INFO ] core: vault is sealed
I think you'll want to investigate why leadership is being lost directly after starting up. It might be an issue with your config (could you post it?), your network, or a bug in the storage backend.
I had assumed that it is failing just because Google Cloud Storage is flaky at times. Usually a server fails in each DC around the same time, another server picks up, and there aren't any issues after that. I'm fine with the occasional error or leadership change, except that it causes Vault to seal itself, so we could eventually end up with all of our servers sealed.
Hi @fastest963, that's a Consul config, not a Vault config. Are you using two storage backends for Vault? Neither Google Cloud Storage nor the Vault storage backend for GCS should be flaky, ever. There are quite a few people (myself included) who have been using it to run Vault for weeks without leadership transitions. Can you please share your Vault config?
Whoops, I was in the wrong mindset when getting that. Here's the Vault config:
@fastest963 Thanks for that. Can you describe your deployment topology? Are all of the Vault servers directly accessible? Are they behind a single load balancer?
Hmm okay - I think there are a few things here (and maybe @briankassouf can chime in with more details), but generally you want the nodes to be either accessed directly or accessed through a load balancer (see the HA docs). It sounds like you're running Vault OSS. Remember that only one Vault server is ever servicing requests at a time. You have a global topology, so even if a user talks to the Vault server in their local datacenter, it's going to forward that request to the Vault server that is currently the leader. Also, this log appears truncated:
Can you find the rest of that line? That will tell us what the actual error is.
@sethvargo Could clock skew between the servers cause this?
@briankassouf The clock skew would have to be on the order of seconds (at least 5s).
@fastest963 Would you mind checking if there is a large amount of clock skew between your Vault servers? Particularly the one automatically taking over leadership on startup?
@sethvargo here's that full line:
@briankassouf We have a service that runs on all of our boxes and tracks clock offsets between servers; our highest offset in the last 7 days was <100 milliseconds.
Interesting, so it's the same context cancelation error as before. It looks like we check for "retErr contains context canceled", but for some reason that isn't catching this case. I'm not sure of errwrap's behavior here; see lines 308 to 323 in bb180b8.
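For reference, that check is built on github.com/hashicorp/errwrap. Here is a minimal sketch of how such a check behaves (not the exact Vault code); one plausible way it can miss is when an error has been flattened into a plain string (e.g. via fmt.Errorf with %v), leaving nothing for errwrap to walk:

```go
package sketch

import (
	"context"
	"fmt"

	"github.com/hashicorp/errwrap"
)

// isContextCanceled mirrors the kind of check discussed above: it reports
// whether err "contains" context.Canceled according to errwrap.
func isContextCanceled(err error) bool {
	return errwrap.Contains(err, context.Canceled.Error())
}

func example() {
	// Matches: the message is exactly "context canceled".
	_ = isContextCanceled(context.Canceled) // true

	// Matches: errwrap keeps the inner error reachable via its {{err}} chain.
	wrapped := errwrap.Wrapf("failed to read lease entry: {{err}}", context.Canceled)
	_ = isContextCanceled(wrapped) // true

	// Does not match: fmt.Errorf with %v flattens the error into a plain
	// string, so errwrap can no longer walk down to context.Canceled.
	flattened := fmt.Errorf("failed to read lease entry: %v", context.Canceled)
	_ = isContextCanceled(flattened) // false
}
```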
Even so, if the context is being canceled, I think it means that this server lost leadership while trying to restore leases (as in, it was just elected leader). I think the relevant code is at line 452 in bb180b8.
/cc @briankassouf
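As background for the "context cancled while restoring leases" warning in the logs above, a lease-restore loop of roughly this shape (purely illustrative; every name here is hypothetical) stops as soon as its context is cancelled, which is how losing leadership mid-restore turns into an aborted restore:

```go
package sketch

import "context"

// restoreLeases sketches a restore loop that aborts when ctx is cancelled,
// for example because leadership was lost partway through. Names are made up.
func restoreLeases(ctx context.Context, leaseIDs []string, load func(context.Context, string) error) error {
	for _, id := range leaseIDs {
		select {
		case <-ctx.Done():
			// Leadership was lost (or shutdown began): stop loading leases.
			return ctx.Err()
		default:
		}
		if err := load(ctx, id); err != nil {
			return err
		}
	}
	return nil
}
```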
Yeah, definitely two different issues; I'm not sure why that errwrap check isn't working. I created #5046 to track that bug. Regardless of that issue, we should still figure out why leadership is being lost right after startup. @fastest963 Is the same issue occurring on all your nodes?
@briankassouf It looks like the "error restoring leases" error has only happened to one server once in the past 7 days, but the "clearing leader advertisement failed" error has happened 3 times across 3 different servers. All 4 of those times the error has been "context canceled". Let me know what info I can provide to help.
@fastest963 Can you just double-confirm that all the servers for which you're getting those logs were running Vault 0.10.4+ at the time of that message? We updated the call to use the code now at line 804 in f7150ae.
That is passed directly through to the GCS HA backend, which just uses the given context directly (lines 228 to 242 in f7150ae).
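To illustrate what "uses the given context directly" means in practice, here is a simplified unlock-style call against cloud.google.com/go/storage (a sketch only, not the actual backend code; releaseLock and its parameters are made up): if the caller cancels ctx while this is in flight, the delete fails with a "context canceled" error instead of completing.

```go
package sketch

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

// releaseLock sketches an unlock-style operation that deletes the lock object.
// The ctx handed in by the caller is used directly, so cancelling it (for
// example while stepping down) aborts the delete with "context canceled".
func releaseLock(ctx context.Context, client *storage.Client, bucket, key string) error {
	if err := client.Bucket(bucket).Object(key).Delete(ctx); err != nil {
		return fmt.Errorf("failed to delete lock object: %v", err)
	}
	return nil
}
```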
@sethvargo Good catch. I upgraded all instances to v0.10.4 on 7/31 at 8:20pm EST, and the errors happened between 2pm and 3pm EST on 7/31. Apologies for that. I filtered to
@fastest963 Okay, phew. I was worried there for a second. @briankassouf It looks like we just need #5046 then.
When running with Google Cloud Storage as the backend, instances routinely (once a day so far) run into some sort of "context cancelled" error and then seal themselves. This means we have to manually unseal a box a day or risk running out of unsealed instances. I'm not sure if the timeouts can be relaxed to prevent the errors in the first place, but at the very least the instance shouldn't seal itself, right?
Environment:
Vault Config File:
Startup Log Output:
Expected Behavior:
I expect context cancellation to be ignored/retried/prevented, but if there is an error, the Vault should stay unsealed.
Actual Behavior:
Whenever there's an error, the Vault is sealed, rendering the instance useless.
Steps to Reproduce:
Run a Vault instance with Google Cloud Storage as the storage backend for multiple days. Unfortunately, I don't have any better steps; we aren't really using Vault for anything besides bootstrapping yet, so the instances are just sitting there most of the time.
Important Factoids:
It seems more likely to happen on instances running in Europe; the GCS bucket is located in North America.