cluster not starting up after recent upgrade #22638

Closed
arctica opened this issue Feb 13, 2018 · 1 comment

arctica commented Feb 13, 2018

I tried to do a rolling upgrade to the latest master build, but my nodes stopped joining each other. The console output includes:

* WARNING: The server appears to be unable to contact the other nodes in the cluster. Please try

None of the command-line parameters have changed, and the nodes were shut down gracefully before the binary was upgraded.
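
For context, the rolling-upgrade step on each node was roughly the following (a minimal sketch: the --insecure flag and the binary path are assumptions on my part; the store directory and addresses are the ones that show up in the logs below):

# drain and stop the local node gracefully (talks to localhost:26257 by default)
cockroach quit --insecure
# swap in the new master build (destination path is a placeholder)
cp ./cockroach /usr/local/bin/cockroach
# restart with exactly the same flags as before the upgrade
cockroach start --insecure --background \
  --store=/root/cockroach-data \
  --host=master.example.com \
  --join=node1.example.com:26257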

The log file didn't show anything useful. Starting the binary with --logtostderr revealed the following:

I180213 16:29:31.443300 1 cli/start.go:890  CockroachDB CCL v2.0-alpha.20180212-396-g2f73ab2 (linux amd64, built 2018/02/13 14:33:10, go1.9.3)
I180213 16:29:31.544552 1 server/config.go:313  available memory from cgroups (8.0 EiB) exceeds system memory 63 GiB, using system memory
I180213 16:29:31.544596 1 server/config.go:411  system total memory: 63 GiB
I180213 16:29:31.544676 1 server/config.go:413  server configuration:
max offset             500000000
cache size             13 GiB
SQL memory pool size   9.3 GiB
scan interval          10m0s
scan max idle time     200ms
event log enabled      true
I180213 16:29:31.544739 1 cli/start.go:772  process identity: uid 0 euid 0 gid 0 egid 0
I180213 16:29:31.544770 1 cli/start.go:459  starting cockroach node
I180213 16:29:31.546624 54 storage/engine/rocksdb.go:541  opening rocksdb instance at "/root/cockroach-data/cockroach-temp455763134"
I180213 16:29:31.598852 54 storage/engine/rocksdb.go:541  opening rocksdb instance at "/root/cockroach-data"
I180213 16:29:31.631352 54 server/config.go:519  [n?] 1 storage engine initialized
I180213 16:29:31.631393 54 server/config.go:522  [n?] RocksDB cache size: 13 GiB
I180213 16:29:31.631423 54 server/config.go:522  [n?] store 0: RocksDB, max size 0 B, max open file limit 10000
W180213 16:29:31.638229 54 gossip/gossip.go:1286  [n?] no incoming or outgoing connections
I180213 16:29:31.638327 54 server/server.go:973  [n?] sleeping for 460.312388ms to guarantee HLC monotonicity
I180213 16:29:31.641377 101 gossip/client.go:129  [n?] started gossip client to node1.example.com:26257
I180213 16:29:32.295785 54 storage/store.go:1307  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I180213 16:29:32.296754 54 server/node.go:504  [n1] initialized store [n1,s1]: disk (capacity=437 GiB, available=246 GiB, used=9.1 GiB, logicalBytes=33 GiB), ranges=571, leases=0, writes=0.00, bytesPerReplica={p10=0.00 p25=2061316.00 p50=20090524.00 p75=39993326.00 p90=254211925.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00}
I180213 16:29:32.296781 54 server/node.go:352  [n1] node ID 1 initialized
I180213 16:29:32.296874 54 gossip/gossip.go:332  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"master.example.com:26257" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:12 >
I180213 16:29:32.296949 54 storage/stores.go:331  [n1] read 2 node addresses from persistent storage
E180213 16:29:32.296985 476 storage/replica_range_lease.go:256  [n1,s1,r7/1:/Table/{SystemCon…-11}] heartbeat failed on epoch increment
I180213 16:29:32.297192 54 server/node.go:645  [n1] connecting to gossip network to verify cluster ID...
I180213 16:29:32.297217 54 server/node.go:670  [n1] node connected via gossip and verified as part of cluster "..."
I180213 16:29:32.297298 54 server/node.go:446  [n1] node=1: started with [<no-attributes>=/root/cockroach-data] engine(s) and attributes []
I180213 16:29:32.297453 54 server/server.go:1191  [n1] starting http server at 0.0.0.0:8080
I180213 16:29:32.297473 54 server/server.go:1192  [n1] starting grpc/postgres server at 0.0.0.0:26257
I180213 16:29:32.297496 54 server/server.go:1193  [n1] advertising CockroachDB node at master.example.com:26257
E180213 16:29:32.299654 586 storage/replica_range_lease.go:266  [n1,s1,r366/1:/System/tsd/cr.node.distsende…] not incrementing epoch on n3 because next leaseholder (n1) not live (err = <nil>)
E180213 16:29:32.299655 597 storage/replica_range_lease.go:266  [n1,s1,r485/1:/System/tsd/cr.node.{cl…-di…}] not incrementing epoch on n3 because next leaseholder (n1) not live (err = <nil>)
E180213 16:29:32.300221 591 storage/replica_range_lease.go:266  [n1,s1,r465/1:/System/tsd/cr.node.exec.late…] not incrementing epoch on n3 because next leaseholder (n1) not live (err = <nil>)
E180213 16:29:32.300361 396 storage/replica_range_lease.go:266  [n1,s1,r245/1:/System/tsd/cr.node.exec.…] not incrementing epoch on n3 because next leaseholder (n1) not live (err = <nil>)
E180213 16:29:32.300930 652 storage/replica_range_lease.go:266  [n1,s1,r551/1:/System/tsd/cr.node.gossip.…] not incrementing epoch on n3 because next leaseholder (n1) not live (err = <nil>)
E180213 16:29:32.300948 692 storage/replica_range_lease.go:266  [n1,s1,r428/1:/System/tsd/cr.node.gossip.in…] not incrementing epoch on n3 because next leaseholder (n1) not live (err = <nil>)
... tons and tons more ...
E180213 16:29:32.326884 1870 storage/replica_range_lease.go:266  [n1,s1,r50/1:/System/ts{d/cr.st…-e}] not incrementing epoch on n3 because next leaseholder (n1) not live (err = <nil>)
I180213 16:29:32.724235 3366 storage/raft_transport.go:459  [n1] raft transport stream to node 3 established
I180213 16:29:32.803546 643 storage/node_liveness.go:409  [n1,hb] heartbeat failed on epoch increment; retrying
E180213 16:29:32.804224 584 storage/replica_range_lease.go:256  [n1,s1,r481/1:/System/tsd/cr.node.gossip.by…] heartbeat failed on epoch increment
E180213 16:29:32.805012 647 storage/replica_range_lease.go:256  [n1,s1,r4/1:/System/{NodeLive…-tsd}] heartbeat failed on epoch increment
E180213 16:29:32.805949 616 storage/replica_range_lease.go:256  [n1,s1,r5/1:/System/tsd{-/cr.nod…}] heartbeat failed on epoch increment
... above line repeated thousands of times on and on ...

After excluding the repeated entries above via grep, I was also able to see these entries:

I180213 16:35:54.828445 15278 storage/replica_raftstorage.go:725  [n1,s1,r3/1:/System/NodeLiveness{-Max}] applying Raft snapshot at index 3813941 (id=b6250cbb, encoded size=38320, 1 rocksdb batches, 51 log entries)
W180213 16:35:54.837360 15278 storage/replica_raftstorage.go:618  [n1,s1,r3/1:/System/NodeLiveness{-Max}] no system config available, cannot determine range MaxBytes
I180213 16:35:54.837412 15278 storage/replica_raftstorage.go:731  [n1,s1,r3/1:/System/NodeLiveness{-Max}] applied Raft snapshot in 9ms [clear=2ms batch=0ms entries=0ms commit=6ms]
...
W180213 16:36:03.009496 296 server/node.go:753  [n1] [n1,s1]: unable to compute metrics: [n1,s1]: system config not yet available
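
For reference, the filter was along these lines (the log file name and the exact patterns are illustrative, not the literal ones used):

# drop the two messages that repeat thousands of times
grep -vE 'not incrementing epoch|heartbeat failed on epoch increment' node1.stderr.log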

Before the upgrade, the nodes were all running the following build:

build: CCL v2.0-alpha.20180129-502-gf45bcea @ 2018/02/08 06:54:54 (go1.9.3)
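
For comparison, the build a node is running can be printed with the version subcommand (whether its output format matches the line above exactly, I haven't checked):

# print the build details of the installed binary
cockroach version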

tbg (Member) commented Feb 13, 2018

#22636. We landed a version-incompatible change; @nvanbenschoten is reverting it, so this should work again soon. I'll close this issue, but please feel free to comment on the other one (or reopen if you think you're experiencing something else)!

tbg closed this as completed Feb 13, 2018