teamcity: failed tests on master: testrace/TestBackupAsOfSystemTime #15036

Closed
cockroach-teamcity opened this issue Apr 18, 2017 · 2 comments
Labels
C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.

@cockroach-teamcity

The following tests appear to have failed:

#227297:

--- FAIL: testrace/TestBackupAsOfSystemTime (0.000s)
Test ended in panic.

------- Stdout: -------
W170418 15:11:49.834645 37520 server/status/runtime.go:111  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170418 15:11:49.838711 37520 server/config.go:403  1 storage engine initialized
I170418 15:11:49.839904 37520 server/node.go:444  [n?] store [n0,s0] not bootstrapped
I170418 15:11:49.891456 37520 server/node.go:379  [n?] **** cluster 78c37b47-e6eb-4912-bf41-c7f0ca0630d2 has been created
I170418 15:11:49.891593 37520 server/node.go:380  [n?] **** add additional nodes by specifying --join=127.0.0.1:44701
I170418 15:11:49.902329 37520 storage/store.go:1256  [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I170418 15:11:49.917238 37520 server/node.go:457  [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:1 LeaseCount:1}
I170418 15:11:49.917475 37520 server/node.go:348  [n1] node ID 1 initialized
I170418 15:11:49.917690 37520 gossip/gossip.go:307  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:44701" > attrs:<> locality:<> 
I170418 15:11:49.919476 37908 storage/split_queue.go:92  [split,n1,s1,r1/1:/M{in-ax},@c423d1e600] splitting at key /System/""
I170418 15:11:49.922183 37520 storage/stores.go:296  [n1] read 0 node addresses from persistent storage
I170418 15:11:49.922914 37520 server/node.go:607  [n1] connecting to gossip network to verify cluster ID...
I170418 15:11:49.923071 37520 server/node.go:632  [n1] node connected via gossip and verified as part of cluster "78c37b47-e6eb-4912-bf41-c7f0ca0630d2"
I170418 15:11:49.926791 37520 server/node.go:395  [n1] node=1: started with [[]=] engine(s) and attributes []
I170418 15:11:49.928400 37908 storage/replica_command.go:2626  [split,n1,s1,r1/1:/M{in-ax},@c423d1e600] initiating a split of this range at key /System/"" [r2]
I170418 15:11:49.934137 37520 sql/executor.go:326  [n1] creating distSQLPlanner with address {tcp 127.0.0.1:44701}
I170418 15:11:49.948640 37520 server/server.go:701  [n1] starting https server at 127.0.0.1:60772
I170418 15:11:49.948742 37520 server/server.go:702  [n1] starting grpc/postgres server at 127.0.0.1:44701
I170418 15:11:49.948778 37520 server/server.go:703  [n1] advertising CockroachDB node at 127.0.0.1:44701
I170418 15:11:50.048760 37908 storage/split_queue.go:92  [split,n1,s1,r2/1:/{System/-Max},@c4205bcc00] splitting at key /System/tsd
I170418 15:11:50.049017 37908 storage/replica_command.go:2626  [split,n1,s1,r2/1:/{System/-Max},@c4205bcc00] initiating a split of this range at key /System/tsd [r3]
E170418 15:11:50.111809 37927 storage/replica_proposal.go:457  [n1,s1,r2/1:/{System/-Max},@c4205bcc00] could not load SystemConfig span: must retry later due to intent on SystemConfigSpan
I170418 15:11:50.119150 37908 storage/split_queue.go:92  [split,n1,s1,r3/1:/{System/tsd-Max},@c42284c600] splitting at key /System/"tse"
I170418 15:11:50.119534 37908 storage/replica_command.go:2626  [split,n1,s1,r3/1:/{System/tsd-Max},@c42284c600] initiating a split of this range at key /System/"tse" [r4]
E170418 15:11:50.180376 37939 storage/replica_proposal.go:457  [n1,s1,r3/1:/{System/tsd-Max},@c42284c600] could not load SystemConfig span: must retry later due to intent on SystemConfigSpan
I170418 15:11:50.204696 37908 storage/split_queue.go:92  [split,n1,s1,r4/1:/{System/tse-Max},@c421858900] splitting at key /Table/0/0
I170418 15:11:50.205519 37908 storage/replica_command.go:2626  [split,n1,s1,r4/1:/{System/tse-Max},@c421858900] initiating a split of this range at key /Table/0 [r5]
I170418 15:11:50.213350 37520 sql/event_log.go:101  [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN uniqueID SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]}
I170418 15:11:50.274508 37520 sql/lease.go:372  [n1] publish: descID=12 (eventlog) version=2 mtime=2017-04-18 15:11:50.274401123 +0000 UTC
I170418 15:11:50.292240 37908 storage/split_queue.go:92  [split,n1,s1,r5/1:/{Table/0-Max},@c422380600] splitting at key /Table/11/0
I170418 15:11:50.292451 37908 storage/replica_command.go:2626  [split,n1,s1,r5/1:/{Table/0-Max},@c422380600] initiating a split of this range at key /Table/11 [r6]
I170418 15:11:50.372162 37908 storage/split_queue.go:92  [split,n1,s1,r6/1:/{Table/11-Max},@c423dc9800] splitting at key /Table/12/0
I170418 15:11:50.372536 37908 storage/replica_command.go:2626  [split,n1,s1,r6/1:/{Table/11-Max},@c423dc9800] initiating a split of this range at key /Table/12 [r7]
I170418 15:11:50.413257 37520 server/server.go:811  [n1] done ensuring all necessary migrations have run
I170418 15:11:50.413398 37520 server/server.go:813  [n1] serving sql connections
I170418 15:11:50.437383 38230 sql/event_log.go:101  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:44701} Attrs: Locality:} ClusterID:78c37b47-e6eb-4912-bf41-c7f0ca0630d2 StartedAt:1492528309923170872 LastUp:1492528309923170872}
I170418 15:11:50.506683 37908 storage/split_queue.go:92  [split,n1,s1,r7/1:/{Table/12-Max},@c421e98c00] splitting at key /Table/13/0
I170418 15:11:50.506908 37908 storage/replica_command.go:2626  [split,n1,s1,r7/1:/{Table/12-Max},@c421e98c00] initiating a split of this range at key /Table/13 [r8]
I170418 15:11:50.629343 37908 storage/split_queue.go:92  [split,n1,s1,r8/1:/{Table/13-Max},@c4205bcf00] splitting at key /Table/14/0
I170418 15:11:50.629527 37908 storage/replica_command.go:2626  [split,n1,s1,r8/1:/{Table/13-Max},@c4205bcf00] initiating a split of this range at key /Table/14 [r9]
I170418 15:11:50.734059 37908 storage/split_queue.go:92  [split,n1,s1,r9/1:/{Table/14-Max},@c422418000] splitting at key /Table/15/0
I170418 15:11:50.734217 37908 storage/replica_command.go:2626  [split,n1,s1,r9/1:/{Table/14-Max},@c422418000] initiating a split of this range at key /Table/15 [r10]
W170418 15:11:51.036108 37520 gossip/gossip.go:1192  [n?] no incoming or outgoing connections
W170418 15:11:51.041881 37520 server/status/runtime.go:111  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170418 15:11:51.044813 37520 server/config.go:403  1 storage engine initialized
I170418 15:11:51.046322 37520 server/node.go:444  [n?] store [n0,s0] not bootstrapped
I170418 15:11:51.046450 37520 storage/stores.go:296  [n?] read 0 node addresses from persistent storage
I170418 15:11:51.046546 37520 server/node.go:607  [n?] connecting to gossip network to verify cluster ID...
I170418 15:11:51.145285 38388 gossip/client.go:131  [n?] started gossip client to 127.0.0.1:44701
I170418 15:11:51.147216 38396 gossip/server.go:234  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:33922}
I170418 15:11:51.150019 37520 server/node.go:632  [n?] node connected via gossip and verified as part of cluster "78c37b47-e6eb-4912-bf41-c7f0ca0630d2"
I170418 15:11:51.150815 38398 storage/stores.go:312  [n?] wrote 1 node addresses to persistent storage
I170418 15:11:51.158274 37520 kv/dist_sender.go:370  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I170418 15:11:51.162560 37520 server/node.go:341  [n?] new node allocated ID 2
I170418 15:11:51.162840 37520 gossip/gossip.go:307  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:33922" > attrs:<> locality:<> 
I170418 15:11:51.163579 37520 server/node.go:395  [n2] node=2: started with [[]=] engine(s) and attributes []
I170418 15:11:51.165277 37520 sql/executor.go:326  [n2] creating distSQLPlanner with address {tcp 127.0.0.1:33922}
I170418 15:11:51.170822 38295 storage/stores.go:312  [n1] wrote 1 node addresses to persistent storage
I170418 15:11:51.250866 37520 server/server.go:701  [n2] starting https server at 127.0.0.1:54003
I170418 15:11:51.250997 37520 server/server.go:702  [n2] starting grpc/postgres server at 127.0.0.1:33922
I170418 15:11:51.251068 37520 server/server.go:703  [n2] advertising CockroachDB node at 127.0.0.1:33922
I170418 15:11:51.271082 38353 server/node.go:588  [n2] bootstrapped store [n2,s2]
I170418 15:11:51.280497 37520 server/server.go:811  [n2] done ensuring all necessary migrations have run
I170418 15:11:51.280651 37520 server/server.go:813  [n2] serving sql connections
I170418 15:11:51.282979 37993 storage/replica_raftstorage.go:415  [replicate,n1,s1,r2/1:/System/{-tsd},@c4205bcc00] generated preemptive snapshot 2ee7979d at index 30
W170418 15:11:51.368992 37520 gossip/gossip.go:1192  [n?] no incoming or outgoing connections
I170418 15:11:51.371027 38298 sql/event_log.go:101  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:33922} Attrs: Locality:} ClusterID:78c37b47-e6eb-4912-bf41-c7f0ca0630d2 StartedAt:1492528311163347150 LastUp:1492528311163347150}
W170418 15:11:51.384883 37520 server/status/runtime.go:111  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170418 15:11:51.405263 37520 server/config.go:403  1 storage engine initialized
I170418 15:11:51.406473 37520 server/node.go:444  [n?] store [n0,s0] not bootstrapped
I170418 15:11:51.406568 37520 storage/stores.go:296  [n?] read 0 node addresses from persistent storage
I170418 15:11:51.406677 37520 server/node.go:607  [n?] connecting to gossip network to verify cluster ID...
I170418 15:11:51.430741 37993 storage/store.go:3332  [replicate,n1,s1,r2/1:/System/{-tsd},@c4205bcc00] streamed snapshot: kv pairs: 26, log entries: 20, 15ms
I170418 15:11:51.432744 38662 storage/replica_raftstorage.go:595  [n2,s2,r2/?:{-},@c420df0000] applying preemptive snapshot at index 30 (id=2ee7979d, encoded size=79937, 1 rocksdb batches, 20 log entries)
I170418 15:11:51.436298 38662 storage/replica_raftstorage.go:603  [n2,s2,r2/?:/System/{-tsd},@c420df0000] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I170418 15:11:51.440869 37993 storage/replica_command.go:3521  [replicate,n1,s1,r2/1:/System/{-tsd},@c4205bcc00] change replicas (ADD_REPLICA {2 2 2}): read existing descriptor r2:/System/{-tsd} [r1(n1,s1), next=2]
I170418 15:11:51.461209 38655 storage/replica.go:2575  [n1,s1,r2/1:/System/{-tsd},@c4205bcc00] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:51.466837 37993 storage/replica_raftstorage.go:415  [replicate,n1,s1,r10/1:/{Table/15-Max},@c424259800] generated preemptive snapshot c9cdceea at index 11
I170418 15:11:51.476304 37993 storage/store.go:3332  [replicate,n1,s1,r10/1:/{Table/15-Max},@c424259800] streamed snapshot: kv pairs: 9, log entries: 1, 9ms
I170418 15:11:51.476980 38707 storage/raft_transport.go:436  [n2] raft transport stream to node 1 established
I170418 15:11:51.477771 38587 storage/replica_raftstorage.go:595  [n2,s2,r10/?:{-},@c420df1800] applying preemptive snapshot at index 11 (id=c9cdceea, encoded size=548, 1 rocksdb batches, 1 log entries)
I170418 15:11:51.478876 38587 storage/replica_raftstorage.go:603  [n2,s2,r10/?:/{Table/15-Max},@c420df1800] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170418 15:11:51.485060 37993 storage/replica_command.go:3521  [replicate,n1,s1,r10/1:/{Table/15-Max},@c424259800] change replicas (ADD_REPLICA {2 2 2}): read existing descriptor r10:/{Table/15-Max} [r1(n1,s1), next=2]
I170418 15:11:51.511020 38738 storage/replica.go:2575  [n1,s1,r10/1:/{Table/15-Max},@c424259800] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:51.515257 37993 storage/replica_raftstorage.go:415  [replicate,n1,s1,r3/1:/System/ts{d-e},@c42284c600] generated preemptive snapshot b6d236e8 at index 21
I170418 15:11:51.519423 38659 gossip/client.go:131  [n?] started gossip client to 127.0.0.1:44701
I170418 15:11:51.520972 38686 gossip/server.go:234  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:60253}
I170418 15:11:51.524122 38757 storage/stores.go:312  [n?] wrote 1 node addresses to persistent storage
I170418 15:11:51.524819 38757 storage/stores.go:312  [n?] wrote 2 node addresses to persistent storage
I170418 15:11:51.526520 37520 server/node.go:632  [n?] node connected via gossip and verified as part of cluster "78c37b47-e6eb-4912-bf41-c7f0ca0630d2"
I170418 15:11:51.534241 37993 storage/store.go:3332  [replicate,n1,s1,r3/1:/System/ts{d-e},@c42284c600] streamed snapshot: kv pairs: 582, log entries: 11, 18ms
I170418 15:11:51.540515 38683 storage/replica_raftstorage.go:595  [n2,s2,r3/?:{-},@c423d1f200] applying preemptive snapshot at index 21 (id=b6d236e8, encoded size=90537, 1 rocksdb batches, 11 log entries)
I170418 15:11:51.544088 37520 kv/dist_sender.go:370  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I170418 15:11:51.552646 37520 server/node.go:341  [n?] new node allocated ID 3
I170418 15:11:51.552941 37520 gossip/gossip.go:307  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:60253" > attrs:<> locality:<> 
I170418 15:11:51.553708 37520 server/node.go:395  [n3] node=3: started with [[]=] engine(s) and attributes []
I170418 15:11:51.556046 38629 storage/stores.go:312  [n1] wrote 2 node addresses to persistent storage
I170418 15:11:51.556664 37520 sql/executor.go:326  [n3] creating distSQLPlanner with address {tcp 127.0.0.1:60253}
I170418 15:11:51.560965 38805 storage/stores.go:312  [n2] wrote 2 node addresses to persistent storage
I170418 15:11:51.575335 38683 storage/replica_raftstorage.go:603  [n2,s2,r3/?:/System/ts{d-e},@c423d1f200] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=1ms commit=3ms]
I170418 15:11:51.581131 37520 server/server.go:701  [n3] starting https server at 127.0.0.1:57647
I170418 15:11:51.581294 37520 server/server.go:702  [n3] starting grpc/postgres server at 127.0.0.1:60253
I170418 15:11:51.581353 37520 server/server.go:703  [n3] advertising CockroachDB node at 127.0.0.1:60253
I170418 15:11:51.588417 37993 storage/replica_command.go:3521  [replicate,n1,s1,r3/1:/System/ts{d-e},@c42284c600] change replicas (ADD_REPLICA {2 2 2}): read existing descriptor r3:/System/ts{d-e} [r1(n1,s1), next=2]
I170418 15:11:51.593648 37520 server/server.go:811  [n3] done ensuring all necessary migrations have run
I170418 15:11:51.593804 37520 server/server.go:813  [n3] serving sql connections
I170418 15:11:51.701851 38733 server/node.go:588  [n3] bootstrapped store [n3,s3]
I170418 15:11:51.797192 38851 sql/event_log.go:101  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:60253} Attrs: Locality:} ClusterID:78c37b47-e6eb-4912-bf41-c7f0ca0630d2 StartedAt:1492528311553453506 LastUp:1492528311553453506}
I170418 15:11:51.817214 37993 storage/replica_command.go:3521  [replicate,n1,s1,r3/1:/System/ts{d-e},@c42284c600] change replicas (ADD_REPLICA {2 2 2}): read existing descriptor r3:/System/ts{d-e} [r1(n1,s1), next=2]
I170418 15:11:51.858015 38858 storage/replica.go:2575  [n1,s1,r3/1:/System/ts{d-e},@c42284c600] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:51.862484 37993 storage/replica_raftstorage.go:415  [replicate,n1,s1,r8/1:/Table/1{3-4},@c4205bcf00] generated preemptive snapshot ca7d9ae9 at index 30
I170418 15:11:51.943162 37993 storage/store.go:3332  [replicate,n1,s1,r8/1:/Table/1{3-4},@c4205bcf00] streamed snapshot: kv pairs: 94, log entries: 20, 11ms
I170418 15:11:51.945034 38978 storage/replica_raftstorage.go:595  [n3,s3,r8/?:{-},@c4219e2600] applying preemptive snapshot at index 30 (id=ca7d9ae9, encoded size=28360, 1 rocksdb batches, 20 log entries)
I170418 15:11:51.946932 38978 storage/replica_raftstorage.go:603  [n3,s3,r8/?:/Table/1{3-4},@c4219e2600] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I170418 15:11:51.950856 37993 storage/replica_command.go:3521  [replicate,n1,s1,r8/1:/Table/1{3-4},@c4205bcf00] change replicas (ADD_REPLICA {3 3 2}): read existing descriptor r8:/Table/1{3-4} [r1(n1,s1), next=2]
I170418 15:11:51.975848 38957 storage/replica.go:2575  [n1,s1,r8/1:/Table/1{3-4},@c4205bcf00] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170418 15:11:51.982040 37909 storage/replica_raftstorage.go:415  [replicate,n1,s1,r9/1:/Table/1{4-5},@c422418000] generated preemptive snapshot 1abfad7a at index 21
I170418 15:11:51.986551 37909 storage/store.go:3332  [replicate,n1,s1,r9/1:/Table/1{4-5},@c422418000] streamed snapshot: kv pairs: 11, log entries: 11, 3ms
I170418 15:11:51.987892 38967 storage/replica_raftstorage.go:595  [n3,s3,r9/?:{-},@c422418300] applying preemptive snapshot at index 21 (id=1abfad7a, encoded size=6868, 1 rocksdb batches, 11 log entries)
I170418 15:11:51.989444 38967 storage/replica_raftstorage.go:603  [n3,s3,r9/?:/Table/1{4-5},@c422418300] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170418 15:11:51.992748 38961 storage/raft_transport.go:436  [n3] raft transport stream to node 1 established
I170418 15:11:51.993568 37909 storage/replica_command.go:3521  [replicate,n1,s1,r9/1:/Table/1{4-5},@c422418000] change replicas (ADD_REPLICA {3 3 2}): read existing descriptor r9:/Table/1{4-5} [r1(n1,s1), next=2]
I170418 15:11:52.043625 39044 storage/replica.go:2575  [n1,s1,r9/1:/Table/1{4-5},@c422418000] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170418 15:11:52.052246 37993 storage/replica_raftstorage.go:415  [replicate,n1,s1,r1/1:/{Min-System/},@c423d1e600] generated preemptive snapshot 47d01450 at index 60
I170418 15:11:52.060930 39059 storage/replica_raftstorage.go:595  [n3,s3,r1/?:{-},@c421ca5b00] applying preemptive snapshot at index 60 (id=47d01450, encoded size=28807, 1 rocksdb batches, 50 log entries)
I170418 15:11:52.063399 39059 storage/replica_raftstorage.go:603  [n3,s3,r1/?:/{Min-System/},@c421ca5b00] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I170418 15:11:52.064559 37993 storage/store.go:3332  [replicate,n1,s1,r1/1:/{Min-System/},@c423d1e600] streamed snapshot: kv pairs: 36, log entries: 50, 9ms
I170418 15:11:52.081042 37993 storage/replica_command.go:3521  [replicate,n1,s1,r1/1:/{Min-System/},@c423d1e600] change replicas (ADD_REPLICA {3 3 2}): read existing descriptor r1:/{Min-System/} [r1(n1,s1), next=2]
I170418 15:11:52.133877 39035 storage/replica.go:2575  [n1,s1,r1/1:/{Min-System/},@c423d1e600] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170418 15:11:52.144419 37993 storage/replica_raftstorage.go:415  [replicate,n1,s1,r6/1:/Table/1{1-2},@c423dc9800] generated preemptive snapshot d98f0a17 at index 25
I170418 15:11:52.153321 37993 storage/store.go:3332  [replicate,n1,s1,r6/1:/Table/1{1-2},@c423dc9800] streamed snapshot: kv pairs: 11, log entries: 15, 8ms
I170418 15:11:52.156037 38973 storage/replica_raftstorage.go:595  [n2,s2,r6/?:{-},@c4200c5800] applying preemptive snapshot at index 25 (id=d98f0a17, encoded size=12942, 1 rocksdb batches, 15 log entries)
I170418 15:11:52.157757 38973 storage/replica_raftstorage.go:603  [n2,s2,r6/?:/Table/1{1-2},@c4200c5800] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I170418 15:11:52.168687 37993 storage/replica_command.go:3521  [replicate,n1,s1,r6/1:/Table/1{1-2},@c423dc9800] change replicas (ADD_REPLICA {2 2 2}): read existing descriptor r6:/Table/1{1-2} [r1(n1,s1), next=2]
I170418 15:11:52.217665 38977 storage/replica.go:2575  [n1,s1,r6/1:/Table/1{1-2},@c423dc9800] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:52.222177 37993 storage/replica_raftstorage.go:415  [replicate,n1,s1,r9/1:/Table/1{4-5},@c422418000] generated preemptive snapshot 99b6c3ab at index 24
I170418 15:11:52.225619 37993 storage/store.go:3332  [replicate,n1,s1,r9/1:/Table/1{4-5},@c422418000] streamed snapshot: kv pairs: 12, log entries: 14, 3ms
I170418 15:11:52.232903 39106 storage/replica_raftstorage.go:595  [n2,s2,r9/?:{-},@c4249d2c00] applying preemptive snapshot at index 24 (id=99b6c3ab, encoded size=8941, 1 rocksdb batches, 14 log entries)
I170418 15:11:52.234351 39106 storage/replica_raftstorage.go:603  [n2,s2,r9/?:/Table/1{4-5},@c4249d2c00] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170418 15:11:52.239032 37993 storage/replica_command.go:3521  [replicate,n1,s1,r9/1:/Table/1{4-5},@c422418000] change replicas (ADD_REPLICA {2 2 3}): read existing descriptor r9:/Table/1{4-5} [r1(n1,s1), r2(n3,s3), next=3]
I170418 15:11:52.273372 39138 storage/replica.go:2575  [n1,s1,r9/1:/Table/1{4-5},@c422418000] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I170418 15:11:52.286879 37993 storage/replica_raftstorage.go:415  [replicate,n1,s1,r5/1:/Table/{0-11},@c422380600] generated preemptive snapshot 07cd81a5 at index 25
I170418 15:11:52.295119 37993 storage/store.go:3332  [replicate,n1,s1,r5/1:/Table/{0-11},@c422380600] streamed snapshot: kv pairs: 39, log entries: 15, 7ms
I170418 15:11:52.299423 39093 storage/replica_raftstorage.go:595  [n3,s3,r5/?:{-},@c420bfa000] applying preemptive snapshot at index 25 (id=07cd81a5, encoded size=17915, 1 rocksdb batches, 15 log entries)
I170418 15:11:52.303747 39093 storage/replica_raftstorage.go:603  [n3,s3,r5/?:/Table/{0-11},@c420bfa000] applied preemptive snapshot in 4ms [clear=1ms batch=0ms entries=0ms commit=0ms]
I170418 15:11:52.309352 37993 storage/replica_command.go:3521  [replicate,n1,s1,r5/1:/Table/{0-11},@c422380600] change replicas (ADD_REPLICA {3 3 2}): read existing descriptor r5:/Table/{0-11} [r1(n1,s1), next=2]
I170418 15:11:52.348217 39130 storage/replica.go:2575  [n1,s1,r5/1:/Table/{0-11},@c422380600] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170418 15:11:52.354214 37993 storage/replica_raftstorage.go:415  [replicate,n1,s1,r7/1:/Table/1{2-3},@c421e98c00] generated preemptive snapshot 90ea9426 at index 27
I170418 15:11:52.359796 37993 storage/store.go:3332  [replicate,n1,s1,r7/1:/Table/1{2-3},@c421e98c00] streamed snapshot: kv pairs: 31, log entries: 17, 5ms
I170418 15:11:52.363099 39085 storage/replica_raftstorage.go:595  [n3,s3,r7/?:{-},@c424240000] applying preemptive snapshot at index 27 (id=90ea9426, encoded size=17581, 1 rocksdb batches, 17 log entries)
I170418 15:11:52.388299 39085 storage/replica_raftstorage.go:603  [n3,s3,r7/?:/Table/1{2-3},@c424240000] applied preemptive snapshot in 22ms [clear=0ms batch=0ms entries=7ms commit=0ms]
I170418 15:11:52.406744 37993 storage/replica_command.go:3521  [replicate,n1,s1,r7/1:/Table/1{2-3},@c421e98c00] change replicas (ADD_REPLICA {3 3 2}): read existing descriptor r7:/Table/1{2-3} [r1(n1,s1), next=2]
I170418 15:11:52.433152 39186 storage/replica.go:2575  [n1,s1,r7/1:/Table/1{2-3},@c421e98c00] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170418 15:11:52.436727 37909 storage/replica_raftstorage.go:415  [replicate,n1,s1,r6/1:/Table/1{1-2},@c423dc9800] generated preemptive snapshot e3ca4ff8 at index 28
I170418 15:11:52.442039 37909 storage/store.go:3332  [replicate,n1,s1,r6/1:/Table/1{1-2},@c423dc9800] streamed snapshot: kv pairs: 12, log entries: 18, 3ms
I170418 15:11:52.444746 39114 storage/replica_raftstorage.go:595  [n3,s3,r6/?:{-},@c4246f8900] applying preemptive snapshot at index 28 (id=e3ca4ff8, encoded size=15015, 1 rocksdb batches, 18 log entries)
I170418 15:11:52.446485 39114 storage/replica_raftstorage.go:603  [n3,s3,r6/?:/Table/1{1-2},@c4246f8900] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I170418 15:11:52.450236 37909 storage/replica_command.go:3521  [replicate,n1,s1,r6/1:/Table/1{1-2},@c423dc9800] change replicas (ADD_REPLICA {3 3 3}): read existing descriptor r6:/Table/1{1-2} [r1(n1,s1), r2(n2,s2), next=3]
I170418 15:11:52.490692 39134 storage/replica.go:2575  [n1,s1,r6/1:/Table/1{1-2},@c423dc9800] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I170418 15:11:52.546281 37993 storage/replica_raftstorage.go:415  [replicate,n1,s1,r4/1:/{System/tse-Table/0},@c421858900] generated preemptive snapshot 102556b7 at index 21
I170418 15:11:52.558205 37993 storage/store.go:3332  [replicate,n1,s1,r4/1:/{System/tse-Table/0},@c421858900] streamed snapshot: kv pairs: 10, log entries: 11, 4ms
I170418 15:11:52.559637 39116 storage/replica_raftstorage.go:595  [n2,s2,r4/?:{-},@c424468900] applying preemptive snapshot at index 21 (id=102556b7, encoded size=8722, 1 rocksdb batches, 11 log entries)
I170418 15:11:52.561028 39116 storage/replica_raftstorage.go:603  [n2,s2,r4/?:/{System/tse-Table/0},@c424468900] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170418 15:11:52.564107 37993 storage/replica_command.go:3521  [replicate,n1,s1,r4/1:/{System/tse-Table/0},@c421858900] change replicas (ADD_REPLICA {2 2 2}): read existing descriptor r4:/{System/tse-Table/0} [r1(n1,s1), next=2]
I170418 15:11:52.605274 39192 storage/replica.go:2575  [n1,s1,r4/1:/{System/tse-Table/0},@c421858900] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:52.607932 37993 storage/queue.go:699  [n1,replicate] purgatory is now empty
I170418 15:11:52.622038 37909 storage/replica_raftstorage.go:415  [replicate,n1,s1,r7/1:/Table/1{2-3},@c421e98c00] generated preemptive snapshot 1ef77fb0 at index 30
I170418 15:11:52.629992 37909 storage/store.go:3332  [replicate,n1,s1,r7/1:/Table/1{2-3},@c421e98c00] streamed snapshot: kv pairs: 32, log entries: 20, 7ms
I170418 15:11:52.632260 39235 storage/replica_raftstorage.go:595  [n2,s2,r7/?:{-},@c421846f00] applying preemptive snapshot at index 30 (id=1ef77fb0, encoded size=19654, 1 rocksdb batches, 20 log entries)
I170418 15:11:52.635293 39235 storage/replica_raftstorage.go:603  [n2,s2,r7/?:/Table/1{2-3},@c421846f00] applied preemptive snapshot in 3ms [clear=1ms batch=0ms entries=1ms commit=0ms]
I170418 15:11:52.638610 37909 storage/replica_command.go:3521  [replicate,n1,s1,r7/1:/Table/1{2-3},@c421e98c00] change replicas (ADD_REPLICA {2 2 3}): read existing descriptor r7:/Table/1{2-3} [r1(n1,s1), r2(n3,s3), next=3]
I170418 15:11:52.682393 39207 storage/replica.go:2575  [n1,s1,r7/1:/Table/1{2-3},@c421e98c00] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I170418 15:11:52.690792 37909 storage/replica_raftstorage.go:415  [replicate,n1,s1,r4/1:/{System/tse-Table/0},@c421858900] generated preemptive snapshot 02be51b1 at index 27
I170418 15:11:52.695324 37909 storage/store.go:3332  [replicate,n1,s1,r4/1:/{System/tse-Table/0},@c421858900] streamed snapshot: kv pairs: 13, log entries: 17, 4ms
I170418 15:11:52.698519 39257 storage/replica_raftstorage.go:595  [n3,s3,r4/?:{-},@c421cae300] applying preemptive snapshot at index 27 (id=02be51b1, encoded size=12362, 1 rocksdb batches, 17 log entries)
I170418 15:11:52.700075 39257 storage/replica_raftstorage.go:603  [n3,s3,r4/?:/{System/tse-Table/0},@c421cae300] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170418 15:11:52.708955 37909 storage/replica_command.go:3521  [replicate,n1,s1,r4/1:/{System/tse-Table/0},@c421858900] change replicas (ADD_REPLICA {3 3 3}): read existing descriptor r4:/{System/tse-Table/0} [r1(n1,s1), r2(n2,s2), next=3]
I170418 15:11:52.847090 39215 storage/replica.go:2575  [n1,s1,r4/1:/{System/tse-Table/0},@c421858900] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I170418 15:11:52.873353 37909 storage/replica_raftstorage.go:415  [replicate,n1,s1,r2/1:/System/{-tsd},@c4205bcc00] generated preemptive snapshot e0d8ad1a at index 40
I170418 15:11:52.898864 37909 storage/store.go:3332  [replicate,n1,s1,r2/1:/System/{-tsd},@c4205bcc00] streamed snapshot: kv pairs: 32, log entries: 30, 24ms
I170418 15:11:52.906978 39072 storage/replica_raftstorage.go:595  [n3,s3,r2/?:{-},@c42284d200] applying preemptive snapshot at index 40 (id=e0d8ad1a, encoded size=154493, 1 rocksdb batches, 30 log entries)
I170418 15:11:52.910106 39072 storage/replica_raftstorage.go:603  [n3,s3,r2/?:/System/{-tsd},@c42284d200] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=1ms]
I170418 15:11:52.914362 37909 storage/replica_command.go:3521  [replicate,n1,s1,r2/1:/System/{-tsd},@c4205bcc00] change replicas (ADD_REPLICA {3 3 3}): read existing descriptor r2:/System/{-tsd} [r1(n1,s1), r2(n2,s2), next=3]
I170418 15:11:52.944304 39288 storage/replica.go:2575  [n1,s1,r2/1:/System/{-tsd},@c4205bcc00] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I170418 15:11:52.951031 37909 storage/replica_raftstorage.go:415  [replicate,n1,s1,r8/1:/Table/1{3-4},@c4205bcf00] generated preemptive snapshot c124e601 at index 58
I170418 15:11:52.958911 37909 storage/store.go:3332  [replicate,n1,s1,r8/1:/Table/1{3-4},@c4205bcf00] streamed snapshot: kv pairs: 146, log entries: 48, 7ms
I170418 15:11:52.962930 39290 storage/replica_raftstorage.go:595  [n2,s2,r8/?:{-},@c421847200] applying preemptive snapshot at index 58 (id=c124e601, encoded size=59888, 1 rocksdb batches, 48 log entries)
I170418 15:11:52.977932 39290 storage/replica_raftstorage.go:603  [n2,s2,r8/?:/Table/1{3-4},@c421847200] applied preemptive snapshot in 15ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I170418 15:11:52.983579 37909 storage/replica_command.go:3521  [replicate,n1,s1,r8/1:/Table/1{3-4},@c4205bcf00] change replicas (ADD_REPLICA {2 2 3}): read existing descriptor r8:/Table/1{3-4} [r1(n1,s1), r2(n3,s3), next=3]
I170418 15:11:53.015145 39263 storage/replica.go:2575  [n1,s1,r8/1:/Table/1{3-4},@c4205bcf00] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I170418 15:11:53.024189 37909 storage/replica_raftstorage.go:415  [replicate,n1,s1,r3/1:/System/ts{d-e},@c42284c600] generated preemptive snapshot f5c4900e at index 27
I170418 15:11:53.057111 37909 storage/store.go:3332  [replicate,n1,s1,r3/1:/System/ts{d-e},@c42284c600] streamed snapshot: kv pairs: 800, log entries: 17, 32ms
I170418 15:11:53.061908 39226 storage/replica_raftstorage.go:595  [n3,s3,r3/?:{-},@c424468300] applying preemptive snapshot at index 27 (id=f5c4900e, encoded size=140345, 1 rocksdb batches, 17 log entries)
I170418 15:11:53.068978 39226 storage/replica_raftstorage.go:603  [n3,s3,r3/?:/System/ts{d-e},@c424468300] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=1ms]
I170418 15:11:53.074229 37909 storage/replica_command.go:3521  [replicate,n1,s1,r3/1:/System/ts{d-e},@c42284c600] change replicas (ADD_REPLICA {3 3 3}): read existing descriptor r3:/System/ts{d-e} [r1(n1,s1), r2(n2,s2), next=3]
I170418 15:11:53.145368 39318 storage/replica.go:2575  [n1,s1,r3/1:/System/ts{d-e},@c42284c600] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I170418 15:11:53.153072 37909 storage/replica_raftstorage.go:415  [replicate,n1,s1,r10/1:/{Table/15-Max},@c424259800] generated preemptive snapshot 9e4ac541 at index 16
I170418 15:11:53.158300 39303 storage/replica_raftstorage.go:595  [n3,s3,r10/?:{-},@c4249f1b00] applying preemptive snapshot at index 16 (id=9e4ac541, encoded size=3584, 1 rocksdb batches, 6 log entries)
I170418 15:11:53.159575 39303 storage/replica_raftstorage.go:603  [n3,s3,r10/?:/{Table/15-Max},@c4249f1b00] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170418 15:11:53.166763 37909 storage/store.go:3332  [replicate,n1,s1,r10/1:/{Table/15-Max},@c424259800] streamed snapshot: kv pairs: 11, log entries: 6, 13ms
I170418 15:11:53.168741 37909 storage/replica_command.go:3521  [replicate,n1,s1,r10/1:/{Table/15-Max},@c424259800] change replicas (ADD_REPLICA {3 3 3}): read existing descriptor r10:/{Table/15-Max} [r1(n1,s1), r2(n2,s2), next=3]
I170418 15:11:53.222513 39307 storage/replica.go:2575  [n1,s1,r10/1:/{Table/15-Max},@c424259800] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:3 StoreID:3 ReplicaID:3}]
I170418 15:11:53.231247 37909 storage/replica_raftstorage.go:415  [replicate,n1,s1,r1/1:/{Min-System/},@c423d1e600] generated preemptive snapshot 6ce6050c at index 86
I170418 15:11:53.255358 37909 storage/store.go:3332  [replicate,n1,s1,r1/1:/{Min-System/},@c423d1e600] streamed snapshot: kv pairs: 52, log entries: 76, 12ms
I170418 15:11:53.258541 39383 storage/replica_raftstorage.go:595  [n2,s2,r1/?:{-},@c4220e4c00] applying preemptive snapshot at index 86 (id=6ce6050c, encoded size=42758, 1 rocksdb batches, 76 log entries)
I170418 15:11:53.261928 39383 storage/replica_raftstorage.go:603  [n2,s2,r1/?:/{Min-System/},@c4220e4c00] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I170418 15:11:53.265418 37909 storage/replica_command.go:3521  [replicate,n1,s1,r1/1:/{Min-System/},@c423d1e600] change replicas (ADD_REPLICA {2 2 3}): read existing descriptor r1:/{Min-System/} [r1(n1,s1), r2(n3,s3), next=3]
I170418 15:11:53.290022 39350 storage/replica.go:2575  [n1,s1,r1/1:/{Min-System/},@c423d1e600] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I170418 15:11:53.297195 37909 storage/replica_raftstorage.go:415  [replicate,n1,s1,r5/1:/Table/{0-11},@c422380600] generated preemptive snapshot 7cdd8206 at index 28
I170418 15:11:53.323900 37909 storage/store.go:3332  [replicate,n1,s1,r5/1:/Table/{0-11},@c422380600] streamed snapshot: kv pairs: 40, log entries: 18, 26ms
I170418 15:11:53.325412 39392 storage/replica_raftstorage.go:595  [n2,s2,r5/?:{-},@c421dd3b00] applying preemptive snapshot at index 28 (id=7cdd8206, encoded size=19988, 1 rocksdb batches, 18 log entries)
I170418 15:11:53.327151 39392 storage/replica_raftstorage.go:603  [n2,s2,r5/?:/Table/{0-11},@c421dd3b00] applied preemptive snapshot in 2ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I170418 15:11:53.332159 37909 storage/replica_command.go:3521  [replicate,n1,s1,r5/1:/Table/{0-11},@c422380600] change replicas (ADD_REPLICA {2 2 3}): read existing descriptor r5:/Table/{0-11} [r1(n1,s1), r2(n3,s3), next=3]
I170418 15:11:53.369888 39427 storage/replica.go:2575  [n1,s1,r5/1:/Table/{0-11},@c422380600] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:2 StoreID:2 ReplicaID:3}]
I170418 15:11:53.989842 37908 storage/split_queue.go:92  [split,n1,s1,r10/1:/{Table/15-Max},@c424259800] splitting at key /Table/50/0
I170418 15:11:53.990049 37908 storage/replica_command.go:2626  [split,n1,s1,r10/1:/{Table/15-Max},@c424259800] initiating a split of this range at key /Table/50 [r11]
I170418 15:11:53.990725 39376 sql/event_log.go:101  [client=127.0.0.1:49575,user=root,n1] Event: "create_database", target: 50, info: {DatabaseName:bench Statement:CREATE DATABASE bench User:root}
I170418 15:11:54.167519 37908 storage/split_queue.go:92  [split,n1,s1,r11/1:/{Table/50-Max},@c420bfb800] splitting at key /Table/51/0
I170418 15:11:54.181916 37908 storage/replica_command.go:2626  [split,n1,s1,r11/1:/{Table/50-Max},@c420bfb800] initiating a split of this range at key /Table/51 [r12]
I170418 15:11:54.182422 39376 sql/event_log.go:101  [client=127.0.0.1:49575,user=root,n1] Event: "create_table", target: 51, info: {TableName:bench.bank Statement:CREATE TABLE bench.bank (id INT PRIMARY KEY, balance INT, payload STRING, FAMILY (id, balance, payload)) User:root}
I170418 15:11:54.438046 39376 sql/mon/mem_usage.go:509  [client=127.0.0.1:49575,user=root,n1] txn: memory usage increases to 49 KiB (+49680)
I170418 15:11:54.438606 39376 sql/mon/mem_usage.go:509  [client=127.0.0.1:49575,user=root,n1] txn: memory usage increases to 64 KiB (+140)
I170418 15:11:54.442081 39376 sql/mon/mem_usage.go:509  [client=127.0.0.1:49575,user=root,n1] txn: memory usage increases to 128 KiB (+140)
I170418 15:11:54.844985 39594 storage/replica_command.go:2626  [n1,s1,r12/1:/{Table/51-Max},@c420e01800] initiating a split of this range at key /Table/51/1/100 [r13]
I170418 15:11:54.938778 39511 storage/replica_command.go:2626  [n1,s1,r13/1:/{Table/51/1/1…-Max},@c424258f00] initiating a split of this range at key /Table/51/1/200 [r14]
I170418 15:11:55.020292 39568 storage/replica_command.go:2626  [n1,s1,r14/1:/{Table/51/1/2…-Max},@c4219e3500] initiating a split of this range at key /Table/51/1/300 [r15]
I170418 15:11:55.125360 39582 storage/replica_command.go:2626  [n1,s1,r15/1:/{Table/51/1/3…-Max},@c420a74600] initiating a split of this range at key /Table/51/1/400 [r16]
I170418 15:11:55.228367 39641 storage/replica_command.go:2626  [n1,s1,r16/1:/{Table/51/1/4…-Max},@c42454ec00] initiating a split of this range at key /Table/51/1/500 [r17]
I170418 15:11:55.297165 39661 storage/replica_command.go:2626  [n1,s1,r17/1:/{Table/51/1/5…-Max},@c422380000] initiating a split of this range at key /Table/51/1/600 [r18]
I170418 15:11:55.376104 39647 storage/replica_command.go:2626  [n1,s1,r18/1:/{Table/51/1/6…-Max},@c421d15b00] initiating a split of this range at key /Table/51/1/700 [r19]
I170418 15:11:55.500165 39807 storage/replica_command.go:2626  [n1,s1,r19/1:/{Table/51/1/7…-Max},@c420e01200] initiating a split of this range at key /Table/51/1/800 [r20]
I170418 15:11:55.526128 37912 storage/replica_raftstorage.go:415  [raftsnapshot,n1,s1,r19/1:/{Table/51/1/7…-Max},@c420e01200] generated Raft snapshot 1dbcef2c at index 10
E170418 15:11:55.530866 37912 storage/queue.go:634  [raftsnapshot,n1,s1,r19/1:/{Table/51/1/7…-Max},@c420e01200] snapshot failed: r19: remote couldn't accept snapshot with error: [n3,s3],r19: cannot apply snapshot: snapshot intersects existing range [n3,s3,r18/3:/{Table/51/1/6…-Max}]
I170418 15:11:55.588052 39835 storage/replica_command.go:2626  [n1,s1,r20/1:/{Table/51/1/8…-Max},@c4219dac00] initiating a split of this range at key /Table/51/1/900 [r21]
I170418 15:11:55.610481 37912 storage/replica_raftstorage.go:415  [raftsnapshot,n1,s1,r20/1:/{Table/51/1/8…-Max},@c4219dac00] generated Raft snapshot c131efd6 at index 11
E170418 15:11:55.632342 37912 storage/queue.go:634  [raftsnapshot,n1,s1,r20/1:/{Table/51/1/8…-Max},@c4219dac00] snapshot failed: r20: remote couldn't accept snapshot with error: [n3,s3],r20: cannot apply snapshot: snapshot intersects existing range [n3,s3,r18/3:/{Table/51/1/6…-Max}]
I170418 15:11:55.846516 37912 storage/replica_raftstorage.go:415  [raftsnapshot,n1,s1,r21/1:/{Table/51/1/9…-Max},@c421e76900] generated Raft snapshot d2f5b65e at index 11
E170418 15:11:55.850840 37912 storage/queue.go:634  [raftsnapshot,n1,s1,r21/1:/{Table/51/1/9…-Max},@c421e76900] snapshot failed: r21: remote couldn't accept snapshot with error: [n3,s3],r21: cannot apply snapshot: snapshot intersects existing range [n3,s3,r18/3:/{Table/51/1/6…-Max}]
I170418 15:11:56.080600 39872 ccl/storageccl/export.go:76  [n1,s1,r5/1:/Table/{0-11},@c422380600] export [/Table/3/1,/Table/3/2)
I170418 15:11:56.111366 39946 ccl/storageccl/export.go:76  [n1,s1,r15/1:/Table/51/1/{300-400},@c420a74600] export [/Table/51/1/300,/Table/51/1/400)
I170418 15:11:56.112411 40004 ccl/storageccl/export.go:76  [n1,s1,r17/1:/Table/51/1/{500-600},@c422380000] export [/Table/51/1/500,/Table/51/1/600)
I170418 15:11:56.116111 39992 ccl/storageccl/export.go:76  [n1,s1,r16/1:/Table/51/1/{400-500},@c42454ec00] export [/Table/51/1/400,/Table/51/1/500)
I170418 15:11:56.116410 39977 ccl/storageccl/export.go:76  [n1,s1,r5/1:/Table/{0-11},@c422380600] export [/Table/4/1,/Table/4/2)
I170418 15:11:56.118095 39990 ccl/storageccl/export.go:76  [n1,s1,r21/1:/{Table/51/1/9…-Max},@c421e76900] export [/Table/51/1/900,/Table/51/2)
I170418 15:11:56.120279 39995 ccl/storageccl/export.go:76  [n1,s1,r19/1:/Table/51/1/{700-800},@c420e01200] export [/Table/51/1/700,/Table/51/1/800)
I170418 15:11:56.132307 39839 ccl/storageccl/export.go:76  [n1,s1,r12/1:/Table/51{-/1/100},@c420e01800] export [/Table/51/1,/Table/51/1/100)
I170418 15:11:56.156280 39994 ccl/storageccl/export.go:76  [n1,s1,r20/1:/Table/51/1/{800-900},@c4219dac00] export [/Table/51/1/800,/Table/51/1/900)
I170418 15:11:56.158407 39991 ccl/storageccl/export.go:76  [n1,s1,r14/1:/Table/51/1/{200-300},@c4219e3500] export [/Table/51/1/200,/Table/51/1/300)
I170418 15:11:56.160731 39841 ccl/storageccl/export.go:76  [n1,s1,r18/1:/Table/51/1/{600-700},@c421d15b00] export [/Table/51/1/600,/Table/51/1/700)
I170418 15:11:56.161349 39993 ccl/storageccl/export.go:76  [n1,s1,r13/1:/Table/51/1/{100-200},@c424258f00] export [/Table/51/1/100,/Table/51/1/200)
I170418 15:11:56.630099 37912 storage/replica_raftstorage.go:415  [raftsnapshot,n1,s1,r20/1:/Table/51/1/{800-900},@c4219dac00] generated Raft snapshot fa7a05fe at index 18
E170418 15:11:56.648716 37912 storage/queue.go:634  [raftsnapshot,n1,s1,r20/1:/Table/51/1/{800-900},@c4219dac00] snapshot failed: r20: remote couldn't accept snapshot with error: [n3,s3],r20: cannot apply snapshot: snapshot intersects existing range [n3,s3,r19/3:/{Table/51/1/7…-Max}]
I170418 15:11:56.658284 37912 storage/replica_raftstorage.go:415  [raftsnapshot,n1,s1,r19/1:/Table/51/1/{700-800},@c420e01200] generated Raft snapshot ba5e73fc at index 18
I170418 15:11:56.675831 37912 storage/store.go:3332  [raftsnapshot,n1,s1,r19/1:/Table/51/1/{700-800},@c420e01200] streamed snapshot: kv pairs: 210, log entries: 8, 16ms
I170418 15:11:56.676828 40340 storage/replica_raftstorage.go:595  [n3,s3,r19/3:/{Table/51/1/7…-Max},@c4221c2c00] applying Raft snapshot at index 18 (id=ba5e73fc, encoded size=31821, 1 rocksdb batches, 8 log entries)
I170418 15:11:56.679336 40340 storage/replica_raftstorage.go:603  [n3,s3,r19/3:/Table/51/1/{700-800},@c4221c2c00] applied Raft snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I170418 15:11:56.815394 37912 storage/replica_raftstorage.go:415  [raftsnapshot,n1,s1,r21/1:/{Table/51/1/9…-Max},@c421e76900] generated Raft snapshot 43c8e8af at index 14
I170418 15:11:56.820380 37912 storage/store.go:3332  [raftsnapshot,n1,s1,r21/1:/{Table/51/1/9…-Max},@c421e76900] streamed snapshot: kv pairs: 209, log entries: 4, 4ms
I170418 15:11:56.827253 40373 storage/replica_raftstorage.go:595  [n3,s3,r21/3:{-},@c4247fcf00] applying Raft snapshot at index 14 (id=43c8e8af, encoded size=37756, 1 rocksdb batches, 4 log entries)
I170418 15:11:56.832269 40373 storage/replica_raftstorage.go:603  [n3,s3,r21/3:/{Table/51/1/9…-Max},@c4247fcf00] applied Raft snapshot in 5ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I170418 15:11:57.639385 39376 sql/event_log.go:101  [client=127.0.0.1:49575,user=root,n1] Event: "drop_table", target: 51, info: {TableName:bank Statement:DROP TABLE bench.bank User:root CascadeDroppedViews:[]}
I170418 15:11:57.712743 39376 sql/lease.go:372  [client=127.0.0.1:49575,user=root,n1] publish: descID=51 (bank) version=2 mtime=2017-04-18 15:11:57.712622789 +0000 UTC
I170418 15:11:57.739356 37912 storage/replica_raftstorage.go:415  [raftsnapshot,n1,s1,r20/1:/Table/51/1/{800-900},@c4219dac00] generated Raft snapshot d626bf05 at index 18
I170418 15:11:57.753891 37912 storage/store.go:3332  [raftsnapshot,n1,s1,r20/1:/Table/51/1/{800-900},@c4219dac00] streamed snapshot: kv pairs: 210, log entries: 8, 14ms
I170418 15:11:57.755887 40681 storage/replica_raftstorage.go:595  [n3,s3,r20/3:{-},@c42487a900] applying Raft snapshot at index 18 (id=d626bf05, encoded size=31821, 1 rocksdb batches, 8 log entries)
I170418 15:11:57.758122 40681 storage/replica_raftstorage.go:603  [n3,s3,r20/3:/Table/51/1/{800-900},@c42487a900] applied Raft snapshot in 2ms [clear=0ms batch=0ms entries=0ms commit=1ms]
I170418 15:11:58.044334 39376 ccl/sqlccl/restore.go:483  [client=127.0.0.1:49575,user=root,n1] presplitting 10 ranges
I170418 15:11:58.045932 40787 storage/replica_command.go:2626  [n1,s1,r21/1:/{Table/51/1/9…-Max},@c421e76900] initiating a split of this range at key /Table/52/1/500 [r22]
I170418 15:11:58.143319 40749 storage/replica_command.go:2626  [n1,s1,r22/1:/{Table/52/1/5…-Max},@c422d80900] initiating a split of this range at key /Table/52/1/800 [r23]
I170418 15:11:58.150689 40762 storage/replica_command.go:2626  [n1,s1,r21/1:/Table/5{1/1/900-2/1/500},@c421e76900] initiating a split of this range at key /Table/52/1/200 [r24]
I170418 15:11:58.264962 40735 storage/replica_command.go:2626  [n1,s1,r22/1:/Table/52/1/{500-800},@c422d80900] initiating a split of this range at key /Table/52/1/700 [r25]
I170418 15:11:58.285130 40854 storage/replica_command.go:2626  [n1,s1,r23/1:/{Table/52/1/8…-Max},@c424236900] initiating a split of this range at key /Table/52/1/900 [r26]
I170418 15:11:58.355162 40900 storage/replica_command.go:2626  [n1,s1,r24/1:/Table/52/1/{200-500},@c4221b0000] initiating a split of this range at key /Table/52/1/400 [r27]
I170418 15:11:58.365054 40847 storage/replica_command.go:2626  [n1,s1,r21/1:/Table/5{1/1/900-2/1/200},@c421e76900] initiating a split of this range at key /Table/52/1/100 [r28]
I170418 15:11:58.449431 40932 storage/replica_command.go:2626  [n1,s1,r22/1:/Table/52/1/{500-700},@c422d80900] initiating a split of this range at key /Table/52/1/600 [r29]
I170418 15:11:58.605029 41030 storage/replica_command.go:2626  [n1,s1,r24/1:/Table/52/1/{200-400},@c4221b0000] initiating a split of this range at key /Table/52/1/300 [r30]
I170418 15:11:58.627338 41047 storage/replica_command.go:2626  [n1,s1,r21/1:/Table/5{1/1/900-2/1/100},@c421e76900] initiating a split of this range at key /Table/52/1 [r31]
I170418 15:11:58.811782 41025 storage/replica_command.go:3521  [n1,s1,r23/1:/Table/52/1/{800-900},@c424236900] change replicas (REMOVE_REPLICA {3 3 3}): read existing descriptor r23:/Table/52/1/{800-900} [r1(n1,s1), r2(n2,s2), r3(n3,s3), next=4]
I170418 15:11:58.824347 41157 storage/replica_command.go:3521  [n1,s1,r26/1:/{Table/52/1/9…-Max},@c424240300] change replicas (REMOVE_REPLICA {3 3 3}): read existing descriptor r26:/{Table/52/1/900-Max} [r1(n1,s1), r2(n2,s2), r3(n3,s3), next=4]
I170418 15:11:58.834863 41112 storage/replica_command.go:3521  [n1,s1,r24/1:/Table/52/1/{200-300},@c4221b0000] change replicas (REMOVE_REPLICA {3 3 3}): read existing descriptor r24:/Table/52/1/{200-300} [r1(n1,s1), r2(n2,s2), r3(n3,s3), next=4]
I170418 15:11:58.851661 41087 storage/replica_command.go:3521  [n1,s1,r31/1:/Table/52/1{-/100},@c422907b00] change replicas (REMOVE_REPLICA {3 3 3}): read existing descriptor r31:/Table/52/1{-/100} [r1(n1,s1), r2(n2,s2), r3(n3,s3), next=4]
I170418 15:11:58.860626 41190 storage/replica_command.go:3521  [n1,s1,r29/1:/Table/52/1/{600-700},@c42247fb00] change replicas (REMOVE_REPLICA {3 3 3}): read existing descriptor r29:/Table/52/1/{600-700} [r1(n1,s1), r2(n2,s2), r3(n3,s3), next=4]
I170418 15:11:58.860902 41192 storage/replica_command.go:3521  [n1,s1,r25/1:/Table/52/1/{700-800},@c424782600] change replicas (REMOVE_REPLICA {3 3 3}): read existing descriptor r25:/Table/52/1/{700-800} [r1(n1,s1), r2(n2,s2), r3(n3,s3), next=4]
I170418 15:11:58.862505 41193 storage/replica_command.go:3521  [n1,s1,r30/1:/Table/52/1/{300-400},@c42177cc00] change replicas (REMOVE_REPLICA {3 3 3}): read existing descriptor r30:/Table/52/1/{300-400} [r1(n1,s1), r2(n2,s2), r3(n3,s3), next=4]
I170418 15:11:58.940082 41098 storage/raft_transport.go:436  [n2] raft transport stream to node 3 established
I170418 15:11:58.951441 41234 storage/raft_transport.go:436  [n3] raft transport stream to node 2 established
I170418 15:11:58.974197 41215 storage/replica_command.go:3521  [n2,s2,r28/2:/Table/52/1/{100-200},@c4219e2000] change replicas (REMOVE_REPLICA {3 3 3}): read existing descriptor r28:/Table/52/1/{100-200} [r1(n1,s1), r2(n2,s2), r3(n3,s3), next=4]
I170418 15:11:59.002255 41240 storage/replica_command.go:3521  [n2,s2,r27/2:/Table/52/1/{400-500},@c422419800] change replicas (REMOVE_REPLICA {3 3 3}): read existing descriptor r27:/Table/52/1/{400-500} [r1(n1,s1), r2(n2,s2), r3(n3,s3), next=4]
I170418 15:11:59.014130 41147 storage/replica_command.go:3521  [n2,s2,r22/2:/Table/52/1/{500-600},@c424890600] change replicas (REMOVE_REPLICA {3 3 3}): read existing descriptor r22:/Table/52/1/{500-600} [r1(n1,s1), r2(n2,s2), r3(n3,s3), next=4]
I170418 15:11:59.018605 38524 storage/replica.go:2636  [n2,s2,r23/2:/Table/52/1/{800-900},@c424236c00] not quiescing: 1 pending commands
I170418 15:11:59.069520 41275 storage/replica.go:2575  [n1,s1,r23/1:/Table/52/1/{800-900},@c424236900] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:59.124703 39027 storage/store.go:3154  [n3,s3,r23/3:/Table/52/1/{800-900},@c422d6c900] added to replica GC queue (peer suggestion)
I170418 15:11:59.140620 41349 storage/replica.go:2575  [n1,s1,r25/1:/Table/52/1/{700-800},@c424782600] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:59.191471 41335 storage/replica.go:2575  [n1,s1,r24/1:/Table/52/1/{200-300},@c4221b0000] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:59.226315 39027 storage/store.go:3154  [n3,s3,r24/3:/Table/52/1/{200-300},@c420e00f00] added to replica GC queue (peer suggestion)
I170418 15:11:59.279182 41387 storage/replica.go:2575  [n2,s2,r27/2:/Table/52/1/{400-500},@c422419800] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:59.282926 41388 storage/replica.go:2575  [n2,s2,r28/2:/Table/52/1/{100-200},@c4219e2000] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:59.300980 41402 storage/replica.go:2575  [n2,s2,r22/2:/Table/52/1/{500-600},@c424890600] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:59.333715 41418 storage/replica.go:2575  [n1,s1,r30/1:/Table/52/1/{300-400},@c42177cc00] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170418 15:11:59.379831 41444 storage/replica.go:2575  [n1,s1,r26/1:/{Table/52/1/9…-Max},@c424240300] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
F170418 15:11:59.398553 41538 storage/replica.go:966  [n2,s2,r22/2:/Table/52/1/{500-600},@c424890600] lease repl={3 3 3} start=1492528319.309658317,0 epo=1 pro=1492528319.309670217,0 owned by replica {NodeID:3 StoreID:3 ReplicaID:3} that no longer exists
goroutine 41538 [running]:
github.com/cockroachdb/cockroach/pkg/util/log.getStacks(0x4075a01, 0x17c17282, 0x36add00, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:837 +0xc4
github.com/cockroachdb/cockroach/pkg/util/log.(*loggingT).outputLogEntry(0x36af240, 0xc400000004, 0x2eef14e, 0x12, 0x3c6, 0xc420a7bad0, 0xc8)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:709 +0x660
github.com/cockroachdb/cockroach/pkg/util/log.addStructured(0x7f1536402370, 0xc4222bc870, 0x4, 0x2, 0x2410037, 0x33, 0xc421fd5d88, 0x2, 0x2)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/structured.go:146 +0x313
github.com/cockroachdb/cockroach/pkg/util/log.logDepth(0x7f1536402370, 0xc4222bc870, 0x1, 0x4, 0x2410037, 0x33, 0xc421fd5d88, 0x2, 0x2)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/log.go:67 +0x9a
github.com/cockroachdb/cockroach/pkg/util/log.Fatalf(0x7f1536402370, 0xc4222bc870, 0x2410037, 0x33, 0xc421fd5d88, 0x2, 0x2)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/log.go:169 +0x90
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).redirectOnOrAcquireLease.func1(0xc424890600, 0xc421fd6170, 0x14b6869946c6d095, 0x0, 0x7f1536402370, 0xc4222bc870, 0xc421fd6150, 0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:966 +0xe3a
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).redirectOnOrAcquireLease(0xc424890600, 0x7f1536402370, 0xc4222bc870, 0x0, 0x0, 0x0, 0x0, 0x0, 0x2)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:1032 +0x1ed
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).tryExecuteWriteBatch(0xc424890600, 0x7f1536402370, 0xc4222bc870, 0x14b6869946c3b1c1, 0x0, 0x200000002, 0x2, 0x16, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:2184 +0x1217
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).executeWriteBatch(0xc424890600, 0x7f1536402370, 0xc4222bc870, 0x14b6869946c3b1c1, 0x0, 0x200000002, 0x2, 0x16, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:2070 +0xa9
github.com/cockroachdb/cockroach/pkg/storage.(*Replica).Send(0xc424890600, 0x7f1536402370, 0xc4222bc870, 0x14b6869946c3b1c1, 0x0, 0x200000002, 0x2, 0x16, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/replica.go:1398 +0x28e
github.com/cockroachdb/cockroach/pkg/storage.(*Store).Send(0xc42114f400, 0x7f1536402370, 0xc4222bc810, 0x14b6869946c3b1c1, 0x0, 0x200000002, 0x2, 0x16, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/store.go:2558 +0x9d3
github.com/cockroachdb/cockroach/pkg/storage.(*Stores).Send(0xc420f1c900, 0x7f1536402370, 0xc4222bc7e0, 0x0, 0x0, 0x200000002, 0x2, 0x16, 0x0, 0x0, ...)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/stores.go:187 +0x24b
github.com/cockroachdb/cockroach/pkg/server.(*Node).batchInternal.func1(0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/server/node.go:843 +0x227
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunTaskWithErr(0xc423cdf9a0, 0xc421fd7698, 0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:263 +0x125
github.com/cockroachdb/cockroach/pkg/server.(*Node).batchInternal(0xc421a0f080, 0x7f1536402370, 0xc4222bc7b0, 0xc423e3e2a0, 0xc4222bc7b0, 0x0, 0x6db7a4)
	/go/src/github.com/cockroachdb/cockroach/pkg/server/node.go:854 +0x1ca
github.com/cockroachdb/cockroach/pkg/server.(*Node).Batch(0xc421a0f080, 0x7f1536402370, 0xc4222bc7b0, 0xc423e3e2a0, 0xc421a0f080, 0x7f15364263a0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/server/node.go:871 +0xb8
github.com/cockroachdb/cockroach/pkg/roachpb._Internal_Batch_Handler(0x2397d40, 0xc421a0f080, 0x7f1536402370, 0xc4222bc690, 0xc423e3e230, 0x0, 0x0, 0x0, 0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/pkg/roachpb/api.pb.go:1868 +0x35b
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc42143ad20, 0x327dee0, 0xc420cbab60, 0xc4209845a0, 0xc4232ea750, 0x323fe80, 0x0, 0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:738 +0xf68
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).handleStream(0xc42143ad20, 0x327dee0, 0xc420cbab60, 0xc4209845a0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:932 +0x122b
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc4228fa1d0, 0xc42143ad20, 0x327dee0, 0xc420cbab60, 0xc4209845a0)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:497 +0xb7
created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:498 +0xb9

goroutine 1 [chan receive]:
testing.(*T).Run(0xc42007cf70, 0x23df954, 0x18, 0x2437700, 0xc4204b1b01)
	/usr/local/go/src/testing/testing.go:698 +0x582
testing.runTests.func1(0xc42007cf70)
	/usr/local/go/src/testing/testing.go:882 +0xab
testing.tRunner(0xc42007cf70, 0xc4204b1d10)
	/usr/local/go/src/testing/testing.go:657 +0x108
testing.runTests(0xc420171f60, 0x3251380, 0x17, 0x17, 0x2ef5552)
	/usr/local/go/src/testing/testing.go:888 +0x4e1
testing.(*M).Run(0xc4204a7f28, 0x40756c0)
	/usr/local/go/src/testing/testing.go:822 +0x1c4
github.com/cockroachdb/cockroach/pkg/ccl/sqlccl.TestMain(0xc4204b1f28)
	/go/src/github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/main_test.go:29 +0x18a
main.main()
	github.com/cockroachdb/cockroach/pkg/ccl/sqlccl/_test/_testmain.go:96 +0x210

goroutine 17 [syscall, 1 minutes, locked to thread]:
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:2197 +0x1

goroutine 19 [chan receive]:
github.com/cockroachdb/cockroach/pkg/util/log.(*loggingT).flushDaemon(0x36af240)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:1003 +0x85
created by github.com/cockroachdb/cockroach/pkg/util/log.init.1
	/go/src/github.com/cockroachdb/cockroach/pkg/util/log/clog.go:566 +0x13a

goroutine 20 [syscall, 1 minutes]:
os/signal.signal_recv(0x6ff2b1)
	/usr/local/go/src/runtime/sigqueue.go:116 +0x104
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:22 +0x30
created by os/signal.init.1
	/usr/local/go/src/os/signal/signal_unix.go:28 +0x4f

goroutine 29 [select, locked to thread]:
runtime.gopark(0x243aa08, 0x0, 0x23c8c9a, 0x6, 0x18, 0x2)

Please assign, take a look and update the issue accordingly.

@cockroach-teamcity cockroach-teamcity added O-robot Originated from a bot. C-test-failure Broken test (automatically or manually discovered). labels Apr 18, 2017
@a-robinson
Contributor

Same as #15023

@a-robinson
Contributor

Same as #15003
