docker: v18.09.9 #7350
Conversation
Signed-off-by: Yoan Blanc <[email protected]>
The latest gopsutil includes two backward-incompatible changes. First, it removed the unused Stolen field (shirou/gopsutil@cae8efc#diff-d9747e2da342bdb995f6389533ad1a3d). Second, it updated the Windows CPU stats calculation to be in line with other platforms: it now returns absolute stats rather than percentages. See shirou/gopsutil#611.
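Since the change turns percentage readings into absolute cumulative counters, callers now have to derive a percentage themselves by sampling twice and dividing the busy delta by the total delta. A minimal self-contained sketch of that calculation (the struct and field names only mirror the shape of gopsutil's cpu.TimesStat, they are not the real types):

```go
package main

import "fmt"

// cpuTimes mimics the shape of gopsutil's cpu.TimesStat after the change:
// cumulative seconds spent in each state since boot, not percentages.
// Fields are illustrative; the real struct has more states.
type cpuTimes struct {
	User, System, Idle float64
}

// cpuPercent derives a utilization percentage from two absolute samples:
// busy time elapsed between samples divided by total time elapsed.
func cpuPercent(prev, cur cpuTimes) float64 {
	totalDelta := (cur.User + cur.System + cur.Idle) -
		(prev.User + prev.System + prev.Idle)
	if totalDelta <= 0 {
		return 0
	}
	busyDelta := (cur.User + cur.System) - (prev.User + prev.System)
	return busyDelta / totalDelta * 100
}

func main() {
	prev := cpuTimes{User: 100, System: 50, Idle: 850}
	cur := cpuTimes{User: 104, System: 52, Idle: 894}
	// 6 busy seconds out of 50 elapsed seconds.
	fmt.Printf("%.1f%%\n", cpuPercent(prev, cur)) // prints "12.0%"
}
```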
Signed-off-by: Yoan Blanc <[email protected]>
Is it a flaky test? It runs just fine for me locally: $ sudo -E PATH="$GOPATH/bin:/usr/local/go/bin:$PATH" go test ./nomad -v -run TestRPC_Limits_Streaming
=== RUN TestRPC_Limits_Streaming
=== PAUSE TestRPC_Limits_Streaming
=== CONT TestRPC_Limits_Streaming
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.372+0100 [DEBUG] docker/driver.go:1500: plugin_loader.docker: using client connection initialized from environment: plugin_dir=
[INFO] freeport: detected ephemeral port range of [32768, 60999]
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.378+0100 [INFO] raft/api.go:549: nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:127.0.0.1:9501 Address:127.0.0.1:9501}]"
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.378+0100 [INFO] raft/raft.go:152: nomad.raft: entering follower state: follower="Node at 127.0.0.1:9501 [Follower]" leader=
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.378+0100 [INFO] go-hclog/stdlog.go:46: nomad: serf: EventMemberJoin: nomad-001.global 127.0.0.1
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.378+0100 [INFO] nomad/server.go:1398: nomad: starting scheduling worker(s): num_workers=8 schedulers=[service, batch, system, noop, _core]
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.379+0100 [INFO] nomad/serf.go:60: nomad: adding server: server="nomad-001.global (Addr: 127.0.0.1:9501) (DC: dc1)"
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.474+0100 [WARN] raft/raft.go:214: nomad.raft: heartbeat timeout reached, starting election: last-leader=
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.474+0100 [INFO] raft/raft.go:250: nomad.raft: entering candidate state: node="Node at 127.0.0.1:9501 [Candidate]" term=2
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.474+0100 [DEBUG] raft/raft.go:268: nomad.raft: votes: needed=1
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.474+0100 [DEBUG] raft/raft.go:287: nomad.raft: vote granted: from=127.0.0.1:9501 term=2 tally=1
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.474+0100 [INFO] raft/raft.go:292: nomad.raft: election won: tally=1
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.474+0100 [INFO] raft/raft.go:363: nomad.raft: entering leader state: leader="Node at 127.0.0.1:9501 [Leader]"
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.474+0100 [INFO] nomad/leader.go:71: nomad: cluster leadership acquired
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.475+0100 [TRACE] nomad/fsm.go:292: nomad.fsm: ClusterSetMetadata: cluster_id=d21d1d38-041e-3226-94d3-348943565dec create_time=1584265243475565516
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.475+0100 [INFO] nomad/leader.go:1415: nomad.core: established cluster id: cluster_id=d21d1d38-041e-3226-94d3-348943565dec create_time=1584265243475565516
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.476+0100 [TRACE] drainer/watch_jobs.go:145: nomad.drain.job_watcher: getting job allocs at index: index=1
TestRPC_Limits_Streaming: rpc_test.go:965: expect connection to be rejected due to limit
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:43.933+0100 [ERROR] nomad/rpc.go:314: nomad.rpc: rejecting client for exceeding maximum streaming RPC connections: remote_addr=127.0.0.1:35080 stream_limit=80
TestRPC_Limits_Streaming: rpc_test.go:989: expect streaming connection 0 to exit with error
TestRPC_Limits_Streaming: rpc_test.go:934: connection 0 died with error: (*net.OpError) read tcp 127.0.0.1:34920->127.0.0.1:9501: use of closed network connection
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:45.936+0100 [INFO] nomad/server.go:594: nomad: shutting down server
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:45.936+0100 [WARN] go-hclog/stdlog.go:48: nomad: serf: Shutdown without a Leave
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:45.937+0100 [TRACE] drainer/watch_jobs.go:147: nomad.drain.job_watcher: retrieved allocs for draining jobs: num_allocs=0 index=0 error="context canceled"
TestRPC_Limits_Streaming: testlog.go:34: 2020-03-15T10:40:45.937+0100 [TRACE] drainer/watch_jobs.go:153: nomad.drain.job_watcher: shutting down
--- PASS: TestRPC_Limits_Streaming (3.57s)
PASS
ok github.com/hashicorp/nomad/nomad (cached)
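For context on what the test exercises: the server tracks concurrent streaming RPC connections per remote address and rejects new ones once a cap (stream_limit=80 in the log above) is reached. A self-contained sketch of that technique, with illustrative names rather than Nomad's actual types:

```go
package main

import (
	"fmt"
	"sync"
)

// streamLimiter caps concurrent streaming connections per remote address.
// This is an illustration of the limiting behavior the test exercises,
// not Nomad's real implementation.
type streamLimiter struct {
	mu    sync.Mutex
	limit int
	open  map[string]int // open stream count keyed by remote address
}

func newStreamLimiter(limit int) *streamLimiter {
	return &streamLimiter{limit: limit, open: make(map[string]int)}
}

// acquire reserves a stream slot, returning false when the remote
// already holds `limit` open streams.
func (l *streamLimiter) acquire(remote string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.open[remote] >= l.limit {
		return false
	}
	l.open[remote]++
	return true
}

// release frees a slot when a streaming connection closes.
func (l *streamLimiter) release(remote string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.open[remote] > 0 {
		l.open[remote]--
	}
}

func main() {
	lim := newStreamLimiter(2)
	fmt.Println(lim.acquire("127.0.0.1")) // true
	fmt.Println(lim.acquire("127.0.0.1")) // true
	fmt.Println(lim.acquire("127.0.0.1")) // false: limit reached
	lim.release("127.0.0.1")
	fmt.Println(lim.acquire("127.0.0.1")) // true again after release
}
```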
Wow - this is amazing! Thank you so much! Resolving the versions that work together isn't so straightforward!
And yes, that test is flaky now.
@notnoop wooot! I've got a feeling that the jump to 19.03.x will be a bigger leap, but we've got to fix the gopsutil thingy.
I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.
Pin Docker to v18.09.9. docker/docker and moby/moby are now pulled from docker/engine, which is properly tagged.
The goal is to be able to pin the "latest" version, currently 19.03.8+, which will involve deeper changes that I'm not willing to make right now. Let's see how far I can go with this.
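For readers unfamiliar with the trick: a pin like this is typically done with a go.mod replace directive. The following is a sketch only; the exact version string and whether the +incompatible suffix applies are assumptions based on the description above, not necessarily the lines in this PR's diff:

```
// go.mod (sketch): redirect the untagged docker/docker import path to
// the tagged docker/engine repository. The version is illustrative;
// repositories without a go.mod at a v2+ tag need the +incompatible suffix.
replace github.com/docker/docker => github.com/docker/engine v18.09.9+incompatible
```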