After digging some more it looks like this might be intentional:
```go
// Don't ever hit the pool limit for syncing
config := cluster.dialInfo.Copy()
config.PoolLimit = 0
```
This is unfortunate because it doubles the number of connections we think we're making to the primary. Could an option be added to make the pool limit a hard limit, so that if the pool is already full the sync fails instead of opening an extra socket?
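For illustration only, here is one shape the requested option could take at that spot in the sync path. `FailOnPoolLimit` is a hypothetical field, not part of mgo's actual `DialInfo`:

```go
// Hypothetical sketch: FailOnPoolLimit is an assumed new DialInfo
// field, not part of mgo's real API.
config := cluster.dialInfo.Copy()
if !config.FailOnPoolLimit {
	// Current behaviour: lift the limit so syncing can always open
	// an extra socket to the primary.
	config.PoolLimit = 0
}
// With FailOnPoolLimit set, acquiring a sync socket would fail once
// PoolLimit sockets are already in use, instead of opening a second
// connection to the primary.
```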
Thanks for taking the time to report this issue. We are happy to review pull requests, so feel free to send one with the changes to address the problem with the pool limit.
Despite setting `PoolLimit: 1`, the mgo driver proceeds to make 2 connections to the primary.

What version of MongoDB are you using (`mongod --version`)?

What version of Go are you using (`go version`)?

What operating system and processor architecture are you using (`go env`)?

What did you do?
Set up a 3-member replica set with 1 primary, 1 secondary, and 1 arbiter.
Run
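The program itself is missing from this copy of the report. A minimal sketch of such a repro, assuming the standard `gopkg.in/mgo.v2` API (the logging and polling details here are assumptions):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	mgo "gopkg.in/mgo.v2"
)

func main() {
	// Route mgo's debug output to stderr so per-socket activity is visible.
	mgo.SetLogger(log.New(os.Stderr, "", log.LstdFlags))
	mgo.SetDebug(true)

	session, err := mgo.DialWithInfo(&mgo.DialInfo{
		Addrs:     []string{"127.0.0.1:27017"},
		Timeout:   10 * time.Second,
		PoolLimit: 1, // expect at most one socket per server
	})
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Poll so the periodic cluster sync has time to run.
	for {
		fmt.Println("live servers:", session.LiveServers())
		time.Sleep(5 * time.Second)
	}
}
```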
Where `127.0.0.1:27017` is the address of a mongo node in the replica set.

It'll start printing out:
Until, after about 30 seconds, it starts printing out:
Now there are 3 sockets alive: 2 to the primary and 1 to the secondary.
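One way to confirm those counts programmatically is mgo's stats facility; a short fragment as a sketch (the printed value reflects what this report observed, not a guarantee):

```go
// Enable socket accounting before dialing, then read it back once the
// cluster sync has run (~30 seconds in the scenario above).
mgo.SetStats(true)
// ... dial with PoolLimit: 1 and wait ...
stats := mgo.GetStats()
fmt.Println("sockets alive:", stats.SocketsAlive) // observed 3: 2 primary + 1 secondary
```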
Here are the debug logs from running that with the initial address being `unity.node.gce-us-central1.admiral:27017`:

Can you reproduce the issue on the latest `development` branch?

Yes, the same thing happens.