Good day,

We recently tried to integrate the agent with our dedicated MongoDB clusters, so we could correlate their metrics with ours more easily. We're using dedicated clusters / replica sets, which lets us group multiple DBs on each cluster/host.
The integration documentation (visible within Datadog's integration panel) recommends creating a read-only admin user, or an admin user with the `clusterMonitor` role, for MongoDB 3.0 (as a side note, this role is also available in MongoDB 2.6).
Here is the issue: the service check reports `dbStats()` on the configured database, which is `admin` by default. I don't think anybody really cares about such metrics (collections, dataSize, fileSize, ...) for that DB, while they would be really useful on "real" business databases. In this scenario, only the cluster-wide metrics (from `serverStatus()` and `replSetGetStatus()`) are useful.
Of course, we could create a read-only user for each DB and duplicate the MongoDB URI in `mongo.yaml`, but that is cumbersome extra maintenance/setup, and it would mean running the cluster-wide monitoring commands once per configured DB (even though they're all on the same host). It also defeats the purpose of the `clusterMonitor` role, which grants permission to run `dbStats()` "on all databases in the cluster".
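For illustration, the per-DB workaround would look roughly like this in `mongo.yaml` (hosts, credentials, and DB names are made up):

```yaml
instances:
  # One entry per business database, all pointing at the same host --
  # the cluster-wide commands end up running once per entry.
  - server: mongodb://ro_app:secret@mongo-host:27017/app
  - server: mongodb://ro_billing:secret@mongo-host:27017/billing
```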
For context on authentication: the DB currently listed in the URI is the user's "authentication database", which is always going to be `admin` for admin users. Hence the need to decouple the authentication part from the monitored objects.
I see two ways to address this. Both keep the existing `server` connection URI in the config, and keep the existing behavior for cluster-wide metric collection (since the DB used doesn't matter there).
Option A
Add a new `server.dbs` setting containing the list of DBs to monitor. The service check uses the connection URI to log in, then loops over the `dbs` list to collect and report `dbStats()` metrics.
If no `dbs` value is configured, the service check keeps the current behavior (monitoring the DB it logs into).
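A minimal Python sketch of how Option A could work inside the check (pymongo-style client access; `databases_to_monitor`, `collect_db_stats`, the `instance` dict shape, the metric names, and the `report` callback are all illustrative, not the agent's actual API):

```python
def databases_to_monitor(instance, default_db="admin"):
    """Return the list of DBs whose dbStats() metrics should be reported.

    Falls back to the single authentication DB when no `dbs` list is
    configured, preserving the check's current behavior.
    """
    dbs = instance.get("dbs")
    if dbs:
        return list(dbs)
    return [instance.get("database", default_db)]


def collect_db_stats(client, instance, report):
    # One dbStats() call per monitored DB; the DB name goes into the
    # tags so per-database metrics stay distinguishable on one host.
    for db_name in databases_to_monitor(instance):
        stats = client[db_name].command("dbStats")
        for metric in ("collections", "dataSize", "fileSize"):
            if metric in stats:
                report("mongodb.dbstats." + metric,
                       stats[metric],
                       tags=["db:" + db_name])
```

The fallback in `databases_to_monitor` is what keeps the change backward-compatible: with no `dbs` key, exactly one DB (the authentication one) is monitored, as today.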
Option B
If we're an admin (especially with `clusterMonitor` privileges), we can simply use the `listDatabases` command and loop over/report for each of the existing DBs.
An explicit setting to opt in to that new behavior (versus the current one) would most likely be needed.
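Option B could be sketched the same way (again pymongo-style; `auto_discover_dbs` is a hypothetical name for the explicit opt-in setting, and the metric/callback names are illustrative):

```python
def collect_all_db_stats(client, instance, report):
    """Discover DBs via listDatabases (requires the clusterMonitor
    role) and report dbStats() metrics for each of them.

    Without the opt-in flag, the check keeps its current
    single-DB behavior and this code path does nothing.
    """
    if not instance.get("auto_discover_dbs", False):
        return
    listing = client.admin.command("listDatabases")
    for entry in listing["databases"]:
        db_name = entry["name"]
        stats = client[db_name].command("dbStats")
        report("mongodb.dbstats.dataSize",
               stats.get("dataSize", 0),
               tags=["db:" + db_name])
```

Since `listDatabases` must run against `admin`, this also matches the authentication-database situation described above: one admin URI, all DBs covered.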
In both options, each DB can be included as a tag when reporting its metrics.
IMO, option B is the most attractive: it's exactly what `clusterMonitor` is made for. Sorry for the rather long message; I just wanted to be thorough about the context and the documentation ramifications.
Great! As you might have guessed from that lengthy intro, we're very interested in this feature. I unfortunately don't have the time to actually work on the PR, but would be happy to help with the specs if need be.