diff --git a/.chloggen/sqlserver_direct_connect.yaml b/.chloggen/sqlserver_direct_connect.yaml new file mode 100644 index 000000000000..811a08f66038 --- /dev/null +++ b/.chloggen/sqlserver_direct_connect.yaml @@ -0,0 +1,30 @@ +# Use this changelog template to create an entry for release notes. + +# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix' +change_type: enhancement + +# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver) +component: sqlserverreceiver + +# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`). +note: Enable direct connection to SQL Server + +# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists. +issues: [30297] + +# (Optional) One or more lines of additional information to render under the primary note. +# These lines will be padded with 2 spaces and then inserted directly into the document. +# Use pipe (|) for multiline entries. +subtext: | + Directly connecting to SQL Server will enable the receiver to gather more metrics + for observing the SQL Server instance. The first metric added with this update is + `sqlserver.database.io.read_latency`. + +# If your change doesn't affect end users or the exported elements of any package, +# you should instead start your pull request title with [chore] or use the "Skip Changelog" label. +# Optional: The change log or logs in which this entry should be included. +# e.g. '[user]' or '[user, api]' +# Include 'user' if the change is relevant to end users. +# Include 'api' if there is a change to a library API. 
+# Default: '[user]' change_logs: [] diff --git a/receiver/sqlserverreceiver/README.md b/receiver/sqlserverreceiver/README.md index b8c9669eacc7..2b2bf40d23ce 100644 --- a/receiver/sqlserverreceiver/README.md +++ b/receiver/sqlserverreceiver/README.md @@ -13,8 +13,9 @@ [contrib]: https://github.com/open-telemetry/opentelemetry-collector-releases/tree/main/distributions/otelcol-contrib -The `sqlserver` receiver grabs metrics about a Microsoft SQL Server instance using the Windows Performance Counters. -Because of this, it is a Windows only receiver. +The `sqlserver` receiver collects metrics about a Microsoft SQL Server instance. The receiver works either by using the +Windows Performance Counters or by directly connecting to the instance and querying it. Windows Performance Counters +are only available when running on Windows. Make sure to run the collector as administrator in order to collect all performance counters for metrics. @@ -22,10 +23,19 @@ Make sure to run the collector as administrator in order to collect all performa The following settings are optional: - `collection_interval` (default = `10s`): The interval at which metrics should be emitted by this receiver. +- `instance_name` (optional): The instance name identifies the specific SQL Server instance being monitored. + If unspecified, metrics will be scraped from all instances. If configured, `computer_name` must also be set + when running on Windows. + +Direct connection options (optional, but all must be specified to enable a direct connection): +- `username`: The username used to connect to the SQL Server instance. +- `password`: The password used to connect to the SQL Server instance. +- `server`: IP address or hostname of the SQL Server instance to connect to. +- `port`: Port of the SQL Server instance to connect to. -To collect from a SQL Server with a named instance, both `computer_name` and `instance_name` are required. For a default SQL Server setup, these settings are optional. 
+Windows-specific options: - `computer_name` (optional): The computer name identifies the SQL Server name or IP address of the computer being monitored. -- `instance_name` (optional): The instance name identifies the specific SQL Server instance being monitored. + If specified, `instance_name` must also be set. This option is ignored in non-Windows environments. Example: @@ -33,9 +43,15 @@ Example: receivers: sqlserver: collection_interval: 10s + sqlserver/1: + collection_interval: 5s + username: sa + password: securepassword + server: 0.0.0.0 + port: 1433 ``` -When a named instance is used, a computer name and a instance name must be specified. +When a named instance is used on Windows, a computer name and an instance name must be specified. Example with named instance: ```yaml diff --git a/receiver/sqlserverreceiver/documentation.md b/receiver/sqlserverreceiver/documentation.md index e08acc4c6828..e2ee51fea4d8 100644 --- a/receiver/sqlserverreceiver/documentation.md +++ b/receiver/sqlserverreceiver/documentation.md @@ -178,6 +178,34 @@ Number of users connected to the SQL Server. | ---- | ----------- | ---------- | | {connections} | Gauge | Int | +## Optional Metrics + +The following metrics are not emitted by default. Each of them can be enabled by applying the following configuration: + +```yaml +metrics: + <metric_name>: + enabled: true +``` + +### sqlserver.database.io.read_latency + +Total time that the users waited for reads issued on this file. + +This metric is only available when the receiver is configured to directly connect to SQL Server. + +| Unit | Metric Type | Value Type | Aggregation Temporality | Monotonic | | ---- | ----------- | ---------- | ----------------------- | --------- | | s | Sum | Double | Cumulative | true | + +#### Attributes + +| Name | Description | Values | | ---- | ----------- | ------ | | physical_filename | The physical filename of the file being monitored. 
| Any Str | +| logical_filename | The logical filename of the file being monitored. | Any Str | +| file_type | The type of file being monitored. | Any Str | + ## Resource Attributes | Name | Description | Values | Enabled | diff --git a/receiver/sqlserverreceiver/factory.go b/receiver/sqlserverreceiver/factory.go index 4b3203326c79..90e20ead901b 100644 --- a/receiver/sqlserverreceiver/factory.go +++ b/receiver/sqlserverreceiver/factory.go @@ -38,8 +38,11 @@ func createDefaultConfig() component.Config { func setupQueries(cfg *Config) []string { var queries []string - // TODO: Only add query if metrics are enabled - queries = append(queries, getSQLServerDatabaseIOQuery(cfg.InstanceName)) + + if cfg.MetricsBuilderConfig.Metrics.SqlserverDatabaseIoReadLatency.Enabled { + queries = append(queries, getSQLServerDatabaseIOQuery(cfg.InstanceName)) + } + return queries } diff --git a/receiver/sqlserverreceiver/factory_others_test.go b/receiver/sqlserverreceiver/factory_others_test.go index b52499bcf1ab..9df167614d26 100644 --- a/receiver/sqlserverreceiver/factory_others_test.go +++ b/receiver/sqlserverreceiver/factory_others_test.go @@ -30,6 +30,7 @@ func TestCreateMetricsReceiverOtherOS(t *testing.T) { cfg.Server = "0.0.0.0" cfg.Port = 1433 cfg.InstanceName = "instanceName" + cfg.Metrics.SqlserverDatabaseIoReadLatency.Enabled = true require.NoError(t, cfg.Validate()) require.True(t, directDBConnectionEnabled(cfg)) diff --git a/receiver/sqlserverreceiver/factory_test.go b/receiver/sqlserverreceiver/factory_test.go index d35c3de42209..74044aa1da27 100644 --- a/receiver/sqlserverreceiver/factory_test.go +++ b/receiver/sqlserverreceiver/factory_test.go @@ -87,6 +87,7 @@ func TestCreateMetricsReceiver(t *testing.T) { cfg.Server = "0.0.0.0" cfg.Port = 1433 require.NoError(t, cfg.Validate()) + cfg.Metrics.SqlserverDatabaseIoReadLatency.Enabled = true require.True(t, directDBConnectionEnabled(cfg)) require.Equal(t, "server=0.0.0.0;user id=sa;password=password;port=1433", 
getDBConnectionString(cfg)) diff --git a/receiver/sqlserverreceiver/internal/metadata/generated_config.go b/receiver/sqlserverreceiver/internal/metadata/generated_config.go index b4b8ebe5e7ca..1b915f45f047 100644 --- a/receiver/sqlserverreceiver/internal/metadata/generated_config.go +++ b/receiver/sqlserverreceiver/internal/metadata/generated_config.go @@ -31,6 +31,7 @@ type MetricsConfig struct { SqlserverBatchRequestRate MetricConfig `mapstructure:"sqlserver.batch.request.rate"` SqlserverBatchSQLCompilationRate MetricConfig `mapstructure:"sqlserver.batch.sql_compilation.rate"` SqlserverBatchSQLRecompilationRate MetricConfig `mapstructure:"sqlserver.batch.sql_recompilation.rate"` + SqlserverDatabaseIoReadLatency MetricConfig `mapstructure:"sqlserver.database.io.read_latency"` SqlserverLockWaitRate MetricConfig `mapstructure:"sqlserver.lock.wait.rate"` SqlserverLockWaitTimeAvg MetricConfig `mapstructure:"sqlserver.lock.wait_time.avg"` SqlserverPageBufferCacheHitRatio MetricConfig `mapstructure:"sqlserver.page.buffer_cache.hit_ratio"` @@ -61,6 +62,9 @@ func DefaultMetricsConfig() MetricsConfig { SqlserverBatchSQLRecompilationRate: MetricConfig{ Enabled: true, }, + SqlserverDatabaseIoReadLatency: MetricConfig{ + Enabled: false, + }, SqlserverLockWaitRate: MetricConfig{ Enabled: true, }, diff --git a/receiver/sqlserverreceiver/internal/metadata/generated_config_test.go b/receiver/sqlserverreceiver/internal/metadata/generated_config_test.go index 4e8ad0069a83..b1c2ca91094f 100644 --- a/receiver/sqlserverreceiver/internal/metadata/generated_config_test.go +++ b/receiver/sqlserverreceiver/internal/metadata/generated_config_test.go @@ -29,6 +29,7 @@ func TestMetricsBuilderConfig(t *testing.T) { SqlserverBatchRequestRate: MetricConfig{Enabled: true}, SqlserverBatchSQLCompilationRate: MetricConfig{Enabled: true}, SqlserverBatchSQLRecompilationRate: MetricConfig{Enabled: true}, + SqlserverDatabaseIoReadLatency: MetricConfig{Enabled: true}, SqlserverLockWaitRate: 
MetricConfig{Enabled: true}, SqlserverLockWaitTimeAvg: MetricConfig{Enabled: true}, SqlserverPageBufferCacheHitRatio: MetricConfig{Enabled: true}, @@ -61,6 +62,7 @@ func TestMetricsBuilderConfig(t *testing.T) { SqlserverBatchRequestRate: MetricConfig{Enabled: false}, SqlserverBatchSQLCompilationRate: MetricConfig{Enabled: false}, SqlserverBatchSQLRecompilationRate: MetricConfig{Enabled: false}, + SqlserverDatabaseIoReadLatency: MetricConfig{Enabled: false}, SqlserverLockWaitRate: MetricConfig{Enabled: false}, SqlserverLockWaitTimeAvg: MetricConfig{Enabled: false}, SqlserverPageBufferCacheHitRatio: MetricConfig{Enabled: false}, diff --git a/receiver/sqlserverreceiver/internal/metadata/generated_metrics.go b/receiver/sqlserverreceiver/internal/metadata/generated_metrics.go index 1ec90d7e6178..71475f25e04f 100644 --- a/receiver/sqlserverreceiver/internal/metadata/generated_metrics.go +++ b/receiver/sqlserverreceiver/internal/metadata/generated_metrics.go @@ -185,6 +185,61 @@ func newMetricSqlserverBatchSQLRecompilationRate(cfg MetricConfig) metricSqlserv return m } +type metricSqlserverDatabaseIoReadLatency struct { + data pmetric.Metric // data buffer for generated metric. + config MetricConfig // metric config provided by user. + capacity int // max observed number of data points added to the metric. +} + +// init fills sqlserver.database.io.read_latency metric with initial data. 
+func (m *metricSqlserverDatabaseIoReadLatency) init() { + m.data.SetName("sqlserver.database.io.read_latency") + m.data.SetDescription("Total time that the users waited for reads issued on this file.") + m.data.SetUnit("s") + m.data.SetEmptySum() + m.data.Sum().SetIsMonotonic(true) + m.data.Sum().SetAggregationTemporality(pmetric.AggregationTemporalityCumulative) + m.data.Sum().DataPoints().EnsureCapacity(m.capacity) +} + +func (m *metricSqlserverDatabaseIoReadLatency) recordDataPoint(start pcommon.Timestamp, ts pcommon.Timestamp, val float64, physicalFilenameAttributeValue string, logicalFilenameAttributeValue string, fileTypeAttributeValue string) { + if !m.config.Enabled { + return + } + dp := m.data.Sum().DataPoints().AppendEmpty() + dp.SetStartTimestamp(start) + dp.SetTimestamp(ts) + dp.SetDoubleValue(val) + dp.Attributes().PutStr("physical_filename", physicalFilenameAttributeValue) + dp.Attributes().PutStr("logical_filename", logicalFilenameAttributeValue) + dp.Attributes().PutStr("file_type", fileTypeAttributeValue) +} + +// updateCapacity saves max length of data point slices that will be used for the slice capacity. +func (m *metricSqlserverDatabaseIoReadLatency) updateCapacity() { + if m.data.Sum().DataPoints().Len() > m.capacity { + m.capacity = m.data.Sum().DataPoints().Len() + } +} + +// emit appends recorded metric data to a metrics slice and prepares it for recording another set of data points. 
+func (m *metricSqlserverDatabaseIoReadLatency) emit(metrics pmetric.MetricSlice) { + if m.config.Enabled && m.data.Sum().DataPoints().Len() > 0 { + m.updateCapacity() + m.data.MoveTo(metrics.AppendEmpty()) + m.init() + } +} + +func newMetricSqlserverDatabaseIoReadLatency(cfg MetricConfig) metricSqlserverDatabaseIoReadLatency { + m := metricSqlserverDatabaseIoReadLatency{config: cfg} + if cfg.Enabled { + m.data = pmetric.NewMetric() + m.init() + } + return m +} + type metricSqlserverLockWaitRate struct { data pmetric.Metric // data buffer for generated metric. config MetricConfig // metric config provided by user. @@ -1037,6 +1092,7 @@ type MetricsBuilder struct { metricSqlserverBatchRequestRate metricSqlserverBatchRequestRate metricSqlserverBatchSQLCompilationRate metricSqlserverBatchSQLCompilationRate metricSqlserverBatchSQLRecompilationRate metricSqlserverBatchSQLRecompilationRate + metricSqlserverDatabaseIoReadLatency metricSqlserverDatabaseIoReadLatency metricSqlserverLockWaitRate metricSqlserverLockWaitRate metricSqlserverLockWaitTimeAvg metricSqlserverLockWaitTimeAvg metricSqlserverPageBufferCacheHitRatio metricSqlserverPageBufferCacheHitRatio @@ -1075,6 +1131,7 @@ func NewMetricsBuilder(mbc MetricsBuilderConfig, settings receiver.CreateSetting metricSqlserverBatchRequestRate: newMetricSqlserverBatchRequestRate(mbc.Metrics.SqlserverBatchRequestRate), metricSqlserverBatchSQLCompilationRate: newMetricSqlserverBatchSQLCompilationRate(mbc.Metrics.SqlserverBatchSQLCompilationRate), metricSqlserverBatchSQLRecompilationRate: newMetricSqlserverBatchSQLRecompilationRate(mbc.Metrics.SqlserverBatchSQLRecompilationRate), + metricSqlserverDatabaseIoReadLatency: newMetricSqlserverDatabaseIoReadLatency(mbc.Metrics.SqlserverDatabaseIoReadLatency), metricSqlserverLockWaitRate: newMetricSqlserverLockWaitRate(mbc.Metrics.SqlserverLockWaitRate), metricSqlserverLockWaitTimeAvg: newMetricSqlserverLockWaitTimeAvg(mbc.Metrics.SqlserverLockWaitTimeAvg), 
metricSqlserverPageBufferCacheHitRatio: newMetricSqlserverPageBufferCacheHitRatio(mbc.Metrics.SqlserverPageBufferCacheHitRatio), @@ -1177,6 +1234,7 @@ func (mb *MetricsBuilder) EmitForResource(rmo ...ResourceMetricsOption) { mb.metricSqlserverBatchRequestRate.emit(ils.Metrics()) mb.metricSqlserverBatchSQLCompilationRate.emit(ils.Metrics()) mb.metricSqlserverBatchSQLRecompilationRate.emit(ils.Metrics()) + mb.metricSqlserverDatabaseIoReadLatency.emit(ils.Metrics()) mb.metricSqlserverLockWaitRate.emit(ils.Metrics()) mb.metricSqlserverLockWaitTimeAvg.emit(ils.Metrics()) mb.metricSqlserverPageBufferCacheHitRatio.emit(ils.Metrics()) @@ -1240,6 +1298,11 @@ func (mb *MetricsBuilder) RecordSqlserverBatchSQLRecompilationRateDataPoint(ts p mb.metricSqlserverBatchSQLRecompilationRate.recordDataPoint(mb.startTime, ts, val) } +// RecordSqlserverDatabaseIoReadLatencyDataPoint adds a data point to sqlserver.database.io.read_latency metric. +func (mb *MetricsBuilder) RecordSqlserverDatabaseIoReadLatencyDataPoint(ts pcommon.Timestamp, val float64, physicalFilenameAttributeValue string, logicalFilenameAttributeValue string, fileTypeAttributeValue string) { + mb.metricSqlserverDatabaseIoReadLatency.recordDataPoint(mb.startTime, ts, val, physicalFilenameAttributeValue, logicalFilenameAttributeValue, fileTypeAttributeValue) +} + // RecordSqlserverLockWaitRateDataPoint adds a data point to sqlserver.lock.wait.rate metric. 
func (mb *MetricsBuilder) RecordSqlserverLockWaitRateDataPoint(ts pcommon.Timestamp, val float64) { mb.metricSqlserverLockWaitRate.recordDataPoint(mb.startTime, ts, val) diff --git a/receiver/sqlserverreceiver/internal/metadata/generated_metrics_test.go b/receiver/sqlserverreceiver/internal/metadata/generated_metrics_test.go index 0dff925763bb..11fbcc4b5303 100644 --- a/receiver/sqlserverreceiver/internal/metadata/generated_metrics_test.go +++ b/receiver/sqlserverreceiver/internal/metadata/generated_metrics_test.go @@ -80,6 +80,9 @@ func TestMetricsBuilder(t *testing.T) { allMetricsCount++ mb.RecordSqlserverBatchSQLRecompilationRateDataPoint(ts, 1) + allMetricsCount++ + mb.RecordSqlserverDatabaseIoReadLatencyDataPoint(ts, 1, "physical_filename-val", "logical_filename-val", "file_type-val") + defaultMetricsCount++ allMetricsCount++ mb.RecordSqlserverLockWaitRateDataPoint(ts, 1) @@ -210,6 +213,29 @@ func TestMetricsBuilder(t *testing.T) { assert.Equal(t, ts, dp.Timestamp()) assert.Equal(t, pmetric.NumberDataPointValueTypeDouble, dp.ValueType()) assert.Equal(t, float64(1), dp.DoubleValue()) + case "sqlserver.database.io.read_latency": + assert.False(t, validatedMetrics["sqlserver.database.io.read_latency"], "Found a duplicate in the metrics slice: sqlserver.database.io.read_latency") + validatedMetrics["sqlserver.database.io.read_latency"] = true + assert.Equal(t, pmetric.MetricTypeSum, ms.At(i).Type()) + assert.Equal(t, 1, ms.At(i).Sum().DataPoints().Len()) + assert.Equal(t, "Total time that the users waited for reads issued on this file.", ms.At(i).Description()) + assert.Equal(t, "s", ms.At(i).Unit()) + assert.Equal(t, true, ms.At(i).Sum().IsMonotonic()) + assert.Equal(t, pmetric.AggregationTemporalityCumulative, ms.At(i).Sum().AggregationTemporality()) + dp := ms.At(i).Sum().DataPoints().At(0) + assert.Equal(t, start, dp.StartTimestamp()) + assert.Equal(t, ts, dp.Timestamp()) + assert.Equal(t, pmetric.NumberDataPointValueTypeDouble, dp.ValueType()) + 
assert.Equal(t, float64(1), dp.DoubleValue()) + attrVal, ok := dp.Attributes().Get("physical_filename") + assert.True(t, ok) + assert.EqualValues(t, "physical_filename-val", attrVal.Str()) + attrVal, ok = dp.Attributes().Get("logical_filename") + assert.True(t, ok) + assert.EqualValues(t, "logical_filename-val", attrVal.Str()) + attrVal, ok = dp.Attributes().Get("file_type") + assert.True(t, ok) + assert.EqualValues(t, "file_type-val", attrVal.Str()) case "sqlserver.lock.wait.rate": assert.False(t, validatedMetrics["sqlserver.lock.wait.rate"], "Found a duplicate in the metrics slice: sqlserver.lock.wait.rate") validatedMetrics["sqlserver.lock.wait.rate"] = true diff --git a/receiver/sqlserverreceiver/internal/metadata/testdata/config.yaml b/receiver/sqlserverreceiver/internal/metadata/testdata/config.yaml index 41845c65e79a..e7470fd6536f 100644 --- a/receiver/sqlserverreceiver/internal/metadata/testdata/config.yaml +++ b/receiver/sqlserverreceiver/internal/metadata/testdata/config.yaml @@ -7,6 +7,8 @@ all_set: enabled: true sqlserver.batch.sql_recompilation.rate: enabled: true + sqlserver.database.io.read_latency: + enabled: true sqlserver.lock.wait.rate: enabled: true sqlserver.lock.wait_time.avg: @@ -56,6 +58,8 @@ none_set: enabled: false sqlserver.batch.sql_recompilation.rate: enabled: false + sqlserver.database.io.read_latency: + enabled: false sqlserver.lock.wait.rate: enabled: false sqlserver.lock.wait_time.avg: diff --git a/receiver/sqlserverreceiver/metadata.yaml b/receiver/sqlserverreceiver/metadata.yaml index fb46588a95bc..ae21ac60ce2a 100644 --- a/receiver/sqlserverreceiver/metadata.yaml +++ b/receiver/sqlserverreceiver/metadata.yaml @@ -31,6 +31,15 @@ attributes: description: The page operation types. type: string enum: [read, write] + physical_filename: + description: The physical filename of the file being monitored. + type: string + logical_filename: + description: The logical filename of the file being monitored. 
+ type: string + file_type: + description: The type of file being monitored. + type: string metrics: sqlserver.user.connection.count: @@ -158,6 +167,16 @@ metrics: unit: "{transactions}/s" gauge: value_type: double + sqlserver.database.io.read_latency: + enabled: false + description: Total time that the users waited for reads issued on this file. + unit: "s" + sum: + monotonic: true + aggregation_temporality: cumulative + value_type: double + attributes: [physical_filename, logical_filename, file_type] + extended_documentation: This metric is only available when the receiver is configured to directly connect to SQL Server. tests: config: diff --git a/receiver/sqlserverreceiver/scraper.go b/receiver/sqlserverreceiver/scraper.go index 3da7cf452693..16f65d27bad4 100644 --- a/receiver/sqlserverreceiver/scraper.go +++ b/receiver/sqlserverreceiver/scraper.go @@ -9,8 +9,10 @@ import ( "errors" "fmt" "strconv" + "time" "go.opentelemetry.io/collector/component" + "go.opentelemetry.io/collector/pdata/pcommon" "go.opentelemetry.io/collector/pdata/pmetric" "go.opentelemetry.io/collector/receiver/scraperhelper" "go.uber.org/zap" @@ -103,6 +105,9 @@ func (s *sqlServerScraperHelper) recordDatabaseIOMetrics(ctx context.Context, rb const computerNameKey = "computer_name" const databaseNameKey = "database_name" const instanceNameKey = "sql_instance" + const physicalFilenameKey = "physical_filename" + const logicalFilenameKey = "logical_filename" + const fileTypeKey = "file_type" const readLatencyMsKey = "read_latency_ms" rows, err := s.client.QueryRows(ctx) @@ -115,6 +120,8 @@ func (s *sqlServerScraperHelper) recordDatabaseIOMetrics(ctx context.Context, rb } var errs []error + now := pcommon.NewTimestampFromTime(time.Now()) + var val float64 for i, row := range rows { if i == 0 { rb.SetSqlserverComputerName(row[computerNameKey]) @@ -122,12 +129,13 @@ func (s *sqlServerScraperHelper) recordDatabaseIOMetrics(ctx context.Context, rb rb.SetSqlserverInstanceName(row[instanceNameKey]) } - 
_, err = strconv.ParseFloat(row[readLatencyMsKey], 64) + val, err = strconv.ParseFloat(row[readLatencyMsKey], 64) if err != nil { err = fmt.Errorf("row %d: %w", i, err) errs = append(errs, err) + } else { + s.mb.RecordSqlserverDatabaseIoReadLatencyDataPoint(now, val/1e3, row[physicalFilenameKey], row[logicalFilenameKey], row[fileTypeKey]) } - // TODO: Set metrics here if no parsing error occurred. } if len(rows) == 0 { diff --git a/receiver/sqlserverreceiver/scraper_test.go b/receiver/sqlserverreceiver/scraper_test.go index 00c63af6dfc9..3538be3d1093 100644 --- a/receiver/sqlserverreceiver/scraper_test.go +++ b/receiver/sqlserverreceiver/scraper_test.go @@ -20,7 +20,7 @@ import ( "github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatatest/pmetrictest" ) -func TestSuccessfulScrape(t *testing.T) { +func TestEmptyScrape(t *testing.T) { cfg := createDefaultConfig().(*Config) cfg.Username = "sa" cfg.Password = "password" @@ -31,11 +31,23 @@ func TestSuccessfulScrape(t *testing.T) { assert.NoError(t, cfg.Validate()) // Ensure there aren't any scrapers when all metrics are disabled. + cfg.MetricsBuilderConfig.Metrics.SqlserverDatabaseIoReadLatency.Enabled = false scrapers := setupSQLServerScrapers(receivertest.NewNopCreateSettings(), cfg) - assert.NotEmpty(t, scrapers) + assert.Empty(t, scrapers) +} + +func TestSuccessfulScrape(t *testing.T) { + cfg := createDefaultConfig().(*Config) + cfg.Username = "sa" + cfg.Password = "password" + cfg.Port = 1433 + cfg.Server = "0.0.0.0" + cfg.MetricsBuilderConfig.ResourceAttributes.SqlserverInstanceName.Enabled = true + assert.NoError(t, cfg.Validate()) // Ensure all metrics are received when all are enabled. 
- scrapers = setupSQLServerScrapers(receivertest.NewNopCreateSettings(), cfg) + cfg.MetricsBuilderConfig.Metrics.SqlserverDatabaseIoReadLatency.Enabled = true + scrapers := setupSQLServerScrapers(receivertest.NewNopCreateSettings(), cfg) assert.NotNil(t, scrapers) for _, scraper := range scrapers { @@ -75,6 +87,7 @@ func TestScrapeInvalidQuery(t *testing.T) { assert.NoError(t, cfg.Validate()) // Ensure all metrics are received when all are enabled. + cfg.MetricsBuilderConfig.Metrics.SqlserverDatabaseIoReadLatency.Enabled = true scrapers := setupSQLServerScrapers(receivertest.NewNopCreateSettings(), cfg) assert.NotNil(t, scrapers) diff --git a/receiver/sqlserverreceiver/testdata/expected_database_io.yaml b/receiver/sqlserverreceiver/testdata/expected_database_io.yaml index 0967ef424bce..1fc196a61342 100644 --- a/receiver/sqlserverreceiver/testdata/expected_database_io.yaml +++ b/receiver/sqlserverreceiver/testdata/expected_database_io.yaml @@ -1 +1,216 @@ -{} +resourceMetrics: + - resource: + attributes: + - key: sqlserver.database.name + value: + stringValue: master + - key: sqlserver.instance.name + value: + stringValue: 8cac97ac9b8f + scopeMetrics: + - metrics: + - description: Total time that the users waited for reads issued on this file. 
+ name: sqlserver.database.io.read_latency + sum: + aggregationTemporality: 2 + dataPoints: + - asDouble: 0.005 + attributes: + - key: file_type + value: + stringValue: LOG + - key: logical_filename + value: + stringValue: MSDBLog + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/MSDBLog.ldf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.008 + attributes: + - key: file_type + value: + stringValue: LOG + - key: logical_filename + value: + stringValue: mastlog + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/mastlog.ldf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.007 + attributes: + - key: file_type + value: + stringValue: LOG + - key: logical_filename + value: + stringValue: modellog + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/modellog.ldf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.002 + attributes: + - key: file_type + value: + stringValue: LOG + - key: logical_filename + value: + stringValue: templog + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/templog.ldf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.051 + attributes: + - key: file_type + value: + stringValue: ROWS + - key: logical_filename + value: + stringValue: MSDBData + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/MSDBData.mdf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.062 + attributes: + - key: file_type + value: + stringValue: ROWS + - key: logical_filename + value: + stringValue: master + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/master.mdf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.021 + attributes: + - key: file_type + value: + stringValue: ROWS + - key: logical_filename + value: + stringValue: modeldev + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/model.mdf + 
startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.009 + attributes: + - key: file_type + value: + stringValue: ROWS + - key: logical_filename + value: + stringValue: tempdev + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/tempdb.mdf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.001 + attributes: + - key: file_type + value: + stringValue: ROWS + - key: logical_filename + value: + stringValue: tempdev2 + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/tempdb2.ndf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.002 + attributes: + - key: file_type + value: + stringValue: ROWS + - key: logical_filename + value: + stringValue: tempdev3 + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/tempdb3.ndf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.001 + attributes: + - key: file_type + value: + stringValue: ROWS + - key: logical_filename + value: + stringValue: tempdev4 + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/tempdb4.ndf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.002 + attributes: + - key: file_type + value: + stringValue: ROWS + - key: logical_filename + value: + stringValue: tempdev5 + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/tempdb5.ndf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.002 + attributes: + - key: file_type + value: + stringValue: ROWS + - key: logical_filename + value: + stringValue: tempdev6 + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/tempdb6.ndf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + - asDouble: 0.001 + attributes: + - key: file_type + value: + stringValue: ROWS + - key: logical_filename + value: + stringValue: tempdev7 + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/tempdb7.ndf + startTimeUnixNano: "1000000" + 
timeUnixNano: "2000000" + - asDouble: 0.001 + attributes: + - key: file_type + value: + stringValue: ROWS + - key: logical_filename + value: + stringValue: tempdev8 + - key: physical_filename + value: + stringValue: /var/opt/mssql/data/tempdb8.ndf + startTimeUnixNano: "1000000" + timeUnixNano: "2000000" + isMonotonic: true + unit: s + scope: + name: otelcol/sqlserverreceiver + version: latest
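The scraper change above parses the `read_latency_ms` column as a float and divides by `1e3` before recording, because the SQL query reports latency in milliseconds while the metric's unit is `s` (which is why the expected testdata values like `0.005` are a thousandth of the raw row values). A minimal standalone sketch of that conversion, assuming an illustrative `parseReadLatencySeconds` helper and a `map[string]string` row shape like the one the scraper iterates over (neither name is part of the actual receiver API):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseReadLatencySeconds mirrors the unit conversion in recordDatabaseIOMetrics:
// the query returns read latency in milliseconds as a string column, and the
// recorded value must be in seconds to match the metric's declared unit "s".
// The helper name and row-map shape are illustrative, not the receiver's API.
func parseReadLatencySeconds(row map[string]string) (float64, error) {
	const readLatencyMsKey = "read_latency_ms"
	ms, err := strconv.ParseFloat(row[readLatencyMsKey], 64)
	if err != nil {
		// The scraper collects such errors per row instead of aborting the scrape.
		return 0, fmt.Errorf("parsing read latency %q: %w", row[readLatencyMsKey], err)
	}
	return ms / 1e3, nil // milliseconds -> seconds
}

func main() {
	sec, err := parseReadLatencySeconds(map[string]string{"read_latency_ms": "5"})
	fmt.Println(sec, err) // prints: 0.005 <nil>
}
```

Recording only happens on the success path (the `else` branch in the diff), so a malformed row contributes an error to the scrape result rather than a bogus data point.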