Automate generating compatibility table for different logstash version #158

Closed
kuskoman opened this issue Aug 12, 2023 · 4 comments
Labels
documentation (Improvements or additions to documentation), duplicate (This issue or pull request already exists), github_actions (Pull requests that update GitHub Actions code)

Comments

@kuskoman (Owner)

We need to automate the process of finding compatible metrics for multiple versions of Logstash. We need to:

  • create a development setup so that multiple Logstash versions can be checked simultaneously
  • create a script that checks whether all metrics work properly with a given Logstash version (some will be left blank on old Logstash versions)
  • output a table with metric compatibility to a Markdown file (a rough sketch of the idea follows below)
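
A rough sketch of how such a script might work is shown below. This is purely illustrative: the version list, ports, and endpoints are hypothetical assumptions, not the actual setup.

```go
// generate_compatibility_table.go - an illustrative sketch, not the actual
// implementation. Assumes one exporter instance per Logstash version is
// already running (e.g. via docker-compose) on the hypothetical endpoints
// listed in the versions map below.
package main

import (
    "bufio"
    "fmt"
    "net/http"
    "os"
    "sort"
    "strings"
)

// scrapeMetricNames fetches a /metrics endpoint and returns the set of
// metric names found in the Prometheus text exposition format.
func scrapeMetricNames(url string) (map[string]bool, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    names := map[string]bool{}
    scanner := bufio.NewScanner(resp.Body)
    for scanner.Scan() {
        line := scanner.Text()
        // Skip blank lines and comments (# HELP / # TYPE).
        if line == "" || strings.HasPrefix(line, "#") {
            continue
        }
        // The metric name is everything before the first '{' or space.
        end := strings.IndexAny(line, "{ ")
        if end == -1 {
            end = len(line)
        }
        names[line[:end]] = true
    }
    return names, scanner.Err()
}

func main() {
    // Hypothetical mapping of Logstash version -> exporter endpoint.
    versions := []string{"7.17.0", "8.9.0"}
    endpoints := map[string]string{
        "7.17.0": "http://localhost:9198/metrics",
        "8.9.0":  "http://localhost:9199/metrics",
    }

    all := map[string]bool{}
    perVersion := map[string]map[string]bool{}
    for _, v := range versions {
        names, err := scrapeMetricNames(endpoints[v])
        if err != nil {
            fmt.Fprintf(os.Stderr, "skipping %s: %v\n", v, err)
            continue
        }
        perVersion[v] = names
        for n := range names {
            all[n] = true
        }
    }

    var metrics []string
    for n := range all {
        metrics = append(metrics, n)
    }
    sort.Strings(metrics)

    // Emit the Markdown compatibility table; a checkmark means the metric
    // was exposed for that version, a blank cell means it wasn't.
    fmt.Println("| Metric | " + strings.Join(versions, " | ") + " |")
    fmt.Println("| ------ |" + strings.Repeat(" --- |", len(versions)))
    for _, m := range metrics {
        row := "| " + m
        for _, v := range versions {
            if perVersion[v][m] {
                row += " | ✅"
            } else {
                row += " |  "
            }
        }
        fmt.Println(row + " |")
    }
}
```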
@kuskoman kuskoman added documentation Improvements or additions to documentation github_actions Pull requests that update GitHub Actions code sweep labels Aug 12, 2023
@sweep-ai (Contributor)

sweep-ai bot commented Aug 12, 2023

Here's the PR! #160.

To get Sweep to recreate this ticket, leave a comment prefixed with "sweep:" or edit the issue.


Step 1: 🔍 Code Search

I found the following snippets in your repository. I will now analyze these snippets and come up with a plan.

Some code snippets I looked at. If some file is missing from here, you can mention the path in the ticket description.

## Building
### Makefile
#### Available Commands
<!--- GENERATED by ./scripts/add_descriptions_to_readme.sh --->
- `make all`: Builds binary executables for all OS (Win, Darwin, Linux).
- `make run`: Runs the Go Exporter application.
- `make build-linux`: Builds a binary executable for Linux.
- `make build-darwin`: Builds a binary executable for Darwin.
- `make build-windows`: Builds a binary executable for Windows.
- `make build-docker`: Builds a Docker image for the Go Exporter application.
- `make build-docker-multi`: Builds a multi-arch Docker image (`amd64` and `arm64`).
- `make clean`: Deletes all binary executables in the out directory.
- `make test`: Runs all tests.
- `make test-coverage`: Displays test coverage report.
- `make compose`: Starts a Docker-compose configuration.
- `make wait-for-compose`: Starts a Docker-compose configuration and waits until it's ready.
- `make compose-down`: Stops a Docker-compose configuration.
- `make verify-metrics`: Verifies the metrics from the Go Exporter application.
- `make pull`: Pulls the Docker image from the registry.
- `make logs`: Shows logs from the Docker-compose configuration.
- `make minify`: Minifies the binary executables.
- `make install-helm-readme`: Installs readme-generator-for-helm tool.
- `make helm-readme`: Generates Helm chart README.md file.
- `make help`: Shows info about available commands.
<!--- **************************************************** --->
#### File Structure
The main Go Exporter application is located in the cmd/exporter/main.go file.
The binary executables are saved in the out directory.
#### Example Usage
<!--- GENERATED by ./scripts/add_descriptions_to_readme.sh --->
Builds binary executables for all OS (Win, Darwin, Linux):
make all
Runs the Go Exporter application:
make run
Builds a binary executable for Linux:
make build-linux
Builds a binary executable for Darwin:
make build-darwin
Builds a binary executable for Windows:
make build-windows
Builds a Docker image for the Go Exporter application:
make build-docker
Builds a multi-arch Docker image (`amd64` and `arm64`):
make build-docker-multi
Deletes all binary executables in the out directory:
make clean
Runs all tests:
make test
Displays test coverage report:
make test-coverage
Starts a Docker-compose configuration:
make compose
Starts a Docker-compose configuration and waits until it's ready:
make wait-for-compose
Stops a Docker-compose configuration:
make compose-down
Verifies the metrics from the Go Exporter application:
make verify-metrics
Pulls the Docker image from the registry:
make pull
Shows logs from the Docker-compose configuration:
make logs
Minifies the binary executables:
make minify
Installs readme-generator-for-helm tool:
make install-helm-readme
Generates Helm chart README.md file:
make helm-readme
Shows info about available commands:
make help
<!--- **************************************************** --->
## Helper Scripts
The application repository contains some helper scripts, which can be used to improve the process
of building, testing, and running the application. These scripts are not intended for the end user,
but they can be useful for potential contributors.
The helper scripts are located in the [scripts](./scripts/) directory.
### add_metrics_to_readme.sh
This [script](./scripts/add_metrics_to_readme.sh) is used to add the metrics table to the README.md file.
Usage:
./scripts/add_metrics_to_readme.sh
### create_release_notes.sh
This [script](./scripts/create_release_notes.sh) is used to create release notes for the GitHub release.
Used primarily by the [CI workflow](./.github/workflows/go-application.yml).
### generate_helm_readme.sh
This [script](./scripts/generate_helm_readme.sh) is used to generate Helm chart [README.md](./chart/README.md) file.
The readme contains all the configuration variables from the [values.yaml](./chart/values.yaml) file.
### install_helm_readme_generator.sh
This [script](./scripts/install_helm_readme_generator.sh) is used to install
[readme-generator-for-helm](https://github.com/bitnami-labs/readme-generator-for-helm) tool.
The tool is used to generate Helm chart [README.md](./chart/README.md) file.
The script installs the tool under [helm-generator](./helm-generator) directory.
### verify_metrics.sh
This [script](./scripts/verify_metrics.sh) is used to verify the metrics from the Go Exporter application.
Can be used both locally and in the CI workflow.
./scripts/verify_metrics.sh
## Testing process
The application contains both unit and integration tests. All the tests are executed in the CI workflow.
### Unit Tests
Unit tests are located in the same directories as the tested files.
To run all unit tests, use the following command:
make test
### Integration Tests
Integration tests check whether Prometheus metrics are exposed properly.
To run them, you must first set up the development [docker-compose](./docker-compose.yml) environment:
make wait-for-compose
Then you can run the tests:
make verify-metrics
## Grafana Dashboard
A Grafana Dashboard designed for metrics from Logstash-exporter on Kubernetes is available at https://grafana.com/grafana/dashboards/18628-logstash-on-kubernetes-dashboard/. This dashboard's JSON source is at [excalq/grafana-logstash-kubernetes](https://github.com/excalq/grafana-logstash-kubernetes).
(If not using Kubernetes, change `$pod` to `$instance` in the JSON.)
<img src="https://grafana.com/api/dashboards/18628/images/14184/image" width="300">
## Additional Information
This project's code was reviewed by [Boldly Go](https://www.youtube.com/@boldlygo)
in an awesome [video](https://www.youtube.com/watch?v=Oe6L5ZmqCDE), which helped me
improve the code quality in a huge way.
## Contributing
If you want to contribute to this project, please read the [CONTRIBUTING.md](./CONTRIBUTING.md) file.
## Metrics
Table of exported metrics:
<!-- METRICS_TABLE_START -->
| Name | Type | Description |
| ----------- | ----------- | ----------- |
| logstash_exporter_build_info | gauge | A metric with a constant '1' value labeled by version, revision, branch, goversion from which logstash_exporter was built, and the goos and goarch for the build. |
| logstash_info_build | counter | A metric with a constant '1' value labeled by build date, sha, and snapshot of the logstash instance. |
| logstash_info_node | counter | A metric with a constant '1' value labeled by node name, version, host, http_address, and id of the logstash instance. |
| logstash_info_pipeline_batch_delay | counter | Amount of time to wait for events to fill the batch before sending to the filter and output stages. |
| logstash_info_pipeline_batch_size | counter | Number of events to retrieve from the input queue before sending to the filter and output stages. |
| logstash_info_pipeline_workers | counter | Number of worker threads that will process pipeline events. |
| logstash_info_status | counter | A metric with a constant '1' value labeled by status. |
| logstash_info_up | gauge | A metric that returns 1 if the node is up, 0 otherwise. |
| logstash_stats_events_duration_millis | gauge | Duration of events processing in milliseconds. |
| logstash_stats_events_filtered | gauge | Number of events filtered out. |
| logstash_stats_events_in | gauge | Number of events received. |
| logstash_stats_events_out | gauge | Number of events out. |
| logstash_stats_events_queue_push_duration_millis | gauge | Duration of events push to queue in milliseconds. |
| logstash_stats_flow_filter_current | gauge | Current number of events in the filter queue. |
| logstash_stats_flow_filter_lifetime | gauge | Lifetime number of events in the filter queue. |
| logstash_stats_flow_input_current | gauge | Current number of events in the input queue. |
| logstash_stats_flow_input_lifetime | gauge | Lifetime number of events in the input queue. |
| logstash_stats_flow_output_current | gauge | Current number of events in the output queue. |
| logstash_stats_flow_output_lifetime | gauge | Lifetime number of events in the output queue. |
| logstash_stats_flow_queue_backpressure_current | gauge | Current number of events in the backpressure queue. |
| logstash_stats_flow_queue_backpressure_lifetime | gauge | Lifetime number of events in the backpressure queue. |
| logstash_stats_flow_worker_concurrency_current | gauge | Current number of workers. |
| logstash_stats_flow_worker_concurrency_lifetime | gauge | Lifetime number of workers. |
| logstash_stats_jvm_mem_heap_committed_bytes | gauge | Amount of heap memory in bytes that is committed for the Java virtual machine to use. |
| logstash_stats_jvm_mem_heap_max_bytes | gauge | Maximum amount of heap memory in bytes that can be used for memory management. |
| logstash_stats_jvm_mem_heap_used_bytes | gauge | Amount of used heap memory in bytes. |
| logstash_stats_jvm_mem_heap_used_percent | gauge | Percentage of the heap memory that is used. |
| logstash_stats_jvm_mem_non_heap_committed_bytes | gauge | Amount of non-heap memory in bytes that is committed for the Java virtual machine to use. |
| logstash_stats_jvm_mem_pool_committed_bytes | gauge | Amount of bytes that are committed for the Java virtual machine to use in a given JVM memory pool. |
| logstash_stats_jvm_mem_pool_max_bytes | gauge | Maximum amount of bytes that can be used in a given JVM memory pool. |
| logstash_stats_jvm_mem_pool_peak_max_bytes | gauge | Highest value of bytes that were used in a given JVM memory pool. |
| logstash_stats_jvm_mem_pool_peak_used_bytes | gauge | Peak used bytes of a given JVM memory pool. |
| logstash_stats_jvm_mem_pool_used_bytes | gauge | Currently used bytes of a given JVM memory pool. |
| logstash_stats_jvm_threads_count | gauge | Number of live threads including both daemon and non-daemon threads. |
| logstash_stats_jvm_threads_peak_count | gauge | Peak live thread count since the Java virtual machine started or peak was reset. |
| logstash_stats_jvm_uptime_millis | gauge | Uptime of the JVM in milliseconds. |
| logstash_stats_pipeline_dead_letter_queue_dropped_events | counter | Number of events dropped by the dead letter queue. |
| logstash_stats_pipeline_dead_letter_queue_expired_events | counter | Number of events expired in the dead letter queue. |
| logstash_stats_pipeline_dead_letter_queue_max_size_in_bytes | counter | Maximum size of the dead letter queue in bytes. |
| logstash_stats_pipeline_dead_letter_queue_size_in_bytes | counter | Current size of the dead letter queue in bytes. |
| logstash_stats_pipeline_events_duration | counter | Time needed to process event. |
| logstash_stats_pipeline_events_filtered | counter | Number of events that have been filtered out by this pipeline. |
| logstash_stats_pipeline_events_in | counter | Number of events that have been inputted into this pipeline. |
| logstash_stats_pipeline_events_out | counter | Number of events that have been processed by this pipeline. |
| logstash_stats_pipeline_events_queue_push_duration | counter | Time needed to push event to queue. |
| logstash_stats_pipeline_flow_filter_current | gauge | Current number of events in the filter queue. |
| logstash_stats_pipeline_flow_filter_lifetime | counter | Lifetime number of events in the filter queue. |
| logstash_stats_pipeline_flow_input_current | gauge | Current number of events in the input queue. |
| logstash_stats_pipeline_flow_input_lifetime | counter | Lifetime number of events in the input queue. |
| logstash_stats_pipeline_flow_output_current | gauge | Current number of events in the output queue. |
| logstash_stats_pipeline_flow_output_lifetime | counter | Lifetime number of events in the output queue. |
| logstash_stats_pipeline_flow_queue_backpressure_current | gauge | Current number of events in the backpressure queue. |
| logstash_stats_pipeline_flow_queue_backpressure_lifetime | counter | Lifetime number of events in the backpressure queue. |
| logstash_stats_pipeline_flow_worker_concurrency_current | gauge | Current number of workers. |
| logstash_stats_pipeline_flow_worker_concurrency_lifetime | counter | Lifetime number of workers. |
| logstash_stats_pipeline_plugin_bulk_requests_errors | counter | Number of bulk request errors. |
| logstash_stats_pipeline_plugin_bulk_requests_responses | counter | Bulk request HTTP response counts by code. |
| logstash_stats_pipeline_plugin_documents_non_retryable_failures | counter | Number of output events with non-retryable failures. |
| logstash_stats_pipeline_plugin_documents_successes | counter | Number of successful bulk requests. |
| logstash_stats_pipeline_plugin_events_duration | counter | Time spent processing events in this plugin. |
| logstash_stats_pipeline_plugin_events_in | counter | Number of events received by this pipeline. |
| logstash_stats_pipeline_plugin_events_out | counter | Number of events output by this pipeline. |
| logstash_stats_pipeline_plugin_events_queue_push_duration | counter | Time spent pushing events into the input queue. |
| logstash_stats_pipeline_queue_events_count | counter | Number of events in the queue. |
| logstash_stats_pipeline_queue_events_queue_size | counter | Number of events that the queue can accommodate. |
| logstash_stats_pipeline_queue_max_size_in_bytes | counter | Maximum size of given queue in bytes. |
| logstash_stats_pipeline_reloads_failures | counter | Number of failed pipeline reloads. |
| logstash_stats_pipeline_reloads_successes | counter | Number of successful pipeline reloads. |
| logstash_stats_pipeline_reloads_last_failure_timestamp | gauge | Timestamp of last failed pipeline reload. |
| logstash_stats_pipeline_reloads_last_success_timestamp | gauge | Timestamp of last successful pipeline reload. |
| logstash_stats_pipeline_up | gauge | Whether the pipeline is up or not. |
| logstash_stats_process_cpu_load_average_1m | gauge | Total 1m system load average. |
| logstash_stats_process_cpu_load_average_5m | gauge | Total 5m system load average. |
| logstash_stats_process_cpu_load_average_15m | gauge | Total 15m system load average. |
| logstash_stats_process_cpu_percent | gauge | CPU usage of the process. |
| logstash_stats_process_cpu_total_millis | gauge | Total CPU time used by the process. |
| logstash_stats_process_max_file_descriptors | gauge | Limit of open file descriptors. |
| logstash_stats_process_mem_total_virtual | gauge | Total virtual memory used by the process. |
| logstash_stats_process_open_file_descriptors | gauge | Number of currently open file descriptors. |
| logstash_stats_queue_events_count | gauge | Number of events in the queue. |
| logstash_stats_reload_failures | gauge | Number of failed reloads. |
| logstash_stats_reload_successes | gauge | Number of successful reloads. |
<!-- METRICS_TABLE_END -->

# Logstash-exporter
[![codecov](https://codecov.io/gh/kuskoman/logstash-exporter/branch/master/graph/badge.svg?token=ISIVB93OC6)](https://codecov.io/gh/kuskoman/logstash-exporter)
Export metrics from Logstash to Prometheus.
The project was created as a rewrite of the existing awesome application
[logstash_exporter](https://github.com/BonnierNews/logstash_exporter),
which was also written in Go but had not been maintained for a long time.
A lot of code was reused from the original project.
## Usage
### Running the app
The application can be run in two ways:
- using the binary executable
- using the Docker image
Additionally, a [Helm chart](./chart/) is provided for easy deployment to Kubernetes.
#### Binary Executable
The binary executable can be downloaded from the [releases page](https://github.com/kuskoman/logstash-exporter/releases).
The Linux binary is available under `https://github.com/kuskoman/logstash-exporter/releases/download/v${VERSION}/logstash-exporter-linux`.
The binary can be run without additional arguments, as the configuration is loaded from the `.env` file and environment variables.
Each binary is accompanied by a SHA256 checksum file, which can be used to verify its integrity:
VERSION="test-tag" \
OS="linux" \
wget "https://github.com/kuskoman/logstash-exporter/releases/download/${VERSION}/logstash-exporter-${OS}" && \
wget "https://github.com/kuskoman/logstash-exporter/releases/download/${VERSION}/logstash-exporter-${OS}.sha256" && \
sha256sum -c logstash-exporter-${OS}.sha256
It is recommended to use the binary executable in combination with a [systemd](https://systemd.io/) service.
The application should not require any root privileges, so it is recommended to run it as a non-root user.
##### Unstable (master) version
The unstable version of the application can be downloaded from the
[GitHub Actions](https://github.com/kuskoman/logstash-exporter/actions?query=branch%3Amaster+workflow%3A%22Go+application+CI%2FCD%22).
The latest successful build can be found under the `Go application CI/CD` workflow (already selected in the link).
To download the binary, go to the linked page, click on the latest successful build, and download the binary
from the `Artifacts` section at the bottom of the page.
You can download an artifact from any workflow run, not only from the master branch. To do that, go to
[GitHub Actions without the master filter](https://github.com/kuskoman/logstash-exporter/actions?query=workflow%3A%22Go+application+CI%2FCD%22),
select the workflow run you want to download the artifact from, and download the binary from the `Artifacts` section.
#### Docker Image
The Docker image is available under `kuskoman/logstash-exporter:<tag>`.
You can pull the image using the following command:
docker pull kuskoman/logstash-exporter:<tag>
You can browse tags on the [Docker Hub](https://hub.docker.com/r/kuskoman/logstash-exporter/tags).
The Docker image can be run using the following command:
docker run -d \
-p 9198:9198 \
-e LOGSTASH_URL=http://logstash:9600 \
kuskoman/logstash-exporter:<tag>
##### Unstable (master) image
The unstable version of the Docker image can be downloaded from the
[GitHub Container Registry](https://github.com/users/kuskoman/packages/container/package/logstash-exporter).
To pull the image from the command line, simply use:
docker pull ghcr.io/kuskoman/logstash-exporter:master
#### Helm Chart
The Helm chart has its own [README](./chart/README.md).
### Endpoints
- `/metrics`: Exposes metrics in Prometheus format.
- `/health`: Returns 200 if the app is running properly.
### Configuration
The application can be configured using the following environment variables, which are also loaded from `.env` file:
| Variable Name | Description | Default Value |
| -------------- | --------------------------------------------- | ----------------------- |
| `LOGSTASH_URL` | URL to Logstash API | `http://localhost:9600` |
| `PORT` | Port on which the application will be exposed | `9198` |
| `HOST` | Host on which the application will be exposed | empty string |
All configuration variables can be checked in the [config directory](./config/).
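
For illustration only, reading such variables with defaults typically looks like the sketch below. `getEnvWithDefault` is a hypothetical helper; the project's actual configuration code lives in the config directory.

```go
package main

import (
    "fmt"
    "os"
)

// getEnvWithDefault is a hypothetical helper illustrating the pattern;
// it is not the exporter's actual configuration code.
func getEnvWithDefault(key, defaultValue string) string {
    if value, ok := os.LookupEnv(key); ok {
        return value
    }
    return defaultValue
}

func main() {
    logstashURL := getEnvWithDefault("LOGSTASH_URL", "http://localhost:9600")
    port := getEnvWithDefault("PORT", "9198")
    host := getEnvWithDefault("HOST", "") // empty string binds to all interfaces

    fmt.Printf("scraping %s, listening on %s:%s\n", logstashURL, host, port)
}
```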

"200": 87
}
}
}
]
},
"reloads": {
"last_failure_timestamp": "2023-04-20T20:00:32.437218256Z",
"successes": 3,
"failures": 1,
"last_success_timestamp": "2023-04-20T22:30:32.437218256Z",
"last_error": {
"message": "No configuration found in the configured sources.",
"backtrace": [
"org/logstash/execution/AbstractPipelineExt.java:151:in `reload_pipeline'",
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:181:in `block in reload_pipeline'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"
]
}
},
"queue": {
"type": "memory",
"events_count": 0,
"queue_size_in_bytes": 0,
"max_queue_size_in_bytes": 0
},
"dead_letter_queue": {
"max_queue_size_in_bytes": 1073741824,
"last_error": "no errors",
"queue_size_in_bytes": 1,
"dropped_events": 0,
"expired_events": 0,
"storage_policy": "drop_newer"
},
"hash": "a73729cc9c29203931db21553c5edba063820a7e40d16cb5053be75cc3811a17",
"ephemeral_id": "a5c63d09-1ba6-4d67-90a5-075f468a7ab0"
},
".monitoring-logstash": {
"events": {
"out": 0,
"filtered": 0,
"in": 0,
"duration_in_millis": 0,
"queue_push_duration_in_millis": 0
},
"flow": {
"output_throughput": {
"current": 0.0,
"lifetime": 0.0
},
"worker_concurrency": {
"current": 0.0,
"lifetime": 0.0
},
"input_throughput": {
"current": 0.0,
"lifetime": 0.0
},
"filter_throughput": {
"current": 0.0,
"lifetime": 0.0
},
"queue_backpressure": {
"current": 0.0,
"lifetime": 0.0
}
},
"plugins": {
"inputs": [],
"codecs": [],
"filters": [],
"outputs": []
},
"reloads": {
"last_failure_timestamp": null,
"successes": 0,
"failures": 0,
"last_success_timestamp": null,
"last_error": null
},
"queue": null
}
},
"reloads": {
"successes": 0,
"failures": 0
},
"os": {
"cgroup": {
"cpu": {
"cfs_period_micros": 100000,
"cfs_quota_micros": -1,
"stat": {
"time_throttled_nanos": 0,
"number_of_times_throttled": 0,
"number_of_elapsed_periods": 0
},
"control_group": "/"
},
"cpuacct": {
"usage_nanos": 161531487900,
"control_group": "/"
}
}
},
"queue": {
"events_count": 0
}
}

    var nodestats responses.NodeStatsResponse
    err = json.Unmarshal(b, &nodestats)
    if err != nil {
        return nil, err
    }
    return &nodestats, nil
}

func (m *mockClient) GetNodeInfo(ctx context.Context) (*responses.NodeInfoResponse, error) {
    return nil, nil
}

// errorMockClient simulates a Logstash API that cannot be reached.
type errorMockClient struct{}

func (m *errorMockClient) GetNodeInfo(ctx context.Context) (*responses.NodeInfoResponse, error) {
    return nil, nil
}

func (m *errorMockClient) GetNodeStats(ctx context.Context) (*responses.NodeStatsResponse, error) {
    return nil, errors.New("could not connect to instance")
}

func TestCollectNotNil(t *testing.T) {
    collector := NewNodestatsCollector(&mockClient{})
    ch := make(chan prometheus.Metric)
    ctx := context.Background()

    // Collect in a separate goroutine and close the channel when done,
    // so the range loop below terminates.
    go func() {
        err := collector.Collect(ctx, ch)
        if err != nil {
            t.Errorf("Expected no error, got %v", err)
        }
        close(ch)
    }()

    expectedBaseMetrics := []string{
        "logstash_stats_jvm_mem_heap_committed_bytes",
        "logstash_stats_jvm_mem_heap_max_bytes",
        "logstash_stats_jvm_mem_heap_used_bytes",
        "logstash_stats_jvm_mem_heap_used_percent",
        "logstash_stats_jvm_mem_non_heap_committed_bytes",
        "logstash_stats_jvm_threads_count",
        "logstash_stats_jvm_threads_peak_count",
        "logstash_stats_jvm_uptime_millis",
        "logstash_stats_pipeline_up",
        "logstash_stats_pipeline_events_duration",
        "logstash_stats_pipeline_events_filtered",
        "logstash_stats_pipeline_events_in",
        "logstash_stats_pipeline_events_out",
        "logstash_stats_pipeline_events_queue_push_duration",
        "logstash_stats_pipeline_queue_events_count",
        "logstash_stats_pipeline_queue_events_queue_size",
        "logstash_stats_pipeline_queue_max_size_in_bytes",
        "logstash_stats_pipeline_reloads_failures",
        "logstash_stats_pipeline_reloads_successes",
        "logstash_stats_pipeline_reloads_last_success_timestamp",
        "logstash_stats_pipeline_reloads_last_failure_timestamp",
        "logstash_stats_pipeline_plugin_events_in",
        "logstash_stats_pipeline_plugin_events_out",
        "logstash_stats_pipeline_plugin_events_duration",
        "logstash_stats_pipeline_plugin_events_queue_push_duration",
        "logstash_stats_pipeline_plugin_documents_successes",
        "logstash_stats_pipeline_plugin_documents_non_retryable_failures",
        "logstash_stats_pipeline_plugin_bulk_requests_errors",
        "logstash_stats_pipeline_plugin_bulk_requests_responses",
        "logstash_stats_process_cpu_percent",
        "logstash_stats_process_cpu_total_millis",
        "logstash_stats_process_cpu_load_average_1m",
        "logstash_stats_process_cpu_load_average_5m",
        "logstash_stats_process_cpu_load_average_15m",
        "logstash_stats_process_max_file_descriptors",
        "logstash_stats_process_mem_total_virtual",
        "logstash_stats_process_open_file_descriptors",
        "logstash_stats_queue_events_count",
        "logstash_stats_reload_failures",
        "logstash_stats_reload_successes",
        "logstash_stats_jvm_mem_pool_peak_used_bytes",
        "logstash_stats_jvm_mem_pool_used_bytes",
        "logstash_stats_jvm_mem_pool_peak_max_bytes",
        "logstash_stats_jvm_mem_pool_max_bytes",
        "logstash_stats_jvm_mem_pool_committed_bytes",
    }

    // Drain the channel, recording the fully-qualified name of every
    // emitted metric.
    var foundMetrics []string
    for metric := range ch {
        if metric == nil {
            t.Error("expected metric not to be nil")
            continue
        }
        foundMetricDesc := metric.Desc().String()
        foundMetricFqName, err := prometheus_helper.ExtractFqName(foundMetricDesc)
        if err != nil {
            t.Errorf("failed to extract fqName from metric %s", foundMetricDesc)
        }
        foundMetrics = append(foundMetrics, foundMetricFqName)
    }

    // Every expected metric must have been emitted at least once.
    for _, expectedMetric := range expectedBaseMetrics {
        found := false
        for _, foundMetric := range foundMetrics {
            if foundMetric == expectedMetric {
                found = true
                break
            }
        }
        if !found {
            t.Errorf("Expected metric %s to be found", expectedMetric)
        }
    }
}

func TestCollectError(t *testing.T) {
    collector := NewNodestatsCollector(&errorMockClient{})
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)

logstash_exporter_build_info
logstash_info_build
logstash_info_node
logstash_info_pipeline_batch_delay
logstash_info_pipeline_batch_size
logstash_info_pipeline_workers
logstash_info_status
logstash_info_up
logstash_stats_jvm_mem_heap_committed_bytes
logstash_stats_jvm_mem_heap_max_bytes
logstash_stats_jvm_mem_heap_used_bytes
logstash_stats_jvm_mem_heap_used_percent
logstash_stats_jvm_mem_non_heap_committed_bytes
logstash_stats_jvm_threads_count
logstash_stats_jvm_threads_peak_count
logstash_stats_jvm_uptime_millis
logstash_stats_pipeline_up
logstash_stats_pipeline_events_duration
logstash_stats_pipeline_events_filtered
logstash_stats_pipeline_events_in
logstash_stats_pipeline_events_out
logstash_stats_pipeline_events_queue_push_duration
logstash_stats_pipeline_queue_events_count
logstash_stats_pipeline_queue_events_queue_size
logstash_stats_pipeline_queue_max_size_in_bytes
logstash_stats_pipeline_reloads_failures
logstash_stats_pipeline_reloads_successes
logstash_stats_pipeline_reloads_last_success_timestamp
logstash_stats_pipeline_reloads_last_failure_timestamp
logstash_stats_process_cpu_percent
logstash_stats_process_cpu_total_millis
logstash_stats_process_cpu_load_average_1m
logstash_stats_process_cpu_load_average_5m
logstash_stats_process_cpu_load_average_15m
logstash_stats_process_max_file_descriptors
logstash_stats_process_mem_total_virtual
logstash_stats_process_open_file_descriptors
logstash_stats_queue_events_count
logstash_stats_reload_failures
logstash_stats_reload_successes
logstash_stats_jvm_mem_pool_committed_bytes
logstash_stats_jvm_mem_pool_max_bytes
logstash_stats_jvm_mem_pool_peak_max_bytes
logstash_stats_jvm_mem_pool_peak_used_bytes
logstash_stats_jvm_mem_pool_used_bytes
logstash_stats_pipeline_plugin_events_duration
logstash_stats_pipeline_plugin_events_in
logstash_stats_pipeline_plugin_events_out
logstash_stats_pipeline_plugin_events_queue_push_duration
logstash_stats_pipeline_plugin_bulk_requests_errors
logstash_stats_pipeline_plugin_bulk_requests_responses
logstash_stats_pipeline_plugin_documents_non_retryable_failures
logstash_stats_pipeline_plugin_documents_successes
logstash_stats_pipeline_flow_filter_current
logstash_stats_pipeline_flow_filter_lifetime
logstash_stats_pipeline_flow_input_current
logstash_stats_pipeline_flow_input_lifetime
logstash_stats_pipeline_flow_output_current
logstash_stats_pipeline_flow_output_lifetime
logstash_stats_pipeline_flow_queue_backpressure_current
logstash_stats_pipeline_flow_queue_backpressure_lifetime
logstash_stats_pipeline_flow_worker_concurrency_current
logstash_stats_pipeline_flow_worker_concurrency_lifetime
logstash_stats_pipeline_dead_letter_queue_dropped_events
logstash_stats_pipeline_dead_letter_queue_expired_events
logstash_stats_pipeline_dead_letter_queue_max_size_in_bytes
logstash_stats_pipeline_dead_letter_queue_size_in_bytes
logstash_stats_events_duration_millis
logstash_stats_events_filtered
logstash_stats_events_in
logstash_stats_events_out
logstash_stats_events_queue_push_duration_millis
logstash_stats_flow_filter_current
logstash_stats_flow_filter_lifetime
logstash_stats_flow_input_current
logstash_stats_flow_input_lifetime
logstash_stats_flow_output_current
logstash_stats_flow_output_lifetime
logstash_stats_flow_queue_backpressure_current
logstash_stats_flow_queue_backpressure_lifetime
logstash_stats_flow_worker_concurrency_current
logstash_stats_flow_worker_concurrency_lifetime


Step 2: 🧐 Snippet Analysis

From looking through the relevant snippets, I decided to make the following modifications:

| File Path | Proposed Changes |
| --------- | ---------------- |
| docker-compose.yml | Modify this file to include multiple services, each running a different version of Logstash. This will allow us to test the compatibility of the metrics with different versions simultaneously. |
| Makefile | Add a new command that runs the script for generating the compatibility table. This will allow users to easily execute the script. |
| scripts/generate_compatibility_table.go | Create this Go script to iterate over the different versions of Logstash set up in the development environment, check the compatibility of the metrics, and generate a markdown file with the compatibility table. |
| COMPATIBILITY.md | This file will be generated by the script and will contain the compatibility table for the metrics across different versions of Logstash. |

Step 3: 📝 Planning

I have created a plan for writing the pull request. I am now working on my plan and coding the required changes to address this issue. Here is the planned pull request:

Automate generating compatibility table for different logstash version
sweep/automate-compatibility-table

Description

This PR automates the process of generating a compatibility table for different versions of Logstash. It includes the following changes:

  • Modified the docker-compose.yml file to include multiple services, each running a different version of Logstash. This allows for testing the compatibility of metrics with different Logstash versions simultaneously.
  • Added a new Go script, scripts/generate_compatibility_table.go, that iterates over the different Logstash versions in the development environment, checks the compatibility of metrics, and generates a markdown file with the compatibility table.
  • Created a new markdown file, COMPATIBILITY.md, which is generated by the script and contains the compatibility table for the metrics across different versions of Logstash.
  • Updated the Makefile to include a new command that runs the script for generating the compatibility table.

Summary of Changes

  • Modified docker-compose.yml to include multiple Logstash versions for testing.
  • Added scripts/generate_compatibility_table.go to automate the process of generating the compatibility table.
  • Created COMPATIBILITY.md to store the generated compatibility table.
  • Updated Makefile to include a new command for running the compatibility table generation script.

Step 4: ⌨️ Coding

| File | Instructions | Progress |
| ---- | ------------ | -------- |
| docker-compose.yml | Modify this file to include multiple services, each running a different version of Logstash. This will allow us to test the compatibility of the metrics with different versions simultaneously. | ✅ Commit 0674d4e |
| Makefile | Add a new command that runs the script for generating the compatibility table. This will allow users to easily execute the script. | ✅ Commit 0674d4e |
| scripts/generate_compatibility_table.go | Create this Go script to iterate over the different versions of Logstash set up in the development environment, check the compatibility of the metrics, and generate a markdown file with the compatibility table. | ✅ Commit d232491 |
| COMPATIBILITY.md | This file will be generated by the script and will contain the compatibility table for the metrics across different versions of Logstash. | ✅ Commit c809706 |

Step 5: 🔁 Code Review

Here are my self-reviews of my changes at sweep/automate-compatibility-table.

Here is the 1st review

Great work on the pull request! There's just one area that needs a bit of attention:

In the file scripts/generate_compatibility_table.go, the script assumes that all metrics are compatible if the HTTP status code is OK. This might not be the case in a real-world scenario.

I suggest revising the logic in lines 37-45 to accurately check for metric compatibility. You might need to parse the response body and check the availability of each metric.

Keep up the good work!
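
For illustration, a check along the lines the review suggests could look like this minimal sketch, which inspects the text-format response body rather than only the status code. This is an assumption-laden example, not necessarily what the PR actually implemented.

```go
package main

import (
    "bufio"
    "fmt"
    "io"
    "strings"
)

// metricAvailable reports whether a metric with the given name appears in a
// Prometheus text-format payload. An HTTP 200 alone is not enough: the
// exporter may be up while a particular metric is absent on an older
// Logstash version, so the body itself has to be inspected.
func metricAvailable(body io.Reader, name string) (bool, error) {
    scanner := bufio.NewScanner(body)
    for scanner.Scan() {
        line := strings.TrimSpace(scanner.Text())
        if line == "" || strings.HasPrefix(line, "#") {
            continue
        }
        if strings.HasPrefix(line, name) {
            // Require an exact name match, not just a shared prefix
            // (e.g. logstash_stats_events_in vs logstash_stats_events_in_total).
            rest := line[len(name):]
            if rest == "" || rest[0] == '{' || rest[0] == ' ' {
                return true, nil
            }
        }
    }
    return false, scanner.Err()
}

func main() {
    sample := "# TYPE logstash_info_up gauge\nlogstash_info_up 1\n"
    ok, _ := metricAvailable(strings.NewReader(sample), "logstash_info_up")
    fmt.Println("available:", ok) // prints: available: true
}
```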

I finished incorporating these changes.


To recreate the pull request, leave a comment prefixed with "sweep:" or edit the issue.

@kuskoman (Owner, Author)

Removing sweep labels, I will try to address this issue myself

@kuskoman kuskoman removed the sweep label Aug 12, 2023
@kuskoman kuskoman changed the title from "Sweep: Automate generating compatibility table for different logstash version" to "Automate generating compatibility table for different logstash version" Aug 12, 2023
@kuskoman (Owner, Author)

Closing it for now, as I don't plan on implementing it in the nearest future (see #161 for more info).
Feel free to reopen it if you plan to actually work on this issue

@kuskoman (Owner, Author)

Duplicate of #88

@kuskoman kuskoman marked this as a duplicate of #88 Aug 12, 2023
@kuskoman kuskoman added the duplicate This issue or pull request already exists label Aug 12, 2023