
[Stack Monitoring] Add I/O metrics for Elasticsearch #45870

Merged: 14 commits merged into elastic:master from add_io_metrics on Sep 19, 2019

Conversation

cachedout (Contributor)

Summary

Add I/O metrics for Elasticsearch to Stack Monitoring. This metric tracks I/O operations/sec for the system on which Elasticsearch is running.

The following metric is now displayed on node overview pages:

[Screenshot io_1: the new I/O chart on the node overview page]

On certain platforms, such as OS X, Elasticsearch does not collect I/O metrics. To account for this, a warning is displayed in the tooltip:

[Screenshot io_2: tooltip warning about unsupported platforms]
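For context, here is a minimal sketch of how a rate like this can be derived (illustrative only; the names below are hypothetical, not the PR's actual code). On supported platforms, Elasticsearch node stats report fs.io_stats.total.operations as a cumulative counter, so operations/sec comes from the delta between consecutive samples:

```ts
// Illustrative sketch only: these names are hypothetical, not Kibana's
// actual monitoring code. fs.io_stats.total.operations is a cumulative
// counter, so a rate must be derived from two consecutive samples.

interface IoSample {
  timestampMs: number;     // sample time in ms since epoch
  totalOperations: number; // cumulative read + write operations
}

// Returns I/O operations/sec between two samples, or null when the pair
// cannot yield a meaningful rate (out-of-order samples, or a counter
// reset after a node restart).
function ioOpsPerSecond(prev: IoSample, curr: IoSample): number | null {
  const deltaMs = curr.timestampMs - prev.timestampMs;
  const deltaOps = curr.totalOperations - prev.totalOperations;
  if (deltaMs <= 0 || deltaOps < 0) return null;
  return deltaOps / (deltaMs / 1000);
}
```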


cachedout (Contributor, Author)

@elasticmachine merge upstream

elasticmachine (Contributor)

💔 Build Failed

igoristic (Contributor) left a comment


Great job! Looks good 👍 Approved, so long as all the checks pass

chrisronline (Contributor)

I don't recall any precedent for this, but would it be better for users if we didn't display this chart at all when the node is running on an unsupported platform (more specifically, when the ES data does not exist)?

cachedout (Contributor, Author) commented Sep 17, 2019

@chrisronline Ultimately, yes, that would be better. I went this route for two reasons.

  1. Progress over perfection. We can get this in right now and work back on the task of selectively displaying the metric at a later date (see the sketch after this list).

  2. My strong suspicion (though we should probably back this up with telemetry data) is that the vast majority of Elasticsearch users are on a platform that does collect this data, and therefore this wouldn't be an issue.
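A minimal sketch of that selective-display idea, assuming the decision keys off whether the node stats document actually contains io_stats (the types and function below are hypothetical, not Kibana's real code):

```ts
// Illustrative sketch of the "don't render an empty chart" idea; NodeStats
// mirrors only the relevant slice of an Elasticsearch node stats document.

interface NodeStats {
  fs?: {
    io_stats?: {
      total?: { operations?: number };
    };
  };
}

// On platforms that don't collect I/O metrics (e.g. OS X), fs.io_stats is
// absent from node stats, so the chart can simply be skipped.
function shouldShowIoChart(stats: NodeStats): boolean {
  return stats.fs?.io_stats?.total?.operations !== undefined;
}
```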

chrisronline (Contributor)

Sounds good. I do think that showing a chart that we know will be empty is not a great UX, but we don't have to address it right now.

Also, this PR is missing labels.

cachedout (Contributor, Author)

> I do think that showing a chart that we know will be empty is not a great UX, but we don't have to address it right now.

@chrisronline Just so there's no confusion -- I agree with this. Thinking it through, there are a few ways we'll want to filter and display these metrics in the long run. For example, I expect we'll soon need to show different metrics based on the role a node has in a cluster, so this is best dealt with in a change that accounts for all of that. I'll create a new issue shortly that expands on my thinking.
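As a rough illustration of that role-based idea (the roles, metric ids, and helper below are hypothetical, not an existing Kibana structure):

```ts
// Hypothetical sketch: tag each metric definition with the node roles it
// applies to, then filter the metric list per node.

type NodeRole = 'master' | 'data' | 'ingest';

interface MetricDef {
  id: string;
  roles?: NodeRole[]; // undefined means "show on every node"
}

const METRICS: MetricDef[] = [
  { id: 'node_io' },                          // all roles, platform permitting
  { id: 'index_latency', roles: ['data'] },
  { id: 'pipeline_time', roles: ['ingest'] },
];

function metricsForNode(nodeRoles: NodeRole[]): MetricDef[] {
  return METRICS.filter(
    (m) => !m.roles || m.roles.some((r) => nodeRoles.includes(r))
  );
}
```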

elasticmachine (Contributor)

💔 Build Failed

cachedout changed the title from "Add io metrics" to "[Stack Monitoring] Add I/O metrics for Elasticsearch" on Sep 19, 2019
elasticmachine (Contributor)

💚 Build Succeeded

cachedout merged commit 629623a into elastic:master on Sep 19, 2019
cachedout deleted the add_io_metrics branch on September 19, 2019 at 09:57
cachedout added a commit to cachedout/kibana that referenced this pull request on Sep 30, 2019:
* I/O metrics POC

* Gather correct metric for total

* Remove io stats from advanced

* Move io stats to node overview page

* Add new io metrics

* Add new io metrics

* Add note about supported platforms

* Update snapshot

* Add warning about platforms to all metrics

* Another snapshot update

* Update type and units

* Remove errant trailing comma

* Snapshot update for new types

* Add node_io integration testing
4 participants