
[ML] Explain Log Rate Spikes: Progress bar stuck at 0% then jumps to 80% #138972

Closed
Tracked by #138117
walterra opened this issue Aug 17, 2022 · 5 comments
Labels
bug (Fixes for quality problems that affect the customer experience), Feature:ML/AIOps (ML AIOps features: Change Point Detection, Log Pattern Analysis, Log Rate Analysis), :ml, v8.5.0

Comments

walterra (Contributor) commented Aug 17, 2022

Kibana version: 8.4.0 BC

Describe the bug:

Especially on Cloud, once a user starts the analysis, the progress bar is stuck at 0% for a while and then jumps straight ahead (e.g. to 80%). It's not obvious that the analysis is running, and there is no "loading state message" after the analysis has been started.

Expected behavior:

The progress bar should show a "loading state message", and progress updates should start arriving earlier.

walterra added the bug, :ml, and v8.5.0 labels Aug 17, 2022
elasticmachine (Contributor) commented

Pinging @elastic/ml-ui (:ml)

walterra added the Feature:ML/AIOps label Aug 17, 2022
walterra (Contributor, Author) commented Aug 29, 2022

This is most likely related to the proxy used in Cloud. It looks like the proxy buffers the first 4KB and only then starts streaming; it also replaces the Transfer-Encoding: chunked header with a Content-Length: ... header for responses smaller than 4KB. While further investigation is ongoing, a short-term fix could be to send 4KB of dummy data first to trigger proper streaming (see the sketch below).
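A minimal sketch of that workaround, using a plain Node.js HTTP handler rather than the actual Kibana streaming endpoint (the padding size, NDJSON payload shape, and handler are assumptions for illustration only):

```ts
import * as http from 'http';

// Assumed proxy buffer size; the threshold observed on Cloud was ~4KB.
const PADDING_BYTES = 4096;

const server = http.createServer((req, res) => {
  // No Content-Length header, so Node falls back to chunked transfer encoding.
  res.writeHead(200, {
    'Content-Type': 'application/x-ndjson',
    'Cache-Control': 'no-cache',
  });

  // Workaround: fill the proxy buffer with newline padding so it starts
  // forwarding chunks immediately. NDJSON consumers can skip empty lines.
  res.write('\n'.repeat(PADDING_BYTES));

  // Then stream the actual progress updates.
  let progress = 0;
  const timer = setInterval(() => {
    progress += 10;
    res.write(JSON.stringify({ type: 'progress', value: progress }) + '\n');
    if (progress >= 100) {
      clearInterval(timer);
      res.end();
    }
  }, 500);

  req.on('close', () => clearInterval(timer));
});

server.listen(3000);
```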

walterra (Contributor, Author) commented

Experiments done in #139637 surfaced that the proxy buffers up to 4KB after every flush, not just for the first 4KB, so a workaround has to pad every flushed chunk rather than only the initial write (sketched below).
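Given that finding, padding only the first write would still leave the progress stream stalling after each subsequent flush. A hypothetical helper (the name, the 4KB constant, and the padding scheme are assumptions, not the actual Kibana implementation) that pads every chunk before writing it could look like this:

```ts
import type { Writable } from 'stream';

// Assumed proxy buffer size observed in the experiments.
const PROXY_BUFFER_BYTES = 4096;

// Pad each NDJSON line up to the proxy buffer size so every flush is
// forwarded immediately. The trailing newlines read as empty NDJSON lines,
// which the consumer can simply ignore.
function writePadded(stream: Writable, line: string): void {
  const payload = line.endsWith('\n') ? line : `${line}\n`;
  const deficit = PROXY_BUFFER_BYTES - Buffer.byteLength(payload);
  stream.write(deficit > 0 ? payload + '\n'.repeat(deficit) : payload);
}

// Example usage against any writable stream, e.g. the HTTP response above:
// writePadded(res, JSON.stringify({ type: 'progress', value: 50 }));
```

Padding every flush like this trades bandwidth for responsiveness, which is why the follow-up below tracks removing the workaround once the proxies handle smaller payloads.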

vadimkibana (Contributor) commented

walterra (Contributor, Author) commented

Closing this one; created a follow-up issue to keep track of removing the workaround once the proxies support smaller payloads: #141194.
