High CPU overhead when polling #560
Comments
Hello, you appear to be using poll with a small duration in synchronous code; isn't high CPU usage expected in that case? I think a better solution would be to either increase the polling duration or use async (example: https://github.com/crossterm-rs/crossterm/blob/master/examples/event-stream-tokio.rs).
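For context, a minimal sketch of the synchronous approach with a larger poll timeout (durations and the quit key are illustrative, not taken from xplr):

```rust
use std::time::Duration;
use crossterm::event::{poll, read, Event, KeyCode};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    loop {
        // Block inside poll() for up to 250 ms instead of a few milliseconds;
        // the thread sleeps while waiting, so CPU usage stays low when idle.
        if poll(Duration::from_millis(250))? {
            match read()? {
                Event::Key(key) if key.code == KeyCode::Char('q') => break,
                event => {
                    // handle other events here
                    let _ = event;
                }
            }
        } else {
            // Timeout expired with no input: do periodic work (tick, redraw, ...).
        }
    }
    Ok(())
}
```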
Also, since you're reading events in a different thread, what is the purpose of polling? Doesn't simply blocking on read in that thread work? Edit: I see you need it for the pausing feature; I guess async is the way to go.
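A sketch of the "blocking read in a dedicated thread" idea: the reader thread blocks inside read() (no busy polling) and forwards events over a channel. The function name is illustrative, not from the xplr codebase:

```rust
use std::sync::mpsc;
use std::thread;
use crossterm::event::{read, Event};

fn spawn_input_thread() -> mpsc::Receiver<Event> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || loop {
        // read() blocks until an event arrives, so this loop uses no CPU
        // while the user is idle.
        match read() {
            Ok(event) => {
                if tx.send(event).is_err() {
                    break; // main thread dropped the receiver
                }
            }
            Err(_) => break,
        }
    });
    rx
}
```

The main loop can then receive events with `rx.recv()` or `rx.try_recv()` as it sees fit.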
The example uses the tokio runtime. I don't want to complicate things by introducing an async runtime. Is there a way I can keep the asyncness isolated to input reading only?
I think I can just avoid declaring every function as async.
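One possible way to keep async isolated to input reading (a sketch, assuming crossterm's "event-stream" feature plus the tokio and futures crates; names are illustrative): run a small current-thread runtime inside a dedicated thread that drives EventStream and forwards events over a plain std channel, so the rest of the program stays synchronous.

```rust
use std::sync::mpsc;
use std::thread;
use crossterm::event::{Event, EventStream};
use futures::StreamExt;

fn spawn_async_input_thread() -> mpsc::Receiver<Event> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // The runtime lives only in this thread; the rest of the app never
        // sees async code.
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("failed to build input runtime");
        rt.block_on(async move {
            let mut events = EventStream::new();
            while let Some(Ok(event)) = events.next().await {
                if tx.send(event).is_err() {
                    break; // main thread is gone
                }
            }
        });
    });
    rx
}
```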
I think for async you will need to rewrite the codebase. I faced this before, and the trick that worked for me was making the main thread control when crossterm reads (the crossterm thread parks itself after each read, and the main thread unparks it when needed), but it's definitely a hack. I guess we either need a solution to #397, or using an async codebase is the right answer in these cases.
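A rough sketch of that park/unpark hack, under the same assumptions as above (illustrative names, simplified error handling): the reader thread parks itself after every read, and the main thread unparks it only when it actually wants the next event (e.g. while not paused).

```rust
use std::sync::mpsc;
use std::thread::{self, Thread};
use crossterm::event::{read, Event};

fn spawn_gated_input_thread() -> (mpsc::Receiver<Event>, Thread) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || loop {
        match read() {
            Ok(event) => {
                if tx.send(event).is_err() {
                    break;
                }
            }
            Err(_) => break,
        }
        // Wait until the main thread explicitly allows the next blocking read.
        thread::park();
    });
    (rx, handle.thread().clone())
}

// Main-thread side (illustrative):
// let (events, reader) = spawn_gated_input_thread();
// ...
// reader.unpark(); // permit one more blocking read when input is wanted again
```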
Got it. Closing this.
For future readers, another, simpler approach is to increase the sleep time when a key is not being held (example). Of course, this needs to be tested to see how it impacts the responsiveness of the program.
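A sketch of that adaptive-timeout idea (the exact durations and quit key are illustrative): poll with a short timeout right after input so held keys stay responsive, and fall back to a long timeout once the user goes idle.

```rust
use std::time::Duration;
use crossterm::event::{poll, read, Event, KeyCode};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let fast = Duration::from_millis(10);  // while keys are arriving
    let slow = Duration::from_millis(500); // while idle
    let mut timeout = slow;

    loop {
        if poll(timeout)? {
            let event = read()?;
            timeout = fast; // input just arrived; stay snappy for key repeats
            if let Event::Key(key) = event {
                if key.code == KeyCode::Char('q') {
                    break;
                }
                // handle other keys here
            }
        } else {
            timeout = slow; // a full timeout passed with no input; back off
        }
    }
    Ok(())
}
```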
Describe the bug
Not sure if this is the real cause, or if I'm doing something wrong, but any hint on how to reduce this polling overhead would be appreciated.
To Reproduce
Steps to reproduce the behavior:
git clone https://github.com/sayanarijit/xplr
cargo flamegraph --bin xplr
Expected behavior
The CPU usage should not be too high, yet it gets pretty high. Increasing the timeout makes the app less responsive. Not sure how termion does it, but I haven't faced this CPU issue with termion, even with async reads.
OS
Manjaro latest
Terminal/Console
Alacritty