search_threads_rebuild_limit sane default #1518

Open
ryneeverett opened this issue May 28, 2020 · 8 comments · May be fixed by #1519

Comments

@ryneeverett
Contributor

Describe the bug
The default search_threads_rebuild_limit introduced in #1499 is infinite which "can be very slow in searches that yield thousands of results".

I don't much care what the limit is, but infinite can't be a reasonable default. Maybe 1000?
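
In the meantime, the limit can be set explicitly in alot's config file. A minimal excerpt (the exact value is a judgment call; 1000 is just the number floated above):

```ini
# ~/.config/alot/config
# Cap how many threads a search buffer rebuilds in one go;
# the default introduced in #1499 is effectively unlimited.
search_threads_rebuild_limit = 1000
```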

Software Versions

  • Python version: 3.7.6
  • Notmuch version: [if relevant]
  • Alot version: master

To Reproduce
Steps to reproduce the behaviour:

  1. open search buffer
  2. open a thread
  3. untag the thread
  4. close the buffer
  5. refresh the search buffer
@pazz
Owner

pazz commented May 28, 2020

You're probably right, but I wouldn't know what default to pick. This very much depends on the setup and user preferences. For me, 1000 and infinite are the same, as I have virtually no threads with more than 100 messages.

@ryneeverett
Contributor Author

This doesn't iterate over messages in a thread; it iterates over threads in a search buffer. While not an ideal situation, it's not uncommon to have thousands of emails in an inbox, and this causes very long UI blocking.
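
To make the cost concrete, the rebuild is conceptually a loop of this shape (an illustrative sketch only, not alot's actual code; all names here are made up):

```python
def rebuild_threadlines(thread_iter, limit=None):
    """Build one display line per thread in the search result.

    With limit=None (the current default) this walks *every*
    thread the query matched, so a search yielding thousands
    of hits blocks the UI for the whole walk.
    """
    threadlines = []
    for count, thread in enumerate(thread_iter):
        if limit is not None and count >= limit:
            break
        # render_threadline is a hypothetical stand-in for
        # building the widget shown in the search buffer
        threadlines.append(render_threadline(thread))
    return threadlines
```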

@ryneeverett
Contributor Author

Another thought is that a subtle UI notification could let the user know why the UI is blocking and point them toward this setting. Folks with small amounts of email and/or fast enough processors might not even notice the notification.

@ryneeverett
Contributor Author

(I personally had to git bisect to figure out what the issue was.)

@pazz
Owner

pazz commented May 28, 2020 via email

ryneeverett added a commit to ryneeverett/alot that referenced this issue May 30, 2020
Resolve pazz#1518. The problem that search_threads_rebuild_limit tries to
solve is UI responsiveness, so the relevant thing to limit is time, not
loops.

Next steps if this approach is accepted:
- Must: Deprecate search_threads_rebuild_limit -- Has a process been
developed for option deprecation/removal?
- Maybe: Add a configuration option for the timeout. Personally I don't
think this is necessary because unlike the previous approach this is not
relative to the user's processor and inbox size. It's always something
that can be added if a need arises.
- Maybe: Execute consume_pipe_until in a separate thread, sleep for the
timeout, and then kill the thread. This would eliminate the
wasteful clock checking every loop but I'm doubtful it's actually more
performant.
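
The idea in this commit, limiting by wall-clock time rather than iteration count, looks roughly like the following (a sketch under assumed names; the real consume_pipe_until in the PR may differ in signature and details):

```python
import time

def consume_pipe_until(pipe, deadline):
    """Pull items from an iterator until a wall-clock deadline.

    The clock is checked on every iteration -- the "wasteful
    clock checking" the commit message mentions as a possible
    follow-up optimization.
    """
    items = []
    for item in pipe:
        items.append(item)
        if time.monotonic() >= deadline:
            break
    return items

# e.g. let a rebuild run for at most 200 ms before yielding to the UI:
# threads = consume_pipe_until(thread_iter, time.monotonic() + 0.2)
```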
@meskio
Contributor

meskio commented Jun 1, 2020

I see the same problem here: some searches take really long to display. I think it's even more serious with a lot of encrypted email, since it tries to decrypt the mail to display the search buffer, and decrypting email is not always fast.

Setting search_threads_rebuild_limit to 100 seems to improve the situation a lot.

Thanks @ryneeverett for starting the work on that.

@meskio
Contributor

meskio commented Jun 1, 2020

I was wrong: the thing that improved the situation for me was removing the hook I use to auto-refresh every buffer I switch to.

Refreshing is still pretty slow, even with search_threads_rebuild_limit set to 1. I'm trying to figure out where this slowness comes from, but I'm amazed that notmuch search tag:unread takes 0.02s while the same search in alot right now takes 100 times that.

@meskio
Contributor

meskio commented Jun 1, 2020

I found the source of my slowness: if I set thread_authors_replace_me to False it gets fast again. It looks like doing this author replacement reads every email in the search query from disk, and when those emails are encrypted this takes a while to load. This config flag seems to have been around for a while; I'm not sure why I'm only noticing the slowness now.

Anyway, I guess my problem is not related to the topic of this issue, so sorry for the noise here. I opened a separate issue for it: #1520
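
For anyone landing here with the same symptom, the workaround described above is a one-line config change (using the option name from the comment; disabling it trades away the "me" replacement in author lists):

```ini
# ~/.config/alot/config
# Disabling the author replacement avoids reading (and
# potentially decrypting) every matched message from disk
# just to render the search buffer; see #1520.
thread_authors_replace_me = False
```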

ryneeverett added a commit to ryneeverett/alot that referenced this issue Dec 6, 2021

ryneeverett added a commit to ryneeverett/alot that referenced this issue Jan 26, 2022