This repository has been archived by the owner on Dec 15, 2022. It is now read-only.

Cache readable item keystroke representations #97

Merged 1 commit into atom:master on Dec 4, 2017

Conversation

@jarle (Contributor) commented Nov 16, 2017

This PR implements caching of the "humanized" keystrokes for each item. Previously the keystroke representation would be recalculated whenever the query changed, which would cause a slight delay when typing.
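The idea can be sketched as a memoization layer in front of the keystroke-humanizing step. This is an illustrative sketch, not the PR's actual code: the `humanizeKeystroke` stand-in below is a toy conversion, and the function names are hypothetical.

```javascript
// Illustrative sketch: memoize the "humanized" keystroke string per binding,
// so re-renders triggered by query changes skip the expensive conversion.
const humanizedCache = new Map()

// Toy stand-in for the real conversion step (hypothetical logic).
function humanizeKeystroke (keystroke) {
  return keystroke
    .split('-')
    .map(key => (key.length === 1 ? key.toUpperCase() : key))
    .join('+')
}

function humanizedKeystrokeFor (keystroke) {
  let humanized = humanizedCache.get(keystroke)
  if (humanized === undefined) {
    humanized = humanizeKeystroke(keystroke) // computed once per distinct keystroke...
    humanizedCache.set(keystroke, humanized)
  }
  return humanized // ...and reused on every subsequent render
}
```

With this shape, typing a new query re-renders the visible items but never recomputes a keystroke string it has already seen.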

@t9md (Contributor) commented Nov 16, 2017

I checked the behavior, especially to see how the UX is improved.

I even merged the two solutions (this PR and #98) and made them switchable by a command so I could evaluate them many times quickly.

My impression is that this PR improves the UX, but #98 is far snappier for the initial keystroke on the initial open.
Plus, I notice a stepped focus on the initial open:

  1. Step 1: the mini-editor opens; at this point the focus-indicating blue border is not shown.
  2. Step 2: the blue focus border is shown.
    It may be difficult to see in the GIF below, but there is a time gap between step 1 and step 2.
    Dynamic max results (#98) does not have this time gap (less confusing UX).

2step-focus

And there is a somewhat strange (not snappy) feeling while rapidly typing.
Maybe this PR simply gives focus in the middle of rendering, and the rendering continues after focus is given to the mini-editor.

I hope you compare the two approaches in a realistic situation (invoking a real command immediately after the Atom window opens).

@leroix (Contributor) commented Nov 16, 2017

Wow, nice work! I love seeing these kinds of performance improvements. I think @t9md has a point. By deferring the rendering of the additional commands, we potentially interfere with keystrokes that may follow. It's easy to see in the timeline tab: expand the interaction events, and I bet you'll see rendering overlapping with key press events.

@jarle (Contributor, Author) commented Nov 16, 2017

Thanks for the tip @leroix! I have tested while looking at the interactions, and there is some overlap, but the overlap does not seem to be introduced by this PR, based on my comparisons with the master branch.

This makes sense because the time from when the palette becomes visible until all items are loaded is initially around 200 ms (around the average human reaction time), which is about as fast as I can move my hands from ctrl shift p to any other position on the keyboard 😄 All subsequent toggles clock in at around 70 ms, so I think only the initial one is a candidate for interfering with typing.

The typing experience with this PR should be on par with the master branch, but typing in #98 surpasses both because of the low number of items. There are some issues with #98, however, most notably the delay when pressing the down arrow after the initial toggle. All rendering is then done up front, exactly when the user wants to perform an action, which can feel frustrating.

@jarle (Contributor, Author) commented Nov 18, 2017

@t9md @nathansobo I have tracked the cause of the weird feeling when typing down to the generation of humanized keybindings (the blue boxes on the right in the palette). I've fixed it in this PR, and I now believe this PR is the best solution to #80.

I've made a recording going through several scenarios to show the performance. There are 700 commands registered (it starts with a full refresh to flush the cache): https://i.imgur.com/NAMx0ME.gifv

@leroix (Contributor) commented Nov 18, 2017

@jarle I checked out your branch and compared it to the current master. While your solution does cause the command palette to appear faster, it actually increases the time to fully load the list of commands by about 50–60 ms on the initial load.

Current master:
screen shot 2017-11-18 at 2 09 53 pm

This branch:
screen shot 2017-11-18 at 2 11 37 pm

This is mostly due to the rather large reflow in scrollIntoViewIfNeeded.

@jarle (Contributor, Author) commented Nov 18, 2017

@leroix The time to fully load the list increases, but I would argue that the perceived delay from the user's perspective becomes much shorter. This is because, after the initial loading of the palette, the user will be able to reach for the keys they want to press in parallel with the loading of the rest of the items. I measured how long it took for me to start typing in both master and this branch:

Master:
screenshot_20171118_235821

This branch:
screenshot_20171119_000018

Because the palette appears earlier, I start typing earlier. But I'm unable to move my hands into position fast enough to interfere with the loading of the last items. And I like to think I'm not a 🐢 with the keyboard 😄

The reflow caused by scrollIntoView is of similar length to the reflow caused by focus in the master branch, so I guess it's neither better nor worse in that regard. I think that invoking scrollIntoView (from atom-select-list) is superfluous on the initial toggle, so that is potentially a change that could be made in that component.
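A guard of the kind suggested here might look roughly like the sketch below. The function name, the `isInitialRender` flag, and the injected `scrollFn` are all hypothetical; atom-select-list's real API differs.

```javascript
// Hypothetical sketch: skip the scroll (and its reflow) on the initial
// render, when the selection is already the first visible item anyway.
function scrollSelectionIntoView (isInitialRender, scrollFn) {
  if (isInitialRender) {
    return false // nothing has moved yet; avoid the forced reflow
  }
  scrollFn() // e.g. element.scrollIntoViewIfNeeded() in the real component
  return true
}
```

This is just one way to avoid the up-front reflow; the actual fix would belong in atom-select-list itself.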

@leroix (Contributor) commented Nov 26, 2017

@jarle I think caching the key binding elements is a great addition, but I think incrementally adding elements will probably be covered by #101. Would you be willing to focus this PR on the key binding element caching?

@jarle (Contributor, Author) commented Nov 26, 2017

@leroix I agree that #101 is a really good solution to the problem, so I have changed the focus of this PR. Generating the string representation of the keystroke was the bottleneck when typing, so only that string representation is cached (not the element itself).

@jarle jarle changed the title Incrementally add items for faster launch Cache readable item keystroke representations Nov 26, 2017
@t9md (Contributor) commented Nov 27, 2017

@jarle

Let me ask:

As I understand it, your caching PR #94 avoids recreating item elements on keyboard navigation (selectNext, selectPrevious, etc.).
Does that still have some benefit after atom/atom-select-list#22 and #101 get merged?
I'm asking because I could not find a noticeable difference with and without the cache feature.

I also want to know whether this PR introduces a noticeable performance improvement.
I want to understand the real situations in which this and the #94 enhancement shine after atom/atom-select-list#22 and #101 get merged.
Once atom/atom-select-list#22 is merged, the amount of element computation is greatly reduced, so heavy computation might be acceptable if the total number of requests is small enough.

@jarle (Contributor, Author) commented Nov 27, 2017

@t9md #94 causes subsequent calls to elementForItem to always be faster, so it's a simple way of improving performance without any major drawbacks. This PR (#97) aims to improve the responsiveness when the query changes, because elementForItem will reuse the computed "humanized" keystroke which otherwise is a bottleneck.
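The two caches operate at different levels; the sketch below is illustrative only (the names and element shape are simplified stand-ins, not the packages' actual code). #94 caches per-item elements so navigation never rebuilds a seen element; #97 caches the humanized string so even a freshly built element skips the conversion.

```javascript
// Illustrative sketch of the two cache layers (hypothetical names).
const elementCache = new Map()   // #94: one cached element per item
const humanizedCache = new Map() // #97: one cached string per keystroke
let conversions = 0              // counts how often the "expensive" step runs

function humanize (keystroke) {
  if (!humanizedCache.has(keystroke)) {
    conversions++ // the expensive conversion runs at most once per keystroke
    humanizedCache.set(keystroke, keystroke.split('-').join(' '))
  }
  return humanizedCache.get(keystroke)
}

function elementForItem (item) {
  if (!elementCache.has(item.name)) {
    // Built once; keyboard navigation reuses the same object afterwards.
    elementCache.set(item.name, {name: item.name, label: humanize(item.keystroke)})
  }
  return elementCache.get(item.name)
}
```

If the element cache were ever dropped (as the discussion anticipates after #101), the string cache would still pay off, since rebuilding an element would hit `humanizedCache` instead of reconverting.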

These changes provide very noticeable improvements on the current version of the command palette, but I understand that the improvements probably won't be as obvious after #101 and atom/atom-select-list#22 are merged. With that said, I don't see any downsides in keeping the cache for now.

@leroix (Contributor) commented Dec 3, 2017

@jarle can you merge or rebase this with the latest master just to make sure the tests still pass?

@jarle (Contributor, Author) commented Dec 3, 2017

@leroix Squashed and rebased!

@leroix (Contributor) commented Dec 4, 2017

Thanks!

@leroix leroix merged commit 2b7db27 into atom:master Dec 4, 2017
@t9md t9md mentioned this pull request Feb 25, 2018