
Allow LSP to process multiple messages per poll #89284

Merged · 1 commit · Mar 8, 2024

Conversation


@mihe mihe commented Mar 8, 2024

When using GDScript from an external editor that makes use of Godot's LSP server you can currently encounter an issue where things like code completion and error squigglies lag behind quite a bit, which looks something like this:

(Video: Code_RmM2G5N7qs.mp4)

The problem seems to be linked to Godot's low-processor mode, as workarounds consist of either enabling interface/editor/update_continuously or lowering interface/editor/unfocused_low_processor_mode_sleep_usec. When using the VS Code extension you can also work around this by setting its godotTools.lsp.headless setting to true, which launches a completely separate instance of Godot for the sole purpose of acting as a headless LSP.

The reason for this seems to fall on the Godot side of things, namely that the LSP implementation is currently set to only ever process a single message per poll. This means that if you type in your text editor faster than the 100 ms interval that unfocused_low_processor_mode_sleep_usec defaults to, messages start stacking up and only get popped once every 100 ms, thereby building up more and more latency.

This PR resolves this by changing both GDScriptLanguageProtocol::LSPeer::handle_data and GDScriptLanguageProtocol::LSPeer::send_data to run their work in a while loop instead of an if statement, thereby allowing multiple messages to be processed per poll.
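In simplified form (hypothetical stand-in types and names, not the actual GDScriptLanguageProtocol code), the change amounts to the following:

```cpp
#include <cassert>
#include <queue>
#include <string>

// Hypothetical stand-in for an LSP peer's inbound message queue.
struct Peer {
	std::queue<std::string> inbox;
};

// Old behavior: at most one queued message is handled per poll, so a
// backlog drains at one message per poll interval (100 ms when unfocused).
int poll_once(Peer &p) {
	int handled = 0;
	if (!p.inbox.empty()) {
		p.inbox.pop(); // stand-in for parsing and dispatching one message
		handled++;
	}
	return handled;
}

// New behavior: the same body in a while loop drains the whole backlog
// within a single poll.
int poll_all(Peer &p) {
	int handled = 0;
	while (!p.inbox.empty()) {
		p.inbox.pop();
		handled++;
	}
	return handled;
}
```

With the old `if`, a queue of five messages takes five poll intervals to drain; with the `while`, it drains in one.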

@mihe mihe requested a review from a team as a code owner March 8, 2024 14:53
@AThousandShips AThousandShips added this to the 4.x milestone Mar 8, 2024
@mihe mihe force-pushed the lsp-multiple-messages branch from bb99160 to 9efd572 Compare March 8, 2024 15:07

mihe commented Mar 8, 2024

Might potentially fix #69914 and #87410.

@akien-mga
Member

CC @Calinou @DaelonSuzuka @rcorre

@akien-mga akien-mga left a comment

Code changes look good to me.

Only question is whether there's a risk of having too big a queue which could cause frame drops of the Godot process due to the while loop? If so, do we need some threshold to process a reasonable amount of messages per frame instead of all of them?


DaelonSuzuka commented Mar 8, 2024

This would explain a lot of the behavior I've seen. It seems possible that this is also a root cause of other Language Server issues like the "class name already defined somewhere else" bug. There could be race conditions between processing the incoming queue of messages vs files changing on disk/the internal representation of the project.

I'm grabbing the build artifact to test with VSCode and I'll report back in a couple minutes.

Slightly off topic, but I was recently told that the LSP respects the Godot Editor settings "Idle Parse Delay" and "Code Complete Delay". I tested these non-empirically and it seemed to make things snappier.

(https://old.reddit.com/r/godot/comments/1axv5xv/today_i_said_goodbye_to_coding_inside_godot_and/krsv5x1/?context=3)


mihe commented Mar 8, 2024

Only question is whether there's a risk of having too big a queue which could cause frame drops of the Godot process due to the while loop? If so, do we need some threshold to process a reasonable amount of messages per frame instead of all of them?

@akien-mga Yeah, I was debating whether to add some arbitrary message limit, but frankly I can't even guesstimate what that limit should be. It could be that 1000 messages can be processed in as "little" as a millisecond, and that drawing the line there would significantly hamper some editor somewhere.

I could perhaps add a time limit instead, but again, I'm not sure what the limit should be.

Seeing as how you're more or less guaranteed to be under the influence of unfocused_low_processor_mode_sleep_usec (and its likely default of 100 ms) when interacting with an external editor, the allotted time could be as high as dozens of milliseconds, I guess.


DaelonSuzuka commented Mar 8, 2024

After some testing I can confirm that this PR makes a big difference from the LSP client side.

The initial connection usually has a bunch of these textDocument/publishDiagnostics messages, and with the old behavior that causes every handshake to take an extra ~3 seconds before useful work can be done.

(Screenshot: a batch of queued textDocument/publishDiagnostics messages)

With this PR the handshake is resolved immediately and it's ready to serve requests as soon as it's connected.

Thanks @mihe, this is going to be a real quality of life improvement!


mihe commented Mar 8, 2024

Glad to hear it, @DaelonSuzuka.

I'll give the time limit thing a try. Making the limit an editor setting with a default of 100 ms seems better than no limit at all and should obviously be more than plenty for anything but a misbehaving LSP client.


DaelonSuzuka commented Mar 8, 2024

Just based on the traffic I've observed on the VSCode side, a simple limit of processing 25-50 messages per frame should be more than sufficient. Usual usage is a handful of messages per second. A couple of features, like our new inlay hints resolver, can sometimes process an entire document and generate maybe... 100 hover requests at once? If that batch is resolved over 2 or 3 frames that's going to be totally fine.

@mihe mihe force-pushed the lsp-multiple-messages branch from 9efd572 to d25b54b Compare March 8, 2024 17:33

mihe commented Mar 8, 2024

It wasn't that much code to have a time limit instead, so I decided to go for it.

For simplicity's sake I chose to only limit the handle_data part of it though, and not the send_data part. The send_data part is just socket work anyway, and is directly proportional to the amount of messages pushed to the queue in handle_data, so it should hopefully be Good Enough™.

EDIT: Oh, and it might be worth mentioning that the limit currently allows for at least one message per LSP client to be processed, even if it exceeds the time limit. I figured that might be a good thing.
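A minimal sketch of that scheme (hypothetical names, not the actual Godot code): checking the time budget after handling each message, rather than before, guarantees that at least one message is processed per poll even when the budget is already exhausted.

```cpp
#include <cassert>
#include <chrono>
#include <queue>
#include <string>

// Hypothetical sketch of a time-budgeted queue drain. Because the
// deadline is checked *after* a message is handled, at least one
// message always gets through per poll, even with a zero budget.
int poll_with_budget(std::queue<std::string> &inbox, std::chrono::microseconds budget) {
	using clock = std::chrono::steady_clock;
	const auto deadline = clock::now() + budget;
	int handled = 0;
	while (!inbox.empty()) {
		inbox.pop(); // stand-in for handling one LSP message
		handled++;
		if (clock::now() >= deadline) {
			break; // budget spent; the rest waits for the next poll
		}
	}
	return handled;
}
```

With a generous budget this behaves like the plain while loop and drains everything in one call; with a tiny budget it degrades gracefully toward the old one-message-per-poll behavior instead of stalling the editor frame.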

@mihe mihe force-pushed the lsp-multiple-messages branch from d25b54b to e248504 Compare March 8, 2024 17:49
@akien-mga akien-mga modified the milestones: 4.x, 4.3 Mar 8, 2024
@akien-mga akien-mga merged commit 877cd12 into godotengine:master Mar 8, 2024
16 checks passed
@akien-mga
Member

Thanks!

@DaelonSuzuka

Does this limitation exist in 3.x too? (And is the fix cherry-pickable?)

@akien-mga akien-mga added cherrypick:3.x Considered for cherry-picking into a future 3.x release cherrypick:3.5 Considered for cherry-picking into a future 3.5.x release cherrypick:4.1 Considered for cherry-picking into a future 4.1.x release cherrypick:4.2 Considered for cherry-picking into a future 4.2.x release labels Mar 9, 2024
@mihe mihe deleted the lsp-multiple-messages branch March 9, 2024 01:56

mihe commented Mar 9, 2024

@akien-mga Even without confirmation from the issue authors I'd say it's safe to close #69914 and #87410.

The fact that #69914 sees a slight improvement when using network/language_server/use_thread means that the message handling rate is almost certainly the issue, as using the separate thread would have effectively lowered the message polling interval from 100 ms to the thread's 50 ms tick rate.

The repro videos in #87410 show pretty much the exact issue as shown in the video in this PR's description.

@akien-mga
Member

Cherry-picked for 4.2.2.
Cherry-picked for 4.1.4.

@akien-mga akien-mga removed cherrypick:4.1 Considered for cherry-picking into a future 4.1.x release cherrypick:4.2 Considered for cherry-picking into a future 4.2.x release labels Mar 9, 2024
@AThousandShips AThousandShips removed cherrypick:3.x Considered for cherry-picking into a future 3.x release cherrypick:3.5 Considered for cherry-picking into a future 3.5.x release labels Apr 22, 2024