Reader is not woken up in case of immediate header flush with zero response body size #162
Looks like this also happens with non-empty replies. What we observe in a real service is that the second request on a keep-alive connection just gets stuck when there is nginx in front with slice caching enabled (nginx sends multiple range requests over keep-alive connections).
I was unable to get a repro test case with data being sent, and digging further narrowed this down to a bug in the service itself: the Lwt_stream.t was never closed, which meant the Body.t was never closed either. Sorry for the noise; the symptoms were very similar to this issue.
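For readers hitting similar symptoms, the class of bug described above can look roughly like this (a minimal sketch, not the actual service code; the handler shape, headers, and chunk contents are illustrative assumptions): an `Lwt_stream.t` feeding a streaming response body is never closed, so `Body.close_writer` is never reached and the keep-alive connection appears stuck.

```ocaml
open Lwt.Infix

(* Hypothetical handler sketch: an Lwt_stream feeds an httpaf
   streaming response body. *)
let handler reqd =
  let stream, push = Lwt_stream.create () in
  let response =
    Httpaf.Response.create
      ~headers:(Httpaf.Headers.of_list [ "transfer-encoding", "chunked" ])
      `OK
  in
  let body = Httpaf.Reqd.respond_with_streaming reqd response in
  Lwt.async (fun () ->
    Lwt_stream.iter (fun chunk -> Httpaf.Body.write_string body chunk) stream
    >|= fun () ->
    (* This only runs once the stream is closed. If [push None] below
       is forgotten, the body is never closed and the connection hangs. *)
    Httpaf.Body.close_writer body);
  push (Some "hello, ");
  push (Some "world");
  (* The fix in our case amounted to this: signal end-of-stream so the
     body can actually be closed. *)
  push None
```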
@seliopou have you had a chance to look at the test cases? Do they make sense?
I'm taking a look at this now (sorry for the delay!) and I think I have my head wrapped around what's happening. That said, it's hard to tell if this is an issue with the tests or not. For starters, I changed your first test slightly, to look like this:

```ocaml
reader_ready t;
yield_writer t (fun () -> writer_woken_up := true);
read_string t "HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n";
yield_reader t (fun () -> reader_woken_up := true);
!continue_response ();
Alcotest.(check bool) "Writer woken up" true !writer_woken_up;
write_response t ~body:"" response;
writer_yielded t;
Alcotest.(check bool) "Reader woken up" true !reader_woken_up;
```

and it worked just fine. Note that I pulled the call to

Note that we haven't called

Part of the problem here is that we are updating state in our

Out of curiosity, are you using your own driver as opposed to the provided lwt/async drivers? Or did you just try to boil an observed bug down to a simple test?
We do use a custom Lwt driver, as we needed TLS support. And this test case is a distilled repro of what we observed in the real service. Actually, @anmonteiro has fixed this issue in his fork (but I have not tried it, unfortunately, so I can't confirm the resolution on a real-world service). I'm not that strong on httpaf internals, but it's probably worth joining efforts and merging the fix into httpaf if possible.
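For anyone else writing a custom driver: the read side typically loops over httpaf's polling API, and this issue is precisely about the `Yield` callback never firing. A rough sketch follows (the `recv` and `close` functions stand in for your socket/TLS layer; the loop structure is illustrative, not taken from any real driver):

```ocaml
(* Sketch of a custom driver's read loop over the httpaf polling API.
   [recv] fills a buffer and returns the byte count; [close] tears
   down the transport. *)
let rec read_loop conn recv close =
  match Httpaf.Server_connection.next_read_operation conn with
  | `Read ->
    let buf = Bigstringaf.create 0x1000 in
    let len = recv buf in
    if len = 0 then
      ignore (Httpaf.Server_connection.read_eof conn buf ~off:0 ~len:0)
    else
      ignore (Httpaf.Server_connection.read conn buf ~off:0 ~len);
    read_loop conn recv close
  | `Yield ->
    (* httpaf promises to invoke this callback when reading can resume.
       The bug in this issue: after an immediate header flush with an
       empty response body, the callback was never invoked, so the
       driver never read the next keep-alive request. *)
    Httpaf.Server_connection.yield_reader conn (fun () ->
      read_loop conn recv close)
  | `Close -> close ()
```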
Hey @dpatti @seliopou, we've observed this issue in one of our to-be-production services again. Are there any concerns regarding upstreaming the fix?
We'll need to test our services against the latest master to see if it's stable. I'll update once we have a time slot for this. You can close this issue if you're certain that #181 fixes it; I'll reopen in case we run into this after an upgrade.
The first test in the snippet below demonstrates the problem, while the second test passes just fine:
Reproduces with 0.6.5 and 0.6.4. The subsequent request on a keep-alive connection gets stuck.