Fix some issues with automatic gzip decompression #861
Conversation
This plugs a buffer stream in between the decompressor stream and the "user" stream. This makes sure that (i) the correct number of bytes is read from the HTTP stream and thus decoded (fixes #859) and (ii) we can read the HTTP stream in chunks instead of byte-by-byte (the previous code even warns about this usage). Fixes #859.
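For readers unfamiliar with the wiring, here is a minimal non-streaming sketch of the idea (not the PR's actual code): the decompressor writes its output into an in-memory `Base.BufferStream`, which the caller can then drain. `GzipDecompressorStream` and `GzipCompressor` come from CodecZlib; `compressed_body` is just a stand-in for the bytes that would come off the HTTP stream.

```julia
using CodecZlib  # GzipCompressor, GzipDecompressorStream

# Stand-in for the compressed HTTP body (illustrative only).
compressed_body = transcode(GzipCompressor, Vector{UInt8}("hello, world"))

buf = Base.BufferStream()               # plugged between the decompressor and the "user"
gzstream = GzipDecompressorStream(buf)  # writing compressed bytes here decodes them into `buf`

write(gzstream, compressed_body)
close(gzstream)                         # flush the codec and close `buf`

decoded = read(buf)                     # drain the decoded bytes
@assert String(decoded) == "hello, world"
```

The streaming variant discussed below adds a task so the buffer can be drained while the body is still being fed in chunks.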
Codecov Report
@@ Coverage Diff @@
## master #861 +/- ##
==========================================
+ Coverage 79.95% 80.00% +0.04%
==========================================
Files 36 36
Lines 2849 2860 +11
==========================================
+ Hits 2278 2288 +10
- Misses 571 572 +1
# at eof.
buf = BufferStream()
gzstream = GzipDecompressorStream(buf)
tsk = @async begin
Can you explain why the `@async` here and the `wait` later are needed? Like, why can't we do something like `write(res.body, GzipDecompressorStream(BufferStream(stream)))`?
There is no method `BufferStream(stream)`.

The task is needed since we otherwise have two blocking calls (`write` to the decompressor stream and `read` from the buffer) and we can't make progress. Perhaps there is a better way, but the code in this patch is how I have done things manually when passing `response_stream`, at least before this was built into HTTP.jl.
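For context, the producer/consumer shape being described is roughly the following (a hedged sketch, not the patch's exact code; `http_stream` and `response_stream` are placeholder names). Reading from the buffer blocks until the decompressor has written something into it, so the drain loop runs in its own task while the main code feeds the decompressor, and `wait` joins the two at the end.

```julia
using CodecZlib  # GzipCompressor, GzipDecompressorStream

# Placeholders standing in for the HTTP body stream and the user's response_stream.
http_stream = IOBuffer(transcode(GzipCompressor, Vector{UInt8}("streamed payload")))
response_stream = IOBuffer()

buf = Base.BufferStream()
gzstream = GzipDecompressorStream(buf)

# Consumer task: drains decoded bytes into the user's stream. `eof(buf)` blocks
# until data arrives or `buf` is closed, so this must run concurrently with the
# producer loop below.
tsk = @async while !eof(buf)
    write(response_stream, readavailable(buf))
end

# Producer: feed the compressed body to the decompressor in chunks; the decoded
# output lands in `buf` for the task above to pick up.
while !eof(http_stream)
    write(gzstream, readavailable(http_stream))
end

close(gzstream)  # flush the codec and close `buf`, ending the consumer loop
wait(tsk)        # make sure everything reached `response_stream`

@assert String(take!(response_stream)) == "streamed payload"
```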
@@ -1,7 +1,7 @@
 name = "HTTP"
 uuid = "cd3eb016-35fb-5094-929b-558a96fad6f3"
 authors = ["Jacob Quinn", "contributors: https://github.com/JuliaWeb/HTTP.jl/graphs/contributors"]
-version = "1.0.1"
+version = "1.0.2"
Are we allowed to do a patch release when adding a new dependency? Or does that require a minor bump?
I don't see why fixing a bug by using a dependency would be problematic.