Receiving zero byte chunks under certain conditions #3525
GitMate.io thinks the contributor most likely able to help you is @asvetlov. Possibly related issues are #1428 (Method "read_chunk" of "BodyPartReader" returns zero bytes before eof), #1777 (parse_frame receives a 1 byte buf when using proxy and fails to parse header), #1615 (Chunk size is deprecated), #3281 (aiohttp client corrupts uploads that use chunked encoding due to race condition in socket reuse), and #1814 (Close websocket connection when pong not received).
@socketpair it looks close to your last fix, doesn't it?
Yes, maybe. @marconfus, please tell us the exact aiohttp version. It will be very hard to debug if I can't reproduce the bug. It would be nice if you could give me a traffic dump of a failed request/response with unencrypted traffic.
@mnach
Version is 3.5.3, installed via pip. Is there a way to create a dump with aiohttp (after decryption)?
I can reproduce it with a simple setup: client side on CentOS 7 or macOS 10.13, both with OpenSSL 1.0.2x. As before, an up-to-date Alpine system is fine.
Yes, I have reproduced that. Unfortunately, I can't fix it easily :( Sequence of events:
It looks like we could track the size of the chunk and not trigger your app for the last subchunk of an HTTP chunk. But actually it is not so simple: for a compressed payload we cannot track the size of a chunk -- the declared size is the size of the compressed data, but we receive decompressed data from the parser. Possible solution: a hack that tracks the chunk size when no compression is involved. For compressed payloads we can do nothing, so the behavior will stay the same. You may ask: if so, why did the previous implementation not trigger all that? I will answer: it had bugs, proven with test cases. For example, in your case it did not report the "end of chunk" event, so applications that use etcd watchers could not distinguish message borders. So, if HTTP chunk borders are of no interest to you, I advise you to ignore zero-length chunks, or just use:

```python
#!/usr/bin/python3
import asyncio
import aiohttp
import ssl

INTERNALURL = 'https://jira.marconfus.org/test'

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get(url=INTERNALURL, timeout=None, ssl=False) as resp:
            buffer = b""
            async for raw_data in resp.content.iter_any():
                print('chunk received', len(raw_data))
                buffer += raw_data
            print("len(buffer)", len(buffer))

print(ssl.OPENSSL_VERSION)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```

UPD! URGENT! IMPORTANT @asvetlov
@marconfus thanks for providing a way to trigger the bug.
(cherry picked from commit 5c4cb82) Co-authored-by: Коренберг Марк <[email protected]>
After merging my latest urgent changes, the example I gave in an earlier message (iter_any) will work, @asvetlov. Without them, it works but yields only one chunk.
…3528) (cherry picked from commit 5c4cb82) Co-authored-by: Коренберг Марк <[email protected]>
The fix seems to work fine. Thanks for the fast response!
…) (#3560) (cherry picked from commit c3f494f) Co-authored-by: Коренберг Марк <[email protected]>
…) (#3560) (#3565) (cherry picked from commit c3f494f) Co-authored-by: Коренберг Марк <[email protected]>
Long story short
When running under "Red Hat Enterprise Linux Server release 7.5" the client only gets a partial result from an https URL using chunked transfer encoding.
In another ticket I found the following test code:
Expected behaviour
Output of Python 3.6.6 in an up-to-date Alpine Docker container:
Actual behaviour
Output under RHEL 7.5 with Python 3.6.3:
The number and position of zero-byte chunks sometimes vary from call to call.
Of course, when awaiting response.text() I get only 8184 (or a multiple thereof) bytes of content instead of the expected 111883.
Calling the same URL over plain http I get varying chunk sizes, but never a zero-length one before the end, so everything is fine.
Using the Requests library in the same environment with the same https URL, I also get the correct result.
Steps to reproduce
Unfortunately the URL is not publicly reachable. It's a web service with many clients on different platforms (curl, wget, Java, Firefox), and no one else has the problem.
Your environment
aiohttp 3.5.3 (client)
The only obvious difference between the working/nonworking environment is the ssl library.
Is there anything else I can contribute to find the error?