ConnectionResetError: Cannot write to closing transport #27
Comments
Your code works perfectly with live SOCKS proxies. Try using another proxy.
Another possible reason is that Google has blacklisted your SOCKS proxy. Try another target resource as well.
I'm getting the error above no matter what proxy I use; the proxies are verified to be working. Here's my script:

Logs from the proxy server when running the script above:
I tested your code with different types of free proxies from http://free-proxy.cz. It works perfectly for me. Please provide the simplest code sample that fails with any proxy.
I'm using the exact same example code, and the same error is returned with every proxy. Any ideas on how to proceed from here?
I have no idea. Your code works correctly for me with any proxy.
I have the same problem: the proxies are working, but specifically through this library they don't. No idea where to start digging.
This error occurs when accessing some sites via the proxy.
@khoben Are you using CPython 3.11.5? It seems to be a regression in 3.11.5.

```python
import asyncio

import aiohttp
from aiohttp_socks import ProxyConnector


async def fetch(url):
    connector = ProxyConnector.from_url('socks5://127.0.0.1:1080')
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get(url) as response:
            return f'{response.status} {response.reason}'


if __name__ == '__main__':
    print(asyncio.run(fetch('http://example.com')))
```

```
$ docker run --rm -v $PWD/test.py:/test.py --network host python:3.11.4-slim sh -c 'pip install -qq aiohttp_socks; python test.py'
200 OK
$ docker run --rm -v $PWD/test.py:/test.py --network host python:3.11.5-slim sh -c 'pip install -qq aiohttp_socks; python test.py'
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 558, in _request
    resp = await req.send(conn)
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 670, in send
    await writer.write_headers(status_line, self.headers)
  File "/usr/local/lib/python3.11/site-packages/aiohttp/http_writer.py", line 130, in write_headers
    self._write(buf)
  File "/usr/local/lib/python3.11/site-packages/aiohttp/http_writer.py", line 75, in _write
    raise ConnectionResetError("Cannot write to closing transport")
ConnectionResetError: Cannot write to closing transport

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "//test.py", line 15, in <module>
    print(asyncio.run(fetch('http://example.com')))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "//test.py", line 10, in fetch
    async with session.get(url) as response:
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 1141, in __aenter__
    self._resp = await self._coro
                 ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 572, in _request
    raise ClientOSError(*exc.args) from exc
aiohttp.client_exceptions.ClientOSError: Cannot write to closing transport
```

Not sure if python/cpython#107913 has fixed the regression; I haven't done any tests yet. @romis2012, could you reopen the issue until the next CPython release, so that others can find it easily? Or I can open a new one if you'd prefer.
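As discussed later in this thread, the regression comes from CPython 3.11.5 closing the transport when a `StreamWriter` is garbage-collected (the accompanying ResourceWarning only lands in 3.13, per python/cpython#107650). Below is a minimal sketch of just that mechanism, assuming nothing beyond stdlib asyncio and using a throwaway local server instead of a proxy; the printed value is version-dependent (True on 3.11.5 and later, False earlier):

```python
import asyncio
import gc
import warnings


async def main() -> bool:
    # throwaway local server so the demo needs no network or proxy
    async def handle(reader, writer):
        await reader.read()  # hold the connection open until client EOF

    server = await asyncio.start_server(handle, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection('127.0.0.1', port)
    transport = writer.transport

    with warnings.catch_warnings():
        warnings.simplefilter('ignore', ResourceWarning)
        # drop the only reference to the StreamWriter, as a connector
        # might do after a handshake when it keeps only the transport
        del writer
        gc.collect()

    # on 3.11.5+ the writer's finalizer has already closed the transport
    closing = transport.is_closing()

    transport.close()
    server.close()
    await server.wait_closed()
    return closing


if __name__ == '__main__':
    print(asyncio.run(main()))
```

This is only an illustration of the interpreter behavior change, not of anything aiohttp_socks does literally.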
Before we can report the issue to CPython, we need to figure out a minimal reproducer. @romis2012, could you help?
I can't reproduce this issue, as I wrote above.
Yes, it looks like it broke after August 26, when the Docker container with the base image was rebuilt.
Even on CPython 3.11.5? That's quite weird. I've reproduced the issue on both my PC (Debian unstable) and my VPS (Debian bookworm), with both my own socks5 proxy and proxies from http://free-proxy.cz/. The issue is only reproducible on CPython 3.11.5; 3.11.0–3.11.4 work fine (test.py: #27 (comment)):

```
$ for patch in {0..5}; do docker run --rm -v $PWD/test.py:/test.py --network host python:3.11.${patch}-slim sh -c 'pip install -qq aiohttp_socks; python test.py'; done
200 OK
200 OK
200 OK
200 OK
200 OK
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 558, in _request
    resp = await req.send(conn)
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 670, in send
    await writer.write_headers(status_line, self.headers)
  File "/usr/local/lib/python3.11/site-packages/aiohttp/http_writer.py", line 130, in write_headers
    self._write(buf)
  File "/usr/local/lib/python3.11/site-packages/aiohttp/http_writer.py", line 75, in _write
    raise ConnectionResetError("Cannot write to closing transport")
ConnectionResetError: Cannot write to closing transport

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "//test.py", line 15, in <module>
    print(asyncio.run(fetch('http://example.com')))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "//test.py", line 10, in fetch
    async with session.get(url) as response:
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 1141, in __aenter__
    self._resp = await self._coro
                 ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 572, in _request
    raise ClientOSError(*exc.args) from exc
aiohttp.client_exceptions.ClientOSError: Cannot write to closing transport
```
Oh, yes, on Python 3.11.5 I get the same error.
Fixed in v0.8.1 |
The fix in v0.8.1 may lead to a memory leak.

```python
import asyncio

import aiohttp
from aiohttp_socks import ProxyConnector


async def main():
    connector = ProxyConnector.from_url('socks5://127.0.0.1:1080')
    async with aiohttp.ClientSession(connector=connector) as session:
        for i in range(10):
            if connector._streams:
                # the connection to the proxy server is lost somehow;
                # here we simulate the situation by manually closing the writer
                connector._streams[-1].writer.close()
            try:
                async with session.get('http://example.com') as response:
                    print(response.status)
            except Exception as e:
                print(e)
        print(len(connector._streams))  # 10


if __name__ == '__main__':
    asyncio.run(main())
```
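One hedged way to avoid that kind of accumulation is to drop stored writers whose transport is already closing before the list is reused. The sketch below is only an illustration of the pruning idea, not aiohttp_socks code: `StreamStore` and `prune` are invented names, and a throwaway local server stands in for the proxy.

```python
import asyncio


class StreamStore:
    """Illustrative container (not aiohttp_socks API): keeps writers
    referenced on purpose, but drops ones whose transport is already
    closing so dead connections do not accumulate forever."""

    def __init__(self):
        self._streams = []

    def add(self, writer):
        self._streams.append(writer)

    def prune(self):
        self._streams = [w for w in self._streams
                         if not w.transport.is_closing()]

    def __len__(self):
        return len(self._streams)


async def main() -> int:
    # throwaway local server standing in for the proxy
    async def handle(reader, writer):
        await reader.read()  # wait for client EOF

    server = await asyncio.start_server(handle, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]

    store = StreamStore()
    for i in range(4):
        _, writer = await asyncio.open_connection('127.0.0.1', port)
        store.add(writer)
        if i % 2 == 0:
            writer.close()  # simulate a lost proxy connection

    store.prune()          # the two closed writers are dropped here
    remaining = len(store)

    for w in store._streams:
        w.close()
    server.close()
    await server.wait_closed()
    return remaining


if __name__ == '__main__':
    print(asyncio.run(main()))  # 2
```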
The fix in e1541cc is somehow infectious... During the lifecycle of a proxied session, the fix in python/cpython#107836 is discarded, making TLS connections leak again. Moreover, the connection to the proxy server is also leaked for about 30 seconds (maybe it is just closed by my proxy server instead of being garbage collected?). Neither leak was observed in v0.8.1. (The script is partly taken from python/cpython#106684, thanks.)

```python
import os
import asyncio
import gc
import signal

import aiohttp
from aiohttp_socks import ProxyConnector

HOST = "cloudflare.com"  # will keep the connection alive for a few minutes at least
PROXY = 'socks5://127.0.0.1:1080'
BUF = ''
TIMES = 0


async def query():
    await asyncio.sleep(2)  # wait for socks()
    reader, writer = await asyncio.open_connection(HOST, 443, ssl=True)
    # no "Connection: close" header, so the remote side will keep the connection open
    writer.write(f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n".encode())
    await writer.drain()
    # only read the first header line
    try:
        return (await reader.readline()).decode()
    finally:
        # closing the writer would properly finalize the connection
        # writer.close()
        pass
    # reader and writer are now unreachable


async def socks():
    async with aiohttp.ClientSession(connector=ProxyConnector.from_url(PROXY)) as session:
        async with session.get(f'https://{HOST}') as response:
            # simulate a long-lived session;
            # StreamWriter.__del__() stays unavailable for the duration
            await asyncio.sleep(5)
            return response.status


def summarize():
    global BUF, TIMES
    if TIMES and BUF:
        print(f'... the above {len(BUF.splitlines())} line(s) were repeated {TIMES} time(s) ...')
    TIMES, BUF = 0, ''


async def lsof():
    global BUF, TIMES
    proc = await asyncio.create_subprocess_shell(
        f"lsof -np {os.getpid()} | grep TCP",
        stdout=asyncio.subprocess.PIPE,
    )
    buf = (await proc.stdout.read()).decode().strip()
    if not buf:
        return
    if buf != BUF:
        summarize()
        print(buf)
        BUF = buf
    else:
        TIMES += 1


async def amain():
    await asyncio.gather(query(), socks())
    # The _SSLProtocolTransport object is kept in memory and the
    # connection won't be released until the remote side closes it
    for _ in range(200):
        # Just to be sure everything is freed, just in case
        gc.collect()
        await asyncio.gather(asyncio.sleep(1), lsof())
    summarize()


def main():
    print(f"PID {os.getpid()}")
    task = asyncio.ensure_future(amain())
    loop = asyncio.get_event_loop()
    loop.add_signal_handler(signal.SIGTERM, task.cancel)
    loop.add_signal_handler(signal.SIGINT, task.cancel)
    loop.run_until_complete(task)


if __name__ == "__main__":
    main()
```

```
$ python3 test.py  # v0.8.1
PID ****
$ python3 test.py  # v0.8.2
PID ****
python **** **** 6u IPv4 **** 0t0 TCP 127.0.0.1:****->127.0.0.1:1080 (ESTABLISHED)
python **** **** 8u IPv6 **** 0t0 TCP [****]:****->[****]:https (ESTABLISHED)
... the above 2 line(s) were repeated 29 time(s) ...
python **** **** 8u IPv6 **** 0t0 TCP [****]:****->[****]:https (ESTABLISHED)
... the above 1 line(s) were repeated 169 time(s) ...
```
You just have to close the writer manually... In any case, closing the connection in the `StreamWriter`'s finalizer is a questionable change. You can continue experimenting with version 0.8.3.
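For reference, a minimal sketch of closing the writer manually: plain stdlib streams against a throwaway local echo server (no proxy or aiohttp_socks involved), where the `finally` block releases the connection deterministically instead of relying on garbage collection.

```python
import asyncio


async def fetch_line(host: str, port: int) -> bytes:
    reader, writer = await asyncio.open_connection(host, port)
    try:
        writer.write(b'ping\r\n')
        await writer.drain()
        return await reader.readline()
    finally:
        # close the writer explicitly: the connection is released
        # right here rather than whenever the GC gets around to it
        writer.close()
        await writer.wait_closed()


async def main() -> bytes:
    # throwaway local echo server so the sketch needs no network
    async def handle(reader, writer):
        line = await reader.readline()
        writer.write(line)
        await writer.drain()
        writer.close()

    server = await asyncio.start_server(handle, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]
    line = await fetch_line('127.0.0.1', port)
    server.close()
    await server.wait_closed()
    return line


if __name__ == '__main__':
    print(asyncio.run(main()))  # b'ping\r\n'
```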
Yes, CPython will raise a ResourceWarning in 3.13 (python/cpython#107650). I just meant that monkey-patching the stdlib is infectious...

I agree that more consideration is needed before backporting the fix to CPython 3.11; that's why I suggested reporting it to CPython. The connection leak, however, is a problem that can be minor or significant. It is hard to tell whether not breaking existing projects is more important than having the issue fixed. At least it is not a bad story, since it helped us find a connection leak in aiohttp_socks.
The infectiousness is gone, but the connection to the proxy server is still leaked.
I've also tried to run my script with CPython 3.11.4 and aiohttp_socks 0.8.0, which shows that the leak of proxy connections is a long-standing issue. If a leak has been found, we'd better fix it. 05f5228 is a nice fix, except that the stream(s) are stored in a list. I'm not very familiar with network stuff; what if we just stored a single stream as a plain instance attribute? I suppose that when _wrap_create_connection() is called twice or more, the previous stream(s) (and their transports) should already have been lost or closed, or else why would it be called again?
And what? We only fix our "internal" writers; this will not affect the behavior of other writers in any way.
Nothing leaks anywhere. Check your test code. In version 0.8.3 the connector behavior is completely equivalent to version 0.8.0 on Python < 3.11.5.
If such an issue exists, then this is an asyncio issue.
```python
import asyncio

HOST = 'ifconfig.me'
PORT = 80


async def connect() -> asyncio.Transport:
    reader, writer = await asyncio.open_connection(
        host=HOST,
        port=PORT,
    )
    return writer.transport  # type: ignore


async def fetch():
    loop = asyncio.get_running_loop()
    transport = await connect()
    # on Python 3.11.5 the transport is already closed here
    reader = asyncio.StreamReader(limit=2**16, loop=loop)
    protocol = asyncio.StreamReaderProtocol(reader, loop=loop)
    transport.set_protocol(protocol)
    loop.call_soon(protocol.connection_made, transport)
    loop.call_soon(transport.resume_reading)
    writer = asyncio.StreamWriter(
        transport=transport,
        protocol=protocol,
        reader=reader,
        loop=loop,
    )
    request = f'GET /ip HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n'.encode()
    writer.write(request)
    await writer.drain()
    response = await reader.read(-1)
    print(response)
    writer.close()


if __name__ == '__main__':
    asyncio.run(fetch())
```

The code above works fine on Python < 3.11.5 but fails on 3.11.5:

```
Traceback (most recent call last):
  File "/home/roman/projects/python/python-socks/usage_issue_27_reproducer.py", line 34, in <module>
    asyncio.run(fetch())
  File "/usr/local/miniconda/envs/py311_5/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/local/miniconda/envs/py311_5/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/miniconda/envs/py311_5/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/roman/projects/python/python-socks/usage_issue_27_reproducer.py", line 27, in fetch
    await writer.drain()
  File "/usr/local/miniconda/envs/py311_5/lib/python3.11/asyncio/streams.py", line 378, in drain
    await self._protocol._drain_helper()
  File "/usr/local/miniconda/envs/py311_5/lib/python3.11/asyncio/streams.py", line 167, in _drain_helper
    raise ConnectionResetError('Connection lost')
ConnectionResetError: Connection lost
```
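A hedged workaround sketch for the reproducer above: if the original `StreamWriter` stays referenced while the transport is re-wrapped, its finalizer (which closes the transport on 3.11.5 and later) can never fire early. A throwaway local echo server stands in for ifconfig.me; this is only an illustration of the handoff, not the library's actual fix.

```python
import asyncio


async def main() -> bytes:
    # local echo server replacing the real remote host
    async def handle(reader, writer):
        line = await reader.readline()
        writer.write(line)
        await writer.drain()
        writer.close()

    server = await asyncio.start_server(handle, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]
    loop = asyncio.get_running_loop()

    # keep orig_writer referenced for the whole function: its finalizer
    # (which would close the transport) then cannot run prematurely
    orig_reader, orig_writer = await asyncio.open_connection('127.0.0.1', port)
    transport = orig_writer.transport

    # re-wrap the transport in a fresh reader/protocol/writer
    reader = asyncio.StreamReader(limit=2 ** 16, loop=loop)
    protocol = asyncio.StreamReaderProtocol(reader, loop=loop)
    transport.set_protocol(protocol)
    protocol.connection_made(transport)
    writer = asyncio.StreamWriter(transport, protocol, reader, loop)

    writer.write(b'hello\r\n')
    await writer.drain()
    line = await reader.readline()

    writer.close()
    server.close()
    await server.wait_closed()
    return line


if __name__ == '__main__':
    print(asyncio.run(main()))
```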
Just consider a use case like this:

```python
async def fetch(session, url):
    async with session.get(url) as r:
        return await r.text()


connector = ProxyConnector.from_url(PROXY_URL)
async with ClientSession(connector=connector) as s:
    tasks = [fetch(s, 'https://google.com/'), fetch(s, 'https://check-host.net/ip')]
    result = await asyncio.gather(*tasks)
    print(result)
```
This should fix an issue when running with Python 3.11 (possibly only 3.11.5 and later).

```
47.45 | I | exchange_rate.CoinGecko | getting fx quotes for EUR
48.18 | E | exchange_rate.CoinGecko | failed fx quotes: ClientOSError('Cannot write to closing transport')
Traceback (most recent call last):
  File "...\electrum\env11\Lib\site-packages\aiohttp\client.py", line 599, in _request
    resp = await req.send(conn)
           ^^^^^^^^^^^^^^^^^^^^
  File "...\electrum\env11\Lib\site-packages\aiohttp\client_reqrep.py", line 712, in send
    await writer.write_headers(status_line, self.headers)
  File "...\electrum\env11\Lib\site-packages\aiohttp\http_writer.py", line 130, in write_headers
    self._write(buf)
  File "...\electrum\env11\Lib\site-packages\aiohttp\http_writer.py", line 75, in _write
    raise ConnectionResetError("Cannot write to closing transport")
ConnectionResetError: Cannot write to closing transport

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "...\electrum\electrum\exchange_rate.py", line 85, in update_safe
    self._quotes = await self.get_rates(ccy)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...\electrum\electrum\exchange_rate.py", line 345, in get_rates
    json = await self.get_json('api.coingecko.com', '/api/v3/exchange_rates')
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...\electrum\electrum\exchange_rate.py", line 69, in get_json
    async with session.get(url) as response:
  File "...\electrum\env11\Lib\site-packages\aiohttp\client.py", line 1187, in __aenter__
    self._resp = await self._coro
                 ^^^^^^^^^^^^^^^^
  File "...\electrum\env11\Lib\site-packages\aiohttp\client.py", line 613, in _request
    raise ClientOSError(*exc.args) from exc
aiohttp.client_exceptions.ClientOSError: Cannot write to closing transport
```

related: romis2012/aiohttp-socks#27 python/cpython#109321
This version has the bugfix for romis2012/aiohttp-socks#27; see 80e330d.
Python version: 3.8.0
Sample code:
Error:
I believe the SOCKS proxy works, as the same async code works using httpx, and synchronous code works using requests-socks.