Can't acquire connection from pool #113
Hi, try:

async with engine.acquire() as conn:
    await conn.execute("SELECT 1;")

I think it is more likely an issue with the transaction. |
Hi! Thanks for the feedback. I have modified the code as follows:
but the problem persists. I provide a load of 120 concurrent requests for 10 minutes, and after a while the service hangs again. In the aiopg logs I see something like this:
... and then the service hangs... |
Note that the pool maxsize is 10 connections by default. If I increase the maximum pool size, the load test passes. |
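For reference, the pool ceiling is the maxsize argument to aiopg.sa.create_engine; below is a minimal sketch of raising it, with a placeholder DSN and an illustrative limit (both are assumptions, not values from this thread):

import aiopg.sa

async def make_big_pool_engine():
    # Sketch only: raise the pool ceiling above the expected peak concurrency
    # so acquires do not starve; minsize/maxsize are aiopg pool parameters.
    return await aiopg.sa.create_engine(
        'postgresql://user:password@localhost/dbname',  # placeholder DSN
        minsize=1,
        maxsize=150,
    )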
It seems to me that when the pool reaches its maximum size, connections are not returned to the pool and aiopg cannot acquire a new connection from it, or something like that. |
From the logs, I see you still do inserts... I suspect that a connection is stuck in an unknown state due to a transaction error or similar; I may be wrong. |
If you wish, you can run this code with pool max size = 10 (the default) and provide a load of about 120 concurrent requests for some time, and you will see that the inserts stop and the service hangs... The point is, when I increase the max pool size, e.g. to 1000, the load test passes. |
Maybe it is not a problem and I can just set the max pool size very high, but I wonder why the service hangs with a small pool size. |
If I press Ctrl-C (KeyboardInterrupt) I get this:
|
I think the question should be: "is it possible to get another connection from the pool if the previous one has not been returned yet?" |
I have an issue like this one. This was printed before the error: |
The current pool implementation does not have a timeout, and you can deadlock it easily if you acquire two connections in a nested manner (see the timeout sketch after this comment). Attaching an easier test script to reproduce:

import asyncio
import logging
import aiopg.sa
logger = logging.getLogger()
CONN_STRING = 'postgresql://test:[email protected]/aiopg_test'
async def create_engine(pool_size):
    return await aiopg.sa.create_engine(
        CONN_STRING,
        maxsize=pool_size
    )

async def q2(engine, task_id):
    logger.debug('starting task {}'.format(task_id))
    async with engine.acquire():
        logger.debug('acquired conn task {}'.format(task_id))
        pass
    logger.debug('released conn task {}'.format(task_id))

async def q1(engine, task_id, nested):
    logger.debug('starting task {}'.format(task_id))
    async with engine.acquire():
        logger.debug('acquired conn task {}'.format(task_id))
        if nested:
            # Acquire a second connection while still holding the first one.
            await q2(engine, task_id)

async def main(concurrency, iterations, pool_size, nested, loop):
    engine = await create_engine(pool_size)
    for n in range(iterations):
        print('iteration={n}'.format_map(locals()))
        tasks = [asyncio.ensure_future(q1(engine, i, nested), loop=loop) for i in range(concurrency)]
        await asyncio.wait(tasks, loop=loop)

if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--concurrency', type=int, default=10)
    parser.add_argument('-i', '--iterations', type=int, default=10)
    parser.add_argument('-p', '--pool-size', type=int, default=10)
    parser.add_argument('-n', '--nested', action='store_true')
    args = parser.parse_args()
    loop = asyncio.get_event_loop()
    logging.basicConfig(level=logging.DEBUG, format='%(funcName)s %(lineno)d: %(message)s')
    loop.run_until_complete(main(args.concurrency, args.iterations, args.pool_size, args.nested, loop=loop))

Not nested, everything is OK: python test_pool.py -c 1000. Nested (run with -n), it deadlocks if concurrency is higher than the pool size.
|
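Since the pool has no acquire timeout of its own (as noted above), one way to turn a starved pool into an error instead of a hang is to bound the whole acquire-and-query step with asyncio.wait_for. This is only a sketch using plain asyncio, not an aiopg feature, and the helper name is hypothetical:

import asyncio

async def query_with_timeout(engine, timeout=5.0):
    # Hypothetical helper: if no connection becomes free within `timeout`
    # seconds, asyncio.TimeoutError is raised instead of waiting forever.
    async def _work():
        async with engine.acquire() as conn:
            await conn.execute("SELECT 1;")
    await asyncio.wait_for(_work(), timeout)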
related to #16 |
This is good news. We will look forward to the completion of task #16, but it has been open since 2014 :-) |
Task #16 is not really needed, see this comment: #125 (comment). As of that comment, we might even consider closing this current issue as invalid. @jettify |
Use |
@mpaolini how can your code above be rewritten to solve the problem with nesting when concurrency is higher than the pool size? |
For your case, just pass the already acquired connection to
|
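To illustrate that suggestion against the test script above (the helper names q1/q2 come from that script; the rewrite itself is only a sketch), have the nested coroutine take the already acquired connection instead of calling engine.acquire() a second time:

async def q2(conn, task_id):
    # Re-use the caller's connection: no second acquire, so no deadlock
    # even when concurrency exceeds the pool size.
    await conn.execute("SELECT 1;")

async def q1(engine, task_id, nested):
    async with engine.acquire() as conn:
        if nested:
            await q2(conn, task_id)  # pass the connection down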
Of course that solves it, but what about nested connection acquiring, as @mpaolini described, is that possible? |
Hi!
I have a problem with aiopg when acquiring a new connection from the pool. I have used a simple piece of code for this:
Then I provide a load of 100 concurrent requests, and after a while the service hangs. It seems to me that aiopg cannot get another connection from the pool, because when I increase the maximum pool size the load test passes.
I have used the following:
aiopg-0.9.2
psycopg2-2.6.1
SQLAlchemy-1.0.12
tornado-4.3