Connection not being returned to the pool after connection loss #220
This looks similar to a bug in Cython 0.27.1 (cython/cython#1907), I mean this bit:
Try building from source using the latest Cython, and see if that helps:

$ pip install Cython && pip install --no-binary asyncpg asyncpg
Did what you asked with Cython 0.27.2 and still got the same problem.
This error
Can you try building directly with
When I used pip, I got the following message:
I thought it worked. As I'm using Docker, make will be a little hard to do, but I'll try.
Used make and installed the module. Still got the same problem, but now the stack trace was different:
The tasks were destroyed as they did not trigger the timeout, but no
Did as you said. Exactly the same stack trace. There's a timeout that is not being triggered; I think it's in the fetch function, but I added it explicitly and nothing changed.
The
I'll work on that!
Got code to reproduce the bug: https://github.com/GabrielSalla/asyncpg_test_code. Start the script and wait for some dots to be printed on the screen. As soon as the dots start to be printed, drop the connection to the database and wait for the timeout. As expected, the connections will never return to the pool and there will be exceptions (including the print of the pool queue size) when the loop is closed.
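For readers without the repo handy, a minimal sketch of that kind of repro script (the DSN, pool sizes, and timings here are my assumptions, not the actual contents of the linked repository):

```python
import asyncio
import asyncpg

async def worker(pool):
    # Acquire with a timeout and run a query with a timeout; a dot is
    # printed for every query that completes normally.
    async with pool.acquire(timeout=10) as con:
        await con.fetch('SELECT pg_sleep(1)', timeout=10)
        print('.', end='', flush=True)

async def main():
    pool = await asyncpg.create_pool(
        'postgresql://postgres@localhost/postgres',  # placeholder DSN
        min_size=5, max_size=5)
    while True:
        tasks = [asyncio.ensure_future(worker(pool)) for _ in range(5)]
        await asyncio.wait(tasks, timeout=30)
        # _queue is the pool's private idle-connection queue; its size
        # shows how many connections are currently available.
        print('\npool queue size:', pool._queue.qsize())

asyncio.get_event_loop().run_until_complete(main())
```

Once the dots appear, cutting the network should make the timeouts fire while the checked-out connections never make it back into the queue.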
Thanks, I'll look into reproducing this on my end.
Connection.close() and Pool.release() each gained the new timeout parameter. The pool.acquire() context manager now applies the passed timeout to __aexit__() as well.

Connection.close() is now actually graceful. Instead of simply dropping the connection, it attempts to cancel the running query (if any), asks the server to terminate the connection and waits for the connection to terminate.

To test all this properly, implement a TCP proxy, which emulates sudden connectivity loss (i.e. packets not reaching the server).

Closes: #220
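As a rough illustration of how the new timeout parameters are used (the connection string is a placeholder):

```python
import asyncio
import asyncpg

async def main():
    pool = await asyncpg.create_pool('postgresql://postgres@localhost/postgres')

    con = await pool.acquire(timeout=5)
    try:
        await con.fetch('SELECT 1')
    finally:
        # Pool.release() now takes a timeout, so releasing a
        # connection to an unreachable server cannot hang forever.
        await pool.release(con, timeout=5)

    # Connection.close() is now graceful: it cancels the running
    # query (if any), asks the server to terminate the connection,
    # and waits at most `timeout` seconds for that to happen.
    con = await asyncpg.connect('postgresql://postgres@localhost/postgres')
    await con.close(timeout=5)

    await pool.close()

asyncio.get_event_loop().run_until_complete(main())
```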
That's awesome! Thanks for all the help :)
No problem! Thanks for bringing this up.
I was testing the new code, but when fetch() timed out the connection didn't return to the pool right away; it only returned when the timeout on acquire() triggered. Shouldn't it return right after the fetch timeout?
The connection is returned to the pool when
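In other words, a fetch() timeout cancels the query but does not by itself release the connection; it goes back to the pool only when the acquire() block exits (its __aexit__() performs the release). A small sketch of the distinction, assuming a reachable local server:

```python
import asyncio
import asyncpg

async def main():
    pool = await asyncpg.create_pool('postgresql://postgres@localhost/postgres')
    async with pool.acquire() as con:
        try:
            # The query times out after 1 second and is cancelled,
            # but the connection is still checked out by this block.
            await con.fetch('SELECT pg_sleep(60)', timeout=1)
        except asyncio.TimeoutError:
            pass
        # The connection is still held here...
    # ...and is returned to the pool only at this point, when the
    # acquire() context manager exits.
    await pool.close()

asyncio.get_event_loop().run_until_complete(main())
```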
Can you reproduce the issue with a local PostgreSQL install?: using docker image aidanlister/postgres-hstore
Can the issue be reproduced under both asyncio and uvloop?: didn't try uvloop
While my application is running some queries, I interrupt the connection by removing the ethernet cable from the computer. After doing so, some connections are never returned to the pool, even though a timeout is set on both the acquire() and fetch() methods. I know they are never returned because I print the pool queue size whenever the tasks finish.
I can't send the whole code because it's quite extensive, but the database operations are concentrated in a single file:
After removing the ethernet cable, I wait for some time so an external timeout is triggered (await asyncio.wait(futures, timeout=30)). When this happens, the application should have finished all the tasks (if everything went well) and I would be able to finish it safely. Before letting the loop close, there's a delay during which I interrupt the execution using Ctrl+C. It works fine when there are no pending tasks, but when the connection loss happens, some of the "lost" tasks are interrupted, generating a stack trace like the following one.

I've tried adding timeouts in other places, but nothing makes the connection go back to the pool. I even added some logs to track where it's happening, but couldn't find it.
A simple version of the application is:
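(The original snippet did not survive; a minimal sketch consistent with the description above, with a hypothetical table name and placeholder DSN, would be something like:)

```python
import asyncio
import asyncpg

async def run_query(pool):
    # Both acquire() and fetch() carry timeouts, yet after pulling
    # the ethernet cable the connection never reappears in the pool.
    async with pool.acquire(timeout=10) as con:
        return await con.fetch('SELECT * FROM some_table', timeout=10)

async def main():
    pool = await asyncpg.create_pool(
        'postgresql://postgres@localhost/postgres', max_size=10)
    futures = [asyncio.ensure_future(run_query(pool)) for _ in range(10)]
    # External timeout: give all tasks 30 seconds, then move on.
    await asyncio.wait(futures, timeout=30)
    # Print the pool's internal queue size (private API) to see how
    # many connections actually came back.
    print('pool queue size:', pool._queue.qsize())

asyncio.get_event_loop().run_until_complete(main())
```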