Support timeouts in Connection.close() and Pool.release() #222

Merged: 3 commits from the timeouts branch into master on Nov 15, 2017

Conversation

elprans (Member) commented Nov 2, 2017

Connection.close() and Pool.release() each gained a new timeout
parameter. The pool.acquire() context manager now applies the
passed timeout to __aexit__() as well.
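
For context, a minimal usage sketch of the new parameters (the timeout values and connection settings here are illustrative, not taken from the PR):

    import asyncio
    import asyncpg

    async def main():
        pool = await asyncpg.create_pool(database='postgres')

        # The acquire() context manager applies the timeout to
        # __aexit__() (the implicit release) as well.
        async with pool.acquire(timeout=5.0) as con:
            await con.fetchval('SELECT 1')

        # Pool.release() now accepts an explicit timeout.
        con = await pool.acquire()
        await pool.release(con, timeout=5.0)
        await pool.close()

        # Connection.close() now accepts a timeout, too.
        con = await asyncpg.connect(database='postgres')
        await con.close(timeout=5.0)

    asyncio.run(main())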

Connection.close() is now actually graceful. Instead of simply dropping
the connection, it attempts to cancel the running query (if any), asks
the server to terminate the connection, and waits for it to terminate.
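
From the caller's side, a hedged sketch of bounding the graceful close and falling back to a hard drop; this assumes the expired timeout surfaces as asyncio.TimeoutError, and terminate() is asyncpg's existing non-graceful close:

    import asyncio

    async def close_gracefully(con, deadline=5.0):
        try:
            # Graceful path: cancel any running query, ask the server to
            # terminate the connection, and wait for it to go away.
            await con.close(timeout=deadline)
        except asyncio.TimeoutError:
            # The graceful path did not finish in time; drop the
            # transport immediately.
            con.terminate()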

To test all this properly, implement a TCP proxy, which emulates sudden
connectivity loss (i.e. packets not reaching the server).
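
A minimal sketch of such a proxy built on asyncio streams (the class name and the drop_traffic switch are hypothetical, not the helper actually added to the test suite):

    import asyncio

    class FlakyTCPProxy:
        """Relays TCP traffic to a backend; when drop_traffic is set,
        received bytes are silently discarded, emulating packets that
        never reach the server."""

        def __init__(self, backend_host, backend_port):
            self._backend = (backend_host, backend_port)
            self.drop_traffic = False

        async def start(self, host='127.0.0.1', port=0):
            self._server = await asyncio.start_server(
                self._handle_client, host, port)
            # Return the address clients (e.g. asyncpg) should connect to.
            return self._server.sockets[0].getsockname()

        async def _handle_client(self, client_reader, client_writer):
            backend_reader, backend_writer = await asyncio.open_connection(
                *self._backend)
            await asyncio.gather(
                self._relay(client_reader, backend_writer),
                self._relay(backend_reader, client_writer))

        async def _relay(self, reader, writer):
            try:
                while True:
                    data = await reader.read(4096)
                    if not data:
                        break
                    if not self.drop_traffic:
                        writer.write(data)
                        await writer.drain()
            finally:
                writer.close()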

Closes: #220

@elprans requested a review from 1st1 on November 2, 2017 21:33
@elprans force-pushed the timeouts branch 2 times, most recently from 2dcfa46 to dcb3816 on November 4, 2017 19:37

    finally:
        self._con = None

else:
    try:
        await self._con.reset()
        budget = timeout
Member:

Handle when timeout is None
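
The point is the usual "time budget" pattern: elapsed time can only be subtracted from the remaining timeout when a timeout was actually given. A minimal None-safe sketch (run_step is a hypothetical coroutine, not asyncpg code):

    import time

    async def run_with_budget(run_step, timeout=None):
        budget = timeout                  # None means "no time limit"
        started = time.monotonic()
        await run_step(timeout=budget)
        if budget is not None:
            # Charge the elapsed time against the budget only when one exists.
            budget -= time.monotonic() - started
        return budget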

@@ -353,6 +353,9 @@ cdef class BaseProtocol(CoreProtocol):
    # Abort the COPY operation on any error in
    # output sink.
    self._request_cancel()
    # Make asyncio shut up about unretrieved
    # QueryCanceledError
    waiter.add_done_callback(lambda f: f.exception())
Member:

Please make sure you have a warning here, as we fixed that stale cancellation exception warning in 3.6.2

elprans (Member, Author), Nov 6, 2017:

This is actually not about the asyncio.CancelledError, this is asyncpg.QueryCanceledError, which now propagates when an operation is cancelled.
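
For reference, the warning in question is asyncio's "exception was never retrieved" log message, emitted when a future that holds an exception is garbage-collected before anyone consumes it. Retrieving the exception in a done-callback, as the diff above does, marks it as consumed. A standalone sketch of the pattern:

    import asyncio

    async def main():
        waiter = asyncio.get_running_loop().create_future()

        # Consume the exception so asyncio does not log
        # "Future exception was never retrieved" when the future is GC'd.
        waiter.add_done_callback(lambda f: f.exception())

        waiter.set_exception(RuntimeError('query was cancelled'))
        await asyncio.sleep(0)  # give the callback a chance to run

    asyncio.run(main())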

asyncpg/pool.py (outdated diff)
@@ -201,7 +201,8 @@ def __init__(self, pool, *, connect_args, connect_kwargs,
    await asyncio.wait_for(
        self._con._protocol._wait_for_cancellation(),
        timeout, loop=self._pool._loop)
    budget -= time.monotonic() - started
    if budget is not None:
Member:

Can you add a test that would fail without this change?

elprans (Member, Author):

Done.
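
A hedged sketch of what such a test might look like (hypothetical, not the test that was actually added; it assumes a local 'postgres' database and exercises the release path that waits for a pending cancellation):

    import asyncio
    import contextlib
    import asyncpg

    async def test_release_without_timeout():
        # Releasing with no timeout (so the budget stays None all the way
        # through) after a cancelled query must not fail on budget arithmetic.
        pool = await asyncpg.create_pool(
            database='postgres', min_size=1, max_size=1)
        con = await pool.acquire()

        query = asyncio.ensure_future(con.execute('SELECT pg_sleep(10)'))
        await asyncio.sleep(0.1)
        query.cancel()
        with contextlib.suppress(asyncio.CancelledError,
                                 asyncpg.QueryCanceledError):
            await query

        await pool.release(con)   # timeout defaults to None
        await pool.close()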

When running against a temporary cluster, make sure the default
superuser and database name are 'postgres'.

When PGHOST environment variable is specified, rely on the default
connection spec heuristics.

@elprans merged commit bdfdd89 into master on Nov 15, 2017
@elprans deleted the timeouts branch on November 15, 2017 20:05

Successfully merging this pull request may close these issues.

Connection not being returned to the pool after connection loss