How are Postgres server restarts handled? #421
Comments
The pool is designed to handle connections being closed and to reconnect automatically, as long as that is possible to do using the original connection parameters. The closest thing to being able to handle "on close" on individual connections currently is to subclass `Connection` and override its `_cleanup()` method.
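A minimal sketch of that suggestion, assuming the private `_cleanup()` hook discussed later in this thread is the override point; `connection_class` is a documented parameter of both `asyncpg.connect()` and `asyncpg.create_pool()`, but `_cleanup()` itself is internal and may change between asyncpg versions:

```python
import asyncpg

class WatchedConnection(asyncpg.Connection):
    """Connection subclass whose only purpose is to surface teardown."""

    def _cleanup(self):
        # _cleanup() is a private asyncpg hook; this override merely logs
        # before delegating to the original implementation.
        print('connection is being cleaned up')
        return super()._cleanup()

# The subclass can then be wired into a pool (or a single connection):
# pool = await asyncpg.create_pool(dsn, connection_class=WatchedConnection)
```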
It would seem that even that does not catch a server-side disconnect, though. A demo of it in use:
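Roughly, a demo along these lines, with a made-up DSN and a pause during which the Postgres server is restarted by hand; this is only an illustrative sketch of the experiment being described:

```python
import asyncio
import asyncpg

class TracingConnection(asyncpg.Connection):
    def _cleanup(self):
        print('_cleanup called')            # shows when the hook fires
        return super()._cleanup()

async def main():
    conn = await asyncpg.connect('postgresql://localhost/postgres',
                                 connection_class=TracingConnection)
    print('connected; restart the Postgres server now')
    await asyncio.sleep(30)                 # restart the server in this window
    print('is_closed after restart:', conn.is_closed())
    # Per the observation below, _cleanup only fires on explicit teardown.
    conn.terminate()

asyncio.run(main())
```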
As you can see, `_cleanup` was not called until I manually closed the connection, so this does not seem to be a way to handle "on close".
Hi guys,
We are using asyncpg in a web application.
Upon starting the web server, we do the following two things:
1. `db = asyncpg.create_pool` to create the connection pool for the application's DB connections.
2. `conn = await db.pool.acquire(); await conn.add_listener('foo', _callback)` to register long-lived listeners to catch and handle `NOTIFY` events from the DB (sketched below).
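A minimal sketch of that setup, with a made-up DSN; it calls `acquire()` directly on the pool object, and the callback uses the `(connection, pid, channel, payload)` signature that `add_listener` expects:

```python
import asyncpg

async def init_db():
    # Pool used for the application's ordinary queries.
    db = await asyncpg.create_pool('postgresql://app@localhost/appdb')

    def _callback(connection, pid, channel, payload):
        # Invoked whenever the server sends NOTIFY foo, '<payload>'.
        print(f'notification on {channel!r}: {payload!r}')

    # Long-lived connection dedicated to LISTEN/NOTIFY.
    conn = await db.acquire()
    await conn.add_listener('foo', _callback)
    return db, conn
```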
Recently I restarted the Postgres server and noticed something interesting. First, connections to the DB using the `db` pool object saw no interruption and were still able to connect to the DB server despite it having a new PID (since the process had been restarted).

However, the `conn` the application relied on to receive async notifications from the server was dead/closed, and so the application logic dependent upon receiving those notifications was not functioning. (This is how I reproduced this issue and discovered the cause to have been a PG server restart.)

Should I check `conn.is_closed` periodically, or is there some "on connection close" callback I can define to retry/reopen a connection (since obtaining new connections from the pool still works after restart)?
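Given the earlier reply that the closest thing to an "on close" hook is subclassing `Connection`, the periodic check could look like this sketch; it assumes that simply re-acquiring a pool connection and re-registering the listener is acceptable after a restart (channel name and interval are arbitrary):

```python
import asyncio

async def keep_listener_alive(db, channel, callback, interval=5.0):
    """Periodically check the listening connection and re-register the
    NOTIFY listener on a fresh pool connection if it has been closed."""
    conn = await db.acquire()
    await conn.add_listener(channel, callback)
    while True:
        await asyncio.sleep(interval)
        if conn.is_closed():
            # The restart killed the old connection; the pool can still hand
            # out fresh ones, so start listening again. (A real application
            # would also release the dead connection back to the pool.)
            conn = await db.acquire()
            await conn.add_listener(channel, callback)
```

The task would be started once at application startup, for example with `asyncio.create_task(keep_listener_alive(db, 'foo', _callback))`.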