This question concerns SQLAlchemy and pg8000.
I have read in many places that I should close the ResultProxy object so that the underlying connection can be returned to the pool.
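As I understand it, the recommended pattern looks roughly like this. This is a minimal sketch against an in-memory SQLite database (not my actual Postgres setup), just to show the result handling:

```python
import sqlalchemy

# In-memory SQLite stands in for Postgres here; the pattern is the same.
engine = sqlalchemy.create_engine('sqlite://')

with engine.connect() as conn:
    result = conn.execute(sqlalchemy.text("SELECT 1"))
    rows = result.fetchall()  # fully reading the result releases its cursor
    result.close()            # explicit close is harmless and makes intent clear
```

Exiting the `with` block returns the connection to the engine's pool.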
The local test database allows a maximum of 100 connections:
$ psql -h 127.0.0.1 -U postgres
Password for user postgres:
psql (9.5.5, server 9.6.0)
WARNING: psql major version 9.5, server major version 9.6.
Some psql features might not work.
Type "help" for help.
postgres=# show max_connections;
max_connections
-----------------
100
(1 row)
The following test script creates a new engine on every iteration and neither reads nor closes the ResultProxy object. It really is as bad as it can get.
The weird thing is that it does not raise a "too many connections" kind of error either. This is really confusing to me. Does SQLAlchemy perform some magic? Or is PostgreSQL actually magic?
#!/usr/bin/env python2.7
from __future__ import print_function

import sqlalchemy


def handle():
    url = 'postgresql+pg8000://{}:{}@{}:{}/{}'
    url = url.format("postgres", "pass", "127.0.0.1", "5432", "usercity")
    # A brand-new engine (and therefore a brand-new pool) on every call
    engine = sqlalchemy.create_engine(url, client_encoding='utf8')
    meta = sqlalchemy.MetaData(bind=engine, reflect=True)
    table = meta.tables['events']
    clause = table.select()
    # The ResultProxy is neither read nor closed
    result = engine.execute(clause)


if __name__ == '__main__':
    for i in range(2000):
        print(i)
        handle()
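For comparison, what I believe the correct structure to be: one engine created once and reused, with a connection checked out and returned per call. Again sketched against in-memory SQLite so it is self-contained; only the URL would differ for my Postgres database:

```python
import sqlalchemy

# One engine for the whole program: the engine owns the connection pool.
engine = sqlalchemy.create_engine('sqlite://')


def handle(engine):
    # connect() checks a connection out of the pool; leaving the `with`
    # block returns it to the pool, even if an exception is raised.
    with engine.connect() as conn:
        return conn.execute(sqlalchemy.text("SELECT 1")).fetchall()


rows = [handle(engine) for _ in range(5)]
engine.dispose()  # close the pooled connections at shutdown
```

With this structure the number of open connections stays bounded by the pool size, no matter how many times handle() runs.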