
I'm developing a webapp using Flask-SQLAlchemy and a PostgreSQL DB. I have a dropdown list in my webpage which is populated from a select to the DB, and after selecting different values a couple of times I get "sqlalchemy.exc.TimeoutError:".

My package's versions are:

Flask-SQLAlchemy==2.5.1
psycopg2-binary==2.8.6
SQLAlchemy==1.4.15

My parameters for the DB connection are set as:

app.config['SQLALCHEMY_POOL_SIZE'] = 20
app.config['SQLALCHEMY_MAX_OVERFLOW'] = 20
app.config['SQLALCHEMY_POOL_TIMEOUT'] = 5
app.config['SQLALCHEMY_POOL_RECYCLE'] = 10
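(For reference, Flask-SQLAlchemy 2.4 and later also accept the same settings bundled into a single engine-options dict; the fragment below is simply the equivalent form of the four keys above, shown as an assumption about this setup.)

```python
# Equivalent form of the four pool settings above, assuming
# Flask-SQLAlchemy >= 2.4 (purely for reference):
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
    'pool_size': 20,
    'max_overflow': 20,
    'pool_timeout': 5,
    'pool_recycle': 10,
}
```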

The error I'm getting is:

sqlalchemy.exc.TimeoutError: QueuePool limit of size 20 overflow 20 reached, connection timed out, timeout 5.00 (Background on this error at: https://sqlalche.me/e/14/3o7r)

After changing the value of 'SQLALCHEMY_MAX_OVERFLOW' from 20 to 100, I get the following error after a few value changes on the dropdown list.

psycopg2.OperationalError: connection to server at "localhost" (::1), port 5432 failed: FATAL:  sorry, too many clients already

Every time a new value is selected from the dropdown list, four queries are sent to the database, and their results populate four corresponding tables in my HTML.

I have a 'db.session.commit()' statement after every single query to the DB, but even so, I get this error after a few value changes on my dropdown list.

I know that I should be looking to correctly manage my connection sessions, but I'm struggling with this. I thought about lowering the pool timeout from the default 30s to 5s in hopes that sessions would be closed and returned to the pool faster, but it seems it didn't help.

As a suggestion from @snakecharmerb, I checked the output of:

select * from pg_stat_activity;

I ran the webapp for 10 different values before it showed me an error, which means all the 20+20 sessions were used and are left in an 'idle in transaction' state.

Does anybody have any suggestion on what I should change or look for?

  • At a guess, your sessions aren't being closed properly, so connections are checked out of the pool but never checked back in. Have you got any custom code around getting and returning sessions? What does select * from pg_stat_activity; show you? Commented Jan 7, 2022 at 18:40
  • db.session.commit() does not close the connection to the database (so your connection is probably in an idle state). To close the session's connection you should call db.session.close(). Commented Jan 7, 2022 at 20:32
  • @snakecharmerb, I set the connection limit to 20 and the max overflow to 20, and checking the pg_stat_activity I got exactly 40 'idle in transaction' connections before I received the error. I'll update the description above with that information. Commented Jan 8, 2022 at 7:27
  • Then it looks like your sessions are not being closed. Flask-SQLAlchemy should be closing (technically, removing) them automatically, so the question is why this isn't happening in your code. At this point a minimal reproducible example would be useful, because we can only guess at the cause. Commented Jan 8, 2022 at 7:33
  • @jorzel I tried to use 'db.session.close()' right after I query the DB, but the connections are still with the 'idle in transaction' state. Am I doing it wrong? Commented Jan 8, 2022 at 7:50

3 Answers


I found a solution to the issue I was facing in another post on Stack Overflow.

When you assign your Flask app to your db variable, besides indicating which Flask app it should use, you can also pass session options, as below:

from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy(app, session_options={'autocommit': True})

The usage of 'autocommit' solved my issue.

Now, as suggested, I'm using:

app.config['SQLALCHEMY_POOL_SIZE'] = 1
app.config['SQLALCHEMY_MAX_OVERFLOW'] = 0

Now everything is working as it should.

The original post which helped me is: Autocommit in Flask-SQLAlchemy

@snakecharmerb, @jorzel, @J_H -> Thanks for the help!


5 Comments

I'm glad you found good results with autocommit=True, but I encourage you to explicitly commit, at least when writing new application code. If you consult docs.sqlalchemy.org/en/14/changelog/migration_14.html you will see that the sqlalchemy authors essentially recant / reconsider some design decisions, which I find quite positive. They eschew automagic (like autocommit) and encourage the application programmer to explicitly end each transaction. Again, a with context manager can be a great tool for accomplishing that.
Hi @J_H, thank you very much for your suggestion. I implemented a new route in my Flask app which is not working with autocommit=True; when I turn it to False it does work, but the problem I had before comes back again, so I have one problem or the other... I never used a with context manager. I saw some examples of this method in the documentation link you shared; should I use it together with all the "db.session.add" statements I have in my application? I'd thank you very much if you could point me to an article about the with context manager.
Please show us the code you're running, perhaps via a github link. You probably want to add exactly one "with" context manager, in a helper routine, and then each chunk of code that does an add will call that central helper. The context manager's greatest contribution is it ensures a commit / rollback happens even after some unexpected exception happens. You make it much harder to lend a hand when you choose not to follow the stackoverflow.com/help/minimal-reproducible-example guidelines for posting example code.
Hi @J_H, thank you very much for teaching me what I should be searching for (with context manager). I solved my issue using "with db.session.begin(): db.session.add(var)". The project is huge (to me :) ), so I can't easily come up with an MRE. Sorry.
Fixed my problems as well
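
The with db.session.begin() pattern mentioned in the comments above can be sketched with plain SQLAlchemy 1.4 and an in-memory SQLite database (illustrative only; the actual project uses Flask-SQLAlchemy's db.session):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# In-memory SQLite stands in for the real PostgreSQL database here.
engine = create_engine("sqlite://", future=True)
Session = sessionmaker(bind=engine, future=True)

session = Session()
with session.begin():  # commits on success, rolls back on exception
    session.execute(text("CREATE TABLE t (x INTEGER)"))
    session.execute(text("INSERT INTO t (x) VALUES (1)"))
# The transaction has been committed at this point, so the backend no
# longer shows this session as 'idle in transaction'.
count = session.execute(text("SELECT COUNT(*) FROM t")).scalar()
session.close()  # return the underlying connection to the pool
print(count)  # 1
```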

You are leaking connections.

A little counterintuitively, you may find you obtain better results with a lower pool limit. A given python thread only needs a single pooled connection, for the simple single-database queries you're doing. Setting the limit to 1, with 0 overflow, will cause you to notice a leaked connection earlier. This makes it easier to pin the blame on the source code that leaked it. As it stands, you have lots of code, and the error is deferred until after many queries have been issued, making it harder to reason about system behavior. I will assume you're using sqlalchemy 1.4.29.

To avoid leaking, try using this:

from contextlib import closing
from sqlalchemy import create_engine, text
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine(some_url, future=True, pool_size=1, max_overflow=0)
get_session = scoped_session(sessionmaker(bind=engine))
...
with closing(get_session()) as session:  # session.close() runs on exit
    try:
        sql = """yada yada"""
        rows = session.execute(text(sql)).fetchall()
        session.commit()
        ...
        # Do stuff with result rows.
        ...
    except Exception:
        session.rollback()
        raise  # re-raise so the failure isn't silently swallowed
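
The exhausted-pool symptom itself can be reproduced in isolation. A minimal sketch, using an in-memory SQLite engine in place of PostgreSQL (the pool mechanics are the same):

```python
from sqlalchemy import create_engine
from sqlalchemy.exc import TimeoutError as PoolTimeout
from sqlalchemy.pool import QueuePool

engine = create_engine("sqlite://", poolclass=QueuePool,
                       pool_size=1, max_overflow=0, pool_timeout=1)

leaked = engine.connect()   # checks out the only pooled connection
try:
    engine.connect()        # a second checkout waits, then times out
    exhausted = False
except PoolTimeout:
    exhausted = True

leaked.close()              # closing returns the connection to the pool
recovered = engine.connect()  # now succeeds immediately
recovered.close()
print(exhausted)  # True: the pool was exhausted until close() was called
```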

2 Comments

Hi @J_H, thanks for pointing me in the right direction. Not using Flask-SQLAlchemy and moving to SQLAlchemy would mean a significant change to my code; is there a solution using the Flask-SQLAlchemy package?
Oh, pardon me. Yes, right you are. It wasn't my intent to steer you into significant changes. However, you didn't offer any "leaking code", just some configs, so I recited a familiar pattern from memory. Help me to help you. If you post a new question that demonstrates a leak, I would be happy to help with critiquing / fixing it. Be sure to tag me so I see it. stackoverflow.com/help/minimal-reproducible-example BTW, using a with context handler is kind of important, to ensure you release resources even in the presence of some accidental fatal error in a function you called.

I am using flask-restful, and I got this error: QueuePool limit of size 20 overflow 20 reached, connection timed out, timeout 5.00 (Background on this error at: https://sqlalche.me/e/14/3o7r)

I found out in the logs that my checked-out connections were not closing. I discovered this using logger.info(db_session.get_bind().pool.status())

def custom_decorator(error_message, db_session):
    def api_decorator(func):
        def api_request(self, *args, **kwargs):
            try:
                # Pass through any positional/keyword arguments.
                response = func(self, *args, **kwargs)
                db_session.commit()
                return response
            except Exception as err:
                db_session.rollback()
                logger.error(error_message.format(err))
                return error_response(
                    message="Internal Server Error",
                    status_code=HTTPStatus.INTERNAL_SERVER_ERROR,
                )
            finally:
                db_session.close()  # always return the connection to the pool
        return api_request

    return api_decorator

So I created this decorator, which handles closing db_session automatically. Using it, I no longer see any lingering checked-out connections.

You can use the decorator in your functions as follows:

@custom_decorator("blah", db_session)
def example():
    "some code"
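
The decorator's guarantee, commit on success, rollback on failure, close in every case, can be checked with a stub session. StubSession and with_session_cleanup below are illustrative names, not part of the answer's code:

```python
# Stub session that records which lifecycle methods were called.
class StubSession:
    def __init__(self):
        self.committed = False
        self.rolled_back = False
        self.closed = False
    def commit(self):
        self.committed = True
    def rollback(self):
        self.rolled_back = True
    def close(self):
        self.closed = True

# Simplified version of the decorator above (no logging / HTTP concerns).
def with_session_cleanup(db_session):
    def wrap(func):
        def inner(*args, **kwargs):
            try:
                result = func(*args, **kwargs)
                db_session.commit()
                return result
            except Exception:
                db_session.rollback()
                raise
            finally:
                db_session.close()  # always runs, success or failure
        return inner
    return wrap

session = StubSession()

@with_session_cleanup(session)
def handler():
    return "ok"

result = handler()
print(result, session.committed, session.closed)  # ok True True
```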
