
I get

TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30

I have seen similar questions, as well as their answers. I tried their suggestions and I tried to reproduce my problem in a very simplified codebase, but I can't reproduce it.

What can I do to find / fix the problem?

For example, can I get the current "usage" of the QueuePool within Python?
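Something along these lines is what I have in mind (a rough sketch, assuming db is the Flask-SQLAlchemy instance; the /pool-status route is made up, and status(), checkedin(), checkedout() and overflow() are the QueuePool inspection methods):

    from flask import Flask
    from flask_sqlalchemy import SQLAlchemy

    app = Flask(__name__)
    app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user:password@localhost/mydb'  # placeholder
    db = SQLAlchemy(app)

    @app.route('/pool-status')
    def pool_status():
        pool = db.engine.pool  # the QueuePool behind the engine
        return '\n'.join([
            pool.status(),                          # human-readable summary
            'checked out: %d' % pool.checkedout(),  # connections currently in use
            'checked in: %d' % pool.checkedin(),    # idle connections sitting in the pool
            'overflow: %d' % pool.overflow(),       # connections opened beyond pool_size
        ])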

(Also, though this probably belongs in a separate question: why shouldn't I set the pool overflow to -1, i.e. unlimited?)

What I haven't tried so far is setting app.config['SQLALCHEMY_POOL_RECYCLE'] (to be set here). I guess I would have to set it to a low value to fix this problem, but then I might get timeouts in other places, right?
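Roughly the kind of setting I mean, continuing the sketch above (the values are placeholders, not recommendations; SQLALCHEMY_POOL_TIMEOUT is the related setting behind the "timeout 30" part of the error):

    # placeholders, not recommendations
    app.config['SQLALCHEMY_POOL_RECYCLE'] = 280  # recycle connections older than 280 seconds
    app.config['SQLALCHEMY_POOL_TIMEOUT'] = 30   # seconds to wait for a free connection (the "timeout 30" in the error)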

1 Answer

  1. Run SHOW PROCESSLIST against your MySQL database to see the current connections and pool usage (see the sketch after this list for a way to run it from inside the app).

    • In PyCharm, open the "Database" sidebar (usually on the right side), then right-click on your database / schema and select "Open Console". Paste in the command above and click the green "Execute" button.
    • What I noticed was that when I was using the API, each API call would open up a separate connection, and when they hit 20 connections, the next API call would cause the error, which would say TimeoutError: QueuePool limit of size 10 overflow 10 reached (10 + 10 = 20).
    • When I got rid of threaded=True, I only ever saw one connection open.
  2. Commit or stash all of your existing changes, then create a branch and start stripping out code / simplifying your app piece-by-piece and seeing if the issue still pops up.

    • I didn't use this technique in this case, but I've used it successfully when dealing with other seriously-hard-to-track-down bugs like this.
    • Since you mentioned not being able to reproduce the issue with a simple example, I figured it'd be worth suggesting this technique as a way of arriving at a minimal example.
  3. (In combination with #2) Try reducing the SQLALCHEMY_POOL_SIZE Flask-SQLAlchemy setting to make it quicker to reproduce the bug (this is also shown in the sketch below).

    • The idea here is to make the crash happen sooner.
    • I didn't do this in this case.
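Putting #1 and #3 together, here's a rough sketch of what I mean, reusing the app / db objects from the sketch in the question and assuming Flask-SQLAlchemy on top of MySQL (the pool values are arbitrary, the /processlist route is made up, and the config has to be set before the engine is first used):

    from sqlalchemy import text

    app.config['SQLALCHEMY_POOL_SIZE'] = 2      # tiny pool so the TimeoutError shows up sooner
    app.config['SQLALCHEMY_MAX_OVERFLOW'] = 0   # no extra connections beyond the pool

    @app.route('/processlist')
    def processlist():
        # Same information as running SHOW PROCESSLIST in a MySQL console,
        # but fetched through the app's own engine / pool.
        with db.engine.connect() as conn:
            rows = conn.execute(text('SHOW PROCESSLIST')).fetchall()
        return '\n'.join(str(row) for row in rows)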

5 Comments

Hm, I'll check that tomorrow. If you don't have threaded=True, will multiple users still be able to use the site at the same time (even if there are long-running queries)?
My understanding is that the dev server will not be able to handle multiple requests concurrently, but your prod server should be using a WSGI server like gunicorn or uWSGI, which can spawn multiple instances of your app. To elaborate on "your prod server should be using a WSGI server": I looked at the Flask source, and the docstring for run() says "Do not use run() in a production setting. It is not intended to meet security and performance requirements for a production server." You can also see that stated here: flask.pocoo.org/docs/0.12/deploying
Ok, but I don't use app.run anyway (see the link); I use WSGIServer.
Hmm...then I guess all I can offer are my "How to investigate the issue" tips.
In the interest of keeping this question on-topic, I've moved the "my situation" and "how I fixed it" parts of my answer to this question: stackoverflow.com/a/47151701/4115031
