
I am developing a FastAPI server using SQLAlchemy and asyncpg to work with a PostgreSQL database. A new session is created for each request (via FastAPI dependency injection, as in the documentation). I used sqlite+aiosqlite before postgres+asyncpg and everything worked perfectly. After I switched from SQLite to Postgres, every FastAPI request crashed with the error:

sqlalchemy.dialects.postgresql.asyncpg.InterfaceError - cannot perform operation: another operation is in progress
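As I understand it, asyncpg raises this when two operations run concurrently on the same single connection. A stdlib-only toy model (the FakeConnection class is hypothetical, not asyncpg) sketches the failure mode:

```python
import asyncio


class FakeConnection:
    """Toy stand-in (not asyncpg) for a connection that allows
    only one in-flight operation at a time."""

    def __init__(self):
        self._busy = False

    async def execute(self, query):
        if self._busy:
            raise RuntimeError(
                "cannot perform operation: another operation is in progress"
            )
        self._busy = True
        try:
            await asyncio.sleep(0.01)  # simulate network I/O
            return f"ok: {query}"
        finally:
            self._busy = False


async def main():
    conn = FakeConnection()
    # Two concurrent "requests" sharing one connection: the second fails
    # while the first is still awaiting its result.
    return await asyncio.gather(
        conn.execute("SELECT 1"),
        conn.execute("SELECT 2"),
        return_exceptions=True,
    )


results = asyncio.run(main())
print(results)
```

The first call succeeds and the second comes back as the "another operation is in progress" error, which is what each FastAPI request hit once concurrent requests started sharing pooled connections.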

This is how I create the engine and sessions:

from typing import AsyncGenerator
import os

from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine


user = os.getenv('PG_USER')
password = os.getenv('PG_PASSWORD')
domain = os.getenv('PG_DOMAIN')
db = os.getenv('PG_DATABASE')

# db_async_url = f'sqlite+aiosqlite:///database.sqlite3'
db_async_url = f'postgresql+asyncpg://{user}:{password}@{domain}/{db}'

async_engine = create_async_engine(
    db_async_url, future=True, echo=True
)

create_async_session = sessionmaker(
    async_engine, class_=AsyncSession, expire_on_commit=False
)

async def get_async_session() -> AsyncGenerator[AsyncSession, None]:
    async with create_async_session() as session:
        yield session

1 Answer


The error disappeared after adding poolclass=NullPool to create_async_engine, so here's what engine creation looks like now:

from sqlalchemy.pool import NullPool

...

async_engine = create_async_engine(
    db_async_url, future=True, echo=True, poolclass=NullPool
)

I spent more than a day solving this problem, so I hope this answer saves other developers some time. There may be other solutions, and if so, I'd be glad to see them here.
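For anyone weighing the tradeoff the comments raise: NullPool closes the connection the moment it is returned, so nothing is ever reused. A minimal sketch with a synchronous in-memory SQLite engine (chosen just to make the behavior observable, not the asyncpg setup above) shows it:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool

# In-memory SQLite: the database lives only as long as its connection.
engine = create_engine("sqlite://", poolclass=NullPool)

with engine.connect() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))
    # The table is visible on this connection.
    assert conn.execute(text("SELECT name FROM sqlite_master")).scalar() == "t"

# With NullPool the connection above was closed on checkin, so a new
# checkout opens a brand-new connection -- and a fresh empty database.
with engine.connect() as conn:
    tables = conn.execute(
        text("SELECT name FROM sqlite_master WHERE type='table'")
    ).fetchall()

print(tables)  # []
```

With a reusing pool the second checkout could have received the same connection back and still seen the table; with NullPool every checkout pays the cost of a fresh connection, which is why it is usually paired with an external pooler such as PgBouncer.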


5 Comments

Also, you can try calling dispose() on your SQLAlchemy engine. See github.com/tiangolo/fastapi/issues/1800#issuecomment-1260777088
OK, but NullPool disables connection pooling, so that's not ideal.
Note that when using poolclass=NullPool: 1. it is intended for synchronous (DB-API) calls; 2. it opens and closes a connection for every query; 3. the pooling strategy becomes useless.
At least the error message changed after this.
Notes: 1. I'm using PgBouncer, so we don't need the built-in connection pooling anyway. 2. The error appeared only during the pytest run. It might be a pytest-specific issue, or I just didn't have enough load to hit it at runtime.
