I have a PHP web application with a Postgres database that receives a lot of queries per second (300-600). The database server is not very powerful, so I get a lot of query timeouts. Is there any mechanism that would queue queries before running them on the database? I picture it as a process/daemon that takes, say, 50 queries from the queue, runs them on the database, waits for all of them to finish, then takes the next 50, and so on.
1 Answer
Is there any mechanism that would queue queries before running them on the database?
Yes - PgBouncer in transaction pooling mode can queue up statements.
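A minimal sketch of a `pgbouncer.ini` for this setup (the host, database name, and auth file path are placeholders you would adapt):

```ini
[databases]
; Clients connect to PgBouncer on port 6432 instead of Postgres directly.
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

; Transaction pooling: a server connection is assigned only for the
; duration of a transaction, so excess clients queue instead of timing out.
pool_mode = transaction
max_client_conn = 600
default_pool_size = 50
```

With `default_pool_size = 50`, at most 50 transactions run on the database concurrently and the rest wait in PgBouncer's queue, which is essentially the batching behaviour described in the question.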
It's quite likely, though, that you just need to tune your database server and fix poorly performing queries. Check out pg_stat_statements, auto_explain, and EXPLAIN ANALYZE, and read the PostgreSQL wiki articles on connection counts and slow queries.
If possible, fix your client application so it issues fewer queries that each do more work - use set operations.
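For example, instead of fetching rows one at a time in a loop, fetch them all in a single set-oriented query. A sketch using PDO (assuming `$pdo` is an existing connection and a hypothetical `users` table):

```php
<?php
// N single-row queries in a loop means N network round trips:
//   foreach ($ids as $id) { ... WHERE id = ? ... }
// One IN query returns the same rows in a single round trip.
$ids = [1, 5, 9, 42];
$placeholders = implode(',', array_fill(0, count($ids), '?'));
$stmt = $pdo->prepare("SELECT id, name FROM users WHERE id IN ($placeholders)");
$stmt->execute($ids);
$users = $stmt->fetchAll(PDO::FETCH_ASSOC);
```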
For writes (INSERT/UPDATE/DELETE), batch them into transactions and, where possible, run fewer statements that each do more work. In particular, avoid executing an SQL statement in a loop where you can replace it with a single statement.
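A sketch of batching writes into one transaction (assuming `$pdo` is a PDO connection and a hypothetical `events` table): outside an explicit transaction each INSERT commits, and fsyncs, individually; inside one, they all share a single commit.

```php
<?php
// One transaction around many small inserts: one commit/fsync total
// instead of one per statement.
$pdo->beginTransaction();
$stmt = $pdo->prepare('INSERT INTO events (payload) VALUES (?)');
foreach ($batch as $payload) {
    $stmt->execute([$payload]);
}
$pdo->commit();
```

Going further, a single multi-row `INSERT ... VALUES (...), (...), ...` or `COPY` reduces the statement count as well, not just the commit count.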
If you must use lots of small writes, look at setting commit_delay, or (if you can accept a small data-loss window) setting synchronous_commit = off for less important writes.
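synchronous_commit can be changed per session or per transaction, so you can keep the default for important writes and relax it only where the trade-off is acceptable:

```sql
-- Only this transaction's commit may be lost if the server crashes
-- in the short window before the WAL is flushed; it is never corrupted.
BEGIN;
SET LOCAL synchronous_commit = off;
INSERT INTO page_view_log (url, viewed_at) VALUES ('/home', now());
COMMIT;
```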
If your queries are mostly reads, I strongly recommend introducing a mid-level cache like the Redis in-memory key/value store or Memcached. Have your application manage this cache to avoid repeatedly fetching data that doesn't have to be perfectly fresh or doesn't change very quickly. You can use PostgreSQL's LISTEN and NOTIFY features to implement fine-grained cache invalidation.
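A cache-aside sketch using the phpredis extension (assumed installed; `cachedQuery` and its parameters are hypothetical names, not an existing API):

```php
<?php
// Serve read-mostly query results from Redis; hit Postgres only when
// the key is missing or its TTL has expired.
function cachedQuery(Redis $redis, PDO $pdo, string $key, string $sql, int $ttl = 60): array
{
    $hit = $redis->get($key);
    if ($hit !== false) {
        return json_decode($hit, true);   // cache hit: no DB round trip
    }
    $rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
    $redis->setEx($key, $ttl, json_encode($rows));
    return $rows;
}
```

For fine-grained invalidation, a worker that runs `LISTEN` on a channel and calls `$redis->del($key)` when the corresponding table changes (signalled via `NOTIFY` from a trigger) removes stale entries instead of waiting for the TTL.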