
I have a server with 64GB RAM and PostgreSQL 9.2. On it is one small database "A" of only 4GB, which is queried only about once an hour, and one big database "B" of about 60GB, which gets queried 40-50 times per second!

As expected, Linux and PostgreSQL fill the RAM with the bigger database's data as it is more often accessed.

My problem now is that the queries to the small database "A" are critical and have to run in <500ms. The logfile shows a couple of queries per day that took >3s, though. If I run the same queries by hand, they take only 10ms, so my indexes are fine.

So I guess those slow queries happen when PostgreSQL has to load chunks of the small database's indexes from disk.

I already have some kind of "cache warmer" script that repeats "SELECT * FROM x ORDER BY y" queries against the small database every second, but it wastes a lot of CPU power and only improves the situation a little bit.

Any more ideas on how to tell PostgreSQL that I really want that small database "sticky" in memory?

  • What if you used an in-memory instance of SQLite to hold the 4GB database? Looking through the documentation for PostgreSQL, it doesn't look like it's really designed to run exclusively in memory, unless you run it on a RAM disk. Commented Jul 17, 2013 at 16:22
  • @RobertHarvey Pretty much; Pg doesn't offer table/index pinning at this time. Commented Jul 19, 2013 at 1:18
  • If running the queries by hand takes only 10ms, and yet your cache warmer doesn't improve the not-by-hand situation, then your cache warmer is running the wrong queries and warming the wrong part of the cache. Why the ORDER BY? Commented Mar 7, 2016 at 18:59

1 Answer


PostgreSQL doesn't offer a way to pin tables in memory, though the community would certainly welcome a well-thought-out, tested, and benchmarked proposal for allowing this from someone willing to back it with real code.

The best option you have with PostgreSQL at this time is to run a separate PostgreSQL instance for the response-time-critical database. Give this instance a shared_buffers large enough that the whole database fits in shared_buffers.
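A minimal sketch of what the dedicated instance's postgresql.conf might contain; the port and sizes are assumptions for a 4GB database on this 64GB box, not measured recommendations:

```
# postgresql.conf for the dedicated instance serving database "A"
port = 5433                  # the second instance needs its own port
shared_buffers = 6GB         # comfortably larger than the 4GB database
effective_cache_size = 8GB   # planner hint; the OS cache helps too
```

Initialize it with its own data directory (`initdb -D /path/to/pgdata-a`) and start it with `pg_ctl`; the existing instance keeps its current port and settings for database "B".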

Do NOT create a tablespace on a ramdisk or other non-durable storage and put the data that needs to be infrequently but rapidly accessed there. All tablespaces must be accessible or the whole system will stop; if you lose a tablespace, you effectively lose the whole database cluster.


2 Comments

Unfortunately, I don't think there is any guarantee the OS won't decide to page-out a mostly-idle server's shared_buffers.
@jjanes Good point, at least with POSIX shmem used on 9.3(?)+. I'm not sure sysv shmem is pageable. Maybe you'd need an extension that mlock()s shared_buffers.
