
I have a basic question about database design. We have a single PostgreSQL database with one table that is much larger than the others: an audit trail that logs any and all changes in our software. Is this unwise? Would I be better off storing this in another data store (Redis) or a separate database? Or does PostgreSQL handle this in a way that a large table won't affect read performance on the other tables? How does Postgres decide which data to cache in memory, etc.?

Sorry for such a basic and vague question; I'm not sure how else to ask this. I just don't want to get a year down the road and find we're having serious database performance issues because one of our tables is 50 GB while the rest of the database is < 1 GB.

Thanks.

  • Did you ever solve this problem? I am facing the same issue and would appreciate your suggestions. Commented Mar 28, 2016 at 4:02

1 Answer


We use table partitioning (via PostgreSQL inheritance); our partitions run to ~150 GB for one month of data. The raw size of the table isn't what matters; what matters is your queries and the indexes those queries use.
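As a rough sketch of what inheritance-based partitioning looks like (the table and column names here are hypothetical, and this is the pre-PostgreSQL-10 approach; newer versions also offer declarative `PARTITION BY RANGE`):

```sql
-- Parent table holds no data itself; monthly children inherit from it.
CREATE TABLE audit_log (
    id         bigserial,
    changed_at timestamptz NOT NULL,
    payload    jsonb
);

-- One child per month, with a CHECK constraint so the planner can
-- skip partitions that can't match (constraint_exclusion).
CREATE TABLE audit_log_2016_03 (
    CHECK (changed_at >= '2016-03-01' AND changed_at < '2016-04-01')
) INHERITS (audit_log);

CREATE INDEX ON audit_log_2016_03 (changed_at);

-- A trigger on the parent routes new rows into the current child.
CREATE OR REPLACE FUNCTION audit_log_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log_2016_03 VALUES (NEW.*);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_log_insert_trigger
    BEFORE INSERT ON audit_log
    FOR EACH ROW EXECUTE PROCEDURE audit_log_insert();
```

With this layout, a query that filters on `changed_at` only touches the matching monthly partitions, and old months can be dropped cheaply with `DROP TABLE` instead of a huge `DELETE`.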

When working on performance, always use real data, real queries, EXPLAIN, and EXPLAIN ANALYZE. Without them, you're just guessing.
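For example, against a hypothetical `audit_log` table you might check whether a typical query actually uses an index (EXPLAIN ANALYZE executes the query and reports the real row counts and timings per plan node):

```sql
-- Does this hit an index on changed_at, or fall back to a seq scan?
EXPLAIN ANALYZE
SELECT *
FROM audit_log
WHERE changed_at >= now() - interval '1 day';
```

If the plan shows a sequential scan over the whole 50 GB table rather than an index or partition scan, that, and not the table's size by itself, is the problem to fix.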

