
I'm using a Postgres database to keep track of item data across many different groups. Each group (A, B, C, ...) has items with the same IDs but with different values for some of their properties. For that reason, I was going to use an array to track item IDs with their corresponding stats and group ID.

However, I've read that arrays can be a major slowdown in Postgres. Is there a specific way I should be using arrays with Postgres, or is this kind of setup fine as it is? I would need to compare an item's stats across different groups repeatedly. There will be ~5,000 groups, ~50,000 unique items, and ~4 stats per item.

So given item A, the data would look like:

ITEM A:
[group=A, statA=592, statB=128, statC=120, statD=9]
[group=B, statA=999, statB=12, statC=491, statD=99]
...
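
To make the idea concrete, the array-based layout I have in mind would be something like the sketch below. The type and table names (group_stats, item_stats) are just placeholders, not anything that exists yet:

-- Hypothetical array-based layout: one row per item,
-- with one array element per group holding that group's stats.
CREATE TYPE group_stats AS (
    group_id text,
    stat_a   integer,
    stat_b   integer,
    stat_c   integer,
    stat_d   integer
);

CREATE TABLE item_stats (
    item_id text PRIMARY KEY,
    stats   group_stats[]  -- e.g. {"(A,592,128,120,9)","(B,999,12,491,99)"}
);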
  • It is not a performance issue; it is a normalization one. The array will make everything harder. Just do proper normalization (a sketch of that follows these comments). If you don't know what that means, then ask how to do it. Commented Feb 16, 2017 at 19:00
  • Possible duplicate of "Postgresql - performance of using array in big database". Commented Feb 16, 2017 at 19:50
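
For illustration, a minimal sketch of the normalized layout that first comment suggests: one row per (item, group) pair instead of an array per item. The table and column names here are assumptions, and comparing an item's stats across groups becomes a plain self-join:

-- Normalized alternative: one row per (item, group) combination.
CREATE TABLE item_group_stats (
    item_id  text    NOT NULL,
    group_id text    NOT NULL,
    stat_a   integer,
    stat_b   integer,
    stat_c   integer,
    stat_d   integer,
    PRIMARY KEY (item_id, group_id)
);

-- Compare item A's statA between groups A and B.
SELECT a.stat_a            AS stat_a_in_group_a,
       b.stat_a            AS stat_a_in_group_b,
       a.stat_a - b.stat_a AS diff_a
FROM item_group_stats AS a
JOIN item_group_stats AS b USING (item_id)
WHERE item_id    = 'ITEM_A'
  AND a.group_id = 'A'
  AND b.group_id = 'B';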
