
I'm trying to prevent the user from inserting more than one unique array of strings into the table.

I have created a Unique Constraint on the array: CONSTRAINT users_uniq UNIQUE(usersArray),

but the user can still insert the same values to the array but in a different order. My table:

id | usersArray
1  | {011,123}
2  | {123,011}   -- should not be possible

Input: {011,123} --> unique violation error (the right error)

Input: {123,011} --> worked (should have returned an error instead)

How can I make the value {123,011} and {011,123} considered the same?

  • You basically want to treat your array as a set. The Postgres documentation explicitly states that this could be a sign of a database misdesign. Commented Dec 30, 2021 at 14:02
  • @SebDieBln I will look into that, but the documentation says "...searching for specific array elements can be a sign of database misdesign." I do not wish to search on the array, just to make sure the client can't insert the same row twice. Commented Dec 30, 2021 at 14:20
  • Yes, you are correct. We do not have the typical case of "Give me all IDs where user X is within the usersArray". Commented Dec 30, 2021 at 15:54

2 Answers


The trigger solution is not transparent, as it actually modifies the data. Here is an alternative: create an array_sort helper function (it might be useful for other cases too) and a unique index using it.

create or replace function array_sort (arr anyarray) returns anyarray immutable as 
$$
  select array_agg(x order by x) from unnest(arr) x;
$$ language sql;    

create table t (arr integer[]);
create unique index tuix on t (array_sort(arr));

Demo

insert into t values ('{1,2,3}'); -- OK
insert into t values ('{2,1,3}'); -- unique violation
select * from t;
arr
{1,2,3}
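The same approach should carry over to the table from the question. A sketch, assuming a table named users_table (the real table name is not given in the question; the column name usersArray is taken from it):

```sql
-- Hypothetical table mirroring the question's schema
create table users_table (
  id serial primary key,
  usersArray text[]
);

-- Unique index on the sorted form of the array,
-- so element order no longer matters for uniqueness
create unique index users_table_uix on users_table (array_sort(usersArray));

insert into users_table (usersArray) values ('{011,123}');  -- OK
insert into users_table (usersArray) values ('{123,011}');  -- unique violation
```

The stored rows keep their original element order; only the index sees the sorted form.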

1 Comment

I think this is the best solution, because it is transparent (it does not modify the original data) and it is modular (it introduces a function that can be reused).

A trigger which enforces the order of the items in the array could be one approach. Here's an example:


CREATE TABLE test ( arr int ARRAY, unique (arr) );

CREATE FUNCTION test_insert_trig_func()
RETURNS trigger AS $$
BEGIN
    NEW.arr := ARRAY(SELECT unnest(NEW.arr) ORDER BY 1);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_insert_trig
BEFORE INSERT ON test
FOR EACH ROW
EXECUTE PROCEDURE test_insert_trig_func()
;

INSERT INTO test VALUES ('{1, 2}');

INSERT INTO test VALUES ('{2, 1}');  -- Generates a unique constraint violation

SELECT * FROM test;

The result:

arr
{1,2}
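Note that the trigger above fires only on INSERT, so an UPDATE that reorders an existing row's array would bypass the normalization. If updates are possible, the same function can cover them too (a sketch reusing the function above; the trigger name is an assumption):

```sql
-- Normalize array order on updates as well, using the same function
CREATE TRIGGER test_update_trig
BEFORE UPDATE ON test
FOR EACH ROW
EXECUTE PROCEDURE test_insert_trig_func();
```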

6 Comments

It did work, but I see it's looping over each row before the INSERT. Is there any other way that is more time efficient?
@shany119 The loop is checking each new row to be INSERTed. That's all. That's the way any trigger would process incoming rows, since you want each row to be checked. If your INSERT involves one row, the loop will iterate once, to process that one row.
@shany119 An alternative is to use a staging table. INSERT your incoming rows into that table. Now you can INSERT INTO target SELECT ARRAY(SELECT unnest(arr) ORDER BY 1) FROM staging; DELETE FROM staging; .. or use a MERGE operation from staging to target. That target table wouldn't need a trigger now. Just the unique constraint would be enough.
@JonArmstrong But both solutions change the original order of the array elements and that may be important.
@Stefanov.sm That is correct, and that may be acceptable. I did mention: which enforces the order of the items in the array
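The staging-table idea from the comments could be sketched like this (the staging table name and the sample values are assumptions; the target table test comes from the answer above):

```sql
-- Hypothetical staging table mirroring the target's column
CREATE TABLE staging (arr int ARRAY);

-- Load raw rows into staging without any trigger overhead
INSERT INTO staging VALUES ('{4, 3}'), ('{6, 5, 7}');

-- Normalize element order while copying into the target,
-- then clear the staging table
INSERT INTO test SELECT ARRAY(SELECT unnest(arr) ORDER BY 1) FROM staging;
DELETE FROM staging;
```

With this setup the target table needs only the unique constraint, no trigger.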
