Depending on the situation, you can create a string from the array and query it with LIKE or a regex.
What sounds like a naive approach actually turns out to be faster than the plpgsql function from the accepted answer, at least in the following situation:
-- test data: 1,000,000 rows, each holding an array of six random uppercase letters
drop table if exists temp;
create table temp as
SELECT array[
chr((floor(random()*26) + 65)::int),
chr((floor(random()*26) + 65)::int),
chr((floor(random()*26) + 65)::int),
chr((floor(random()*26) + 65)::int),
chr((floor(random()*26) + 65)::int),
chr((floor(random()*26) + 65)::int)
] as a FROM GENERATE_SERIES(1, 1000000);
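To make the comparison concrete, this is what the stringified form of a row looks like (the concrete values are hypothetical, since the generated data is random):
-- e.g. a = {Q,A,B,X,C,D}  ->  s = Q>>A>>B>>X>>C>>D
select a, array_to_string(a, '>>') as s from temp limit 1;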
Query with LIKE:
\timing
select count(*) from temp
where array_to_string(a, '>>') like '%A>>B%';
count
-------
7505
(1 row)
Time: 270,404 ms
Query with the plpgsql function:
select count(*) from temp
where index_of_subarray (a, array['A', 'B']) != 0;
count
-------
7505
(1 row)
Time: 1999,002 ms (00:01,999)
The performance difference becomes even clearer when you need to match a pattern with multiple wildcards:
select count(*) from temp
where array_to_string(a, '>>') like '%A>>B%>>C>>D%';
count
-------
7
(1 row)
Time: 173,343 ms
With the plpgsql function:
select count(*) from temp
where index_of_subarray (a, array['A', 'B']) != 0
and index_of_subarray (a, array['A', 'B']) < index_of_subarray (a, array['C', 'D']);
count
-------
7
(1 row)
Time: 1999,791 ms (00:02,000)
I'm astonished by the difference in performance.
Maybe it's because plpgsql is so much slower than the native LIKE/regex implementation, which is most probably heavily optimized.
Or maybe there is something special about this particular test setup that gives the string approach its advantage.
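For anyone who wants to reproduce the comparison: index_of_subarray refers to the plpgsql function from the accepted answer. A minimal sketch of such a function, assuming one-dimensional arrays (my own rough version, not necessarily the accepted answer's exact code), could look like this:
create or replace function index_of_subarray(haystack anyarray, needle anyarray)
returns int
language plpgsql immutable
as $$
begin
    -- slide a window of needle's length over haystack and compare slices
    for i in 1 .. coalesce(array_length(haystack, 1), 0)
                  - coalesce(array_length(needle, 1), 0) + 1 loop
        if haystack[i : i + array_length(needle, 1) - 1] = needle then
            return i;  -- 1-based position of the first match
        end if;
    end loop;
    return 0;          -- no match
end;
$$;
Called once or twice per row, this kind of interpreted loop is exactly the overhead that a single native LIKE scan avoids.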
Furthermore, to make the stringify solution reliable, you need to supply a delimiter to array_to_string that does not occur in any of the array elements. There are situations where this may not be feasible.
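As a hypothetical illustration of that pitfall: if a single element happens to contain the delimiter, the stringified form produces a false positive:
-- 'A>>B>>X' matches '%A>>B%', although the array contains neither 'A' nor 'B' as an element
select array_to_string(array['A>>B', 'X'], '>>') like '%A>>B%' as false_positive;
false_positive
----------------
t
(1 row)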
But apart from that, you may consider turning the array into a string and querying it with LIKE or a regex.
The @> operator won't help here since it doesn't respect order or adjacency; containment is a set operation.
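A small illustration of why:
-- @> only checks element containment, not order or adjacency
select array['B', 'X', 'A'] @> array['A', 'B'] as contains,
       array_to_string(array['B', 'X', 'A'], '>>') like '%A>>B%' as ordered_match;
contains | ordered_match
----------+---------------
t        | f
(1 row)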