Assuming the following setup:
create table data (id serial, kv jsonb, col1 text);
insert into data (kv, col1)
values
('[{"k1": "v1"}, {"k2": "v22"}]', 'web'),
('[{"k10": "v5"}, {"k9": "v21"}]', 'mobile'),
('[{"k1": "v1"}, {"k5": "v24"}]', 'web1'),
('[{"k5": "v1"}, {"k55": "v24"}]', 'web1');
You can get those rows by first normalizing the data and then joining the normalized data to itself. To normalize the data you need to unnest the JSON values twice: once to flatten the arrays, and a second time to extract the keys from the JSON objects:
with normalized as (
select d.id, t2.*
from data d
-- first unnest: one row per array element
join jsonb_array_elements(d.kv) as t1(kv) on true
-- second unnest: one row per key/value pair
join jsonb_each_text(t1.kv) as t2(k,val) on true
)
select n1.*
from normalized n1
where exists (select *
from normalized n2
where n1.id <> n2.id
and n1.k = n2.k);
The above returns:
id | k | val
---+----+----
1 | k1 | v1
3 | k1 | v1
3 | k5 | v24
4 | k5 | v1
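To see what the self comparison actually operates on, you can run the normalization step on its own (here with an order by added for readability):

select d.id, t2.*
from data d
join jsonb_array_elements(d.kv) as t1(kv) on true
join jsonb_each_text(t1.kv) as t2(k,val) on true
order by d.id, t2.k;

With the sample data this produces one row per key/value pair:

id | k   | val
---+-----+-----
1  | k1  | v1
1  | k2  | v22
2  | k10 | v5
2  | k9  | v21
3  | k1  | v1
3  | k5  | v24
4  | k5  | v1
4  | k55 | v24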
Or use it with an IN condition to get the original rows:
with normalized as (
select d.id, t2.*
from data d
join jsonb_array_elements(kv) as t1(kv) on true
join jsonb_each_text(t1.kv) as t2(k,val) on true
)
select *
from data
where id in (select n1.id
from normalized n1
where exists (select *
from normalized n2
where n1.id <> n2.id
and n1.k = n2.k));
This returns:
id | kv | col1
---+--------------------------------+-----
1 | [{"k1": "v1"}, {"k2": "v22"}] | web
3 | [{"k1": "v1"}, {"k5": "v24"}] | web1
4 | [{"k5": "v1"}, {"k55": "v24"}] | web1
This type of query would be easier if you didn't store the key/value pairs in an array. A single object like '{"k1": "v1", "k2": "v22"}' would make a lot more sense to me than [{"k1": "v1"}, {"k2": "v22"}].
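For example, assuming kv held one jsonb object per row instead of an array (a sketch of that hypothetical schema, not your current one), the first unnesting step disappears and the whole query collapses to a single jsonb_each_text call:

with normalized as (
select d.id, t.k
from data d
join jsonb_each_text(d.kv) as t(k,val) on true
)
select *
from data
where id in (select n1.id
from normalized n1
where exists (select *
from normalized n2
where n1.id <> n2.id
and n1.k = n2.k));

The object form also lets you use operators like kv ? 'k1' directly to test for a key, which is not possible on an array of single-key objects.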