I'm trying to adapt the solutions here (SQL Delete Rows Based on Another Table) to my needs. E.g.,
DELETE
FROM complete_set
WHERE slice_name IN (SELECT slice_name FROM changes
GROUP BY slice_name HAVING COUNT(slice_name) > 1);
Table definitions:
- Table 1 ... Name: changes; Fields: Id, slice_name, slice_value; Rows: approx. 100 thousand.
- Table 2 ... Name: complete_set; Fields: Id, slice_name, slice_value; Rows: approx. 3 million.
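For reference, the schema looks roughly like this (column types here are my guesses for illustration, and I've omitted any indexes):

```sql
CREATE TABLE changes (
    Id INTEGER PRIMARY KEY,
    slice_name TEXT,
    slice_value TEXT
);

CREATE TABLE complete_set (
    Id INTEGER PRIMARY KEY,
    slice_name TEXT,
    slice_value TEXT
);
```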
Meanwhile, running the query's components individually is extremely fast.
E.g.,
SELECT slice_name
FROM changes
GROUP BY slice_name
HAVING COUNT(slice_name) > 1;
(off-the-cuff about a second), and
DELETE FROM complete_set
WHERE slice_name = 'ABC'
(also about a second, or so)
The combined query above (with the subquery) takes too long to execute to be useful. Is there an optimization I can apply here?
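In case it clarifies what I'm after, here is the kind of thing I've been wondering about (a sketch only, assuming fairly standard SQL; the index name is made up and I haven't verified either idea at scale):

```sql
-- Idea 1: index the lookup column so the IN/EXISTS probe isn't a full scan.
CREATE INDEX idx_complete_set_slice ON complete_set (slice_name);

-- Idea 2: a correlated EXISTS formulation of the same delete condition.
DELETE FROM complete_set
WHERE EXISTS (
    SELECT 1
    FROM changes c
    WHERE c.slice_name = complete_set.slice_name
    GROUP BY c.slice_name
    HAVING COUNT(*) > 1
);
```

Would either of these (or some other rewrite) be the right direction?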
Thanks for the assist.