This sounds like an issue I've run into before (or some variation of it), but without access to your servers or the ability to reproduce it, I can't be sure.
If you have a lot of queries that select the minimum or maximum value from an indexed column in that table, they are normally satisfied almost instantly by consulting the end point of the index. But when many rows at that end have been deleted, the scan has to walk backwards through the dead entries until it finds the extreme value that is still live. That can take a while when there are that many deleted rows to walk past.

Once the deleted tuples are "dead to all" (every transaction that was open at the time of the DELETE has gone away), scans can set hint bits on the tuples themselves and on the index entries ("microvacuum", "killed items", or "index hint bits"), which solves or at least ameliorates the problem. It should then be completely resolved by vacuum, which removes not just the dead index entries but also index pages containing nothing but dead entries. But until all long-lived transactions/snapshots go away, none of these mechanisms can work, so make sure you don't have open transactions hanging around.
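If you want to check whether this is what's happening, something like the following should show you both the long-lived snapshots and the dead-tuple backlog (the table name is just a placeholder for yours):

```sql
-- Long-lived transactions/snapshots that keep deleted rows from
-- becoming "dead to all"; old xact_start values are the suspects.
SELECT pid, state, backend_xmin, xact_start
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY xact_start;

-- How many dead tuples vacuum still has to deal with on the table,
-- and when autovacuum last got through it.
SELECT relname, n_dead_tup, last_autovacuum, last_vacuum
FROM pg_stat_user_tables
WHERE relname = 'your_table';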
In summary, autovacuum isn't causing the problem. Rather, it is trying to fix the problem and just hasn't finished yet, or has been frustrated by open snapshots.
In older versions, the same thing could happen when doing cost estimates for mergejoins: the planning process did the same kind of end-point probing and suffered the same problem. I think that issue had been fixed by v14, though.
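If you're on an older version and suspect the planner-side variant, the giveaway is that the planning time balloons rather than the execution time; EXPLAIN's summary lines make the two easy to tell apart (the tables and join column here are just placeholders):

```sql
-- On an affected version, "Planning Time:" (not "Execution Time:")
-- blows up right after a bulk DELETE at the end of the join column's index.
EXPLAIN (ANALYZE)
SELECT *
FROM t1
JOIN t2 ON t1.id = t2.id;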