We have about 100 simple changes to make to our DB schema like this:
alter table transactions alter customer_sport_id type bigint;
Before it was int4. Most of the changed columns have one or more indexes.
Each is taking about 30-45 minutes on a powerful dedicated RDS instance (db.r6i.4xlarge) with no other load.
We have to commit after each statement to avoid using up the entire storage.
The problem is that it's so slow it will take days to make all the changes, and we can't be down that long.
Is there anything we can do to speed these up? E.g.
- dropping the indexes, then recreating them after the type change? (Would this actually speed it up? See the first sketch after this list.)
- disabling WAL? We're not sure whether this is feasible or how risky it is (e.g. can the DB get corrupted if the migration fails halfway through?). See the second sketch after this list.
- creating a new table, somehow copying all the old data across to it (could we do this in plain SQL, or would it need a stored procedure?), dropping the old table, then recreating the sequences and indexes on the new table? See the third sketch after this list.
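
For the index idea, this is roughly what we have in mind (the index name here is made up; we'd pull the real names from pg_indexes):

DROP INDEX idx_transactions_customer_sport_id;  -- hypothetical index name
ALTER TABLE transactions ALTER COLUMN customer_sport_id TYPE bigint;
CREATE INDEX idx_transactions_customer_sport_id ON transactions (customer_sport_id);

Our (unverified) thinking: the ALTER rewrites the table and rebuilds every index on it as part of the same statement, while a separate CREATE INDEX afterwards can at least use parallel workers.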
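
For the WAL idea, as far as we can tell Postgres has no switch to turn WAL off entirely (and RDS doesn't let us set wal_level = minimal); the closest thing we've found is flipping the table to UNLOGGED around the change:

ALTER TABLE transactions SET UNLOGGED;  -- unlogged tables skip WAL
ALTER TABLE transactions ALTER COLUMN customer_sport_id TYPE bigint;
ALTER TABLE transactions SET LOGGED;    -- flipping back WAL-logs a full rewrite

But as we understand it, each SET UNLOGGED/LOGGED is itself a table rewrite, and an unlogged table is truncated if the server crashes mid-migration, so this could easily end up slower and riskier, not faster.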
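
For the copy-to-a-new-table idea, plain SQL looks sufficient, something like this (simplified; constraints, FKs, and grants would need handling too):

BEGIN;
CREATE TABLE transactions_new (LIKE transactions INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
ALTER TABLE transactions_new ALTER COLUMN customer_sport_id TYPE bigint;
INSERT INTO transactions_new SELECT * FROM transactions;  -- int4 casts to bigint implicitly
DROP TABLE transactions;  -- NB: this drops any sequence OWNED BY its columns
ALTER TABLE transactions_new RENAME TO transactions;
COMMIT;

and then recreating the indexes (and re-pointing the id sequence) on the new table afterwards, so the index builds happen once, on already-converted data.
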
Apparently, we run VACUUM once a week.
Here are the database performance stats for the last hour (you can see from the storage being freed that two statements have completed):
