March 4, 2019 at 4:01 pm
So I've got a situation here where I'm cleaning up the mess of a previous DBA. There are two tables where a primary key backed by a non-clustered index has the exact same column(s) as a separate, non-primary-key clustered index. Unfortunately these 2 tables exist in numerous databases (more than 30).
Basically, I want to remove the clustered index and make the primary key clustered. However, the traditional way involves dropping the clustered index (both tables are over 50 million rows in each database, so that takes a while), temporarily removing all the foreign keys that reference the table, then removing the primary key. Then I'd add the primary key back with a clustered index and re-add the foreign keys. Things speed up a little if I also remove and re-add the other non-clustered indexes, but that would involve major downtime, so I really can't remove indexes. As it turns out, even though the drop and add are online operations, there is still significant blocking going on.
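For reference, the traditional swap described above looks roughly like this. All table, column, and constraint names here are placeholders, and ONLINE = ON assumes an edition that supports online index operations:

```sql
-- Hypothetical names: dbo.BigTable, its clustered index IX_BigTable_Id,
-- its PK PK_BigTable, and one referencing FK FK_Child_BigTable.

-- 1) drop the existing non-PK clustered index (data is rebuilt as a heap)
DROP INDEX IX_BigTable_Id ON dbo.BigTable WITH (ONLINE = ON);

-- 2) drop the foreign keys that reference the table, then the PK itself
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_Child_BigTable;
ALTER TABLE dbo.BigTable   DROP CONSTRAINT PK_BigTable;

-- 3) re-create the primary key as the clustered index
ALTER TABLE dbo.BigTable
    ADD CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (Id)
    WITH (ONLINE = ON);

-- 4) re-add the foreign keys (WITH CHECK revalidates every row,
--    which is the slow part on large tables)
ALTER TABLE dbo.ChildTable WITH CHECK
    ADD CONSTRAINT FK_Child_BigTable FOREIGN KEY (BigTableId)
    REFERENCES dbo.BigTable (Id);
```

Even with ONLINE = ON, steps 1 and 3 each rewrite the full 50-million-row table, which is where the hour per database goes.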
Ideas I've thought about: 1) simply remove the primary key and keep everything else (very fast to do), though I'm not sure what side effects that may have; or 2) directly update the is_primary_key column in the sys.indexes view, but I know that kind of system-table edit hasn't been possible since 2000. Since neither 1) nor 2) is ideal, I was wondering if anyone else had any other brilliant ideas. I could also copy the table and do the rename thingy with the index names and foreign key names, which is probably what I'll wind up doing, but I wanted to see if there are some better ideas out there.
Dropping the clustered index, which effectively moves all the data from the leaf level into a heap, takes close to an hour, and re-adding the primary key as a clustered index along with all the non-clustered indexes also takes close to an hour if done one at a time. That is for one database, and I have to do this for 30 of them. Unfortunately this is a company that only gets one 90-minute downtime window every 3 months without special dispensation, and that window is coming up in 2 weeks. I doubt I can fit this into the 90-minute window, especially since 90% of the databases are on one server.
March 4, 2019 at 4:18 pm
One possibility - it depends on how frequent updates/inserts/deletes are on those 2 tables.
Also assumes you have enough space available to hold an extra copy of the tables.
I've used a similar process in other cases where volumes were high and downtime was short.
The following applies to both tables:
- create a partitioned table with the same structure as the original table
- copy all data from the source table to this one
- create all required indexes, including the desired PK/clustered index; all indexes partitioned - a single partition for the whole table
at the implementation window:
- bring the db to single-user mode (or find a way to prevent any updates to the 2 tables involved)
- perform a delta update of the source table against the partitioned table
- remove all FKs referencing the original table
- truncate the original table
- drop and add the new desired indexes on the original table (very fast as the table is empty)
- add a constraint on the partition column to match the partition criteria
- switch the partitioned table into the original table - this takes milliseconds as it is a metadata-only change, and as all indexes are already in place on the partitioned table no rebuild is required
- re-add the FKs
- drop the constraint on the partition column (as it's no longer required)
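The window steps above could be sketched like this - a rough outline only, with hypothetical object names (dbo.BigTable as the original, dbo.BigTable_Staged as the pre-loaded, pre-indexed single-partition copy built before the window), and assuming SQL 2012+ for ALTER DATABASE CURRENT:

```sql
-- lock everyone else out for the duration of the window
ALTER DATABASE CURRENT SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- delta update: bring dbo.BigTable_Staged up to date with dbo.BigTable
-- (MERGE or keyed INSERT/UPDATE; details depend on the table)

-- drop FKs referencing the original, then empty it
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_Child_BigTable;
TRUNCATE TABLE dbo.BigTable;

-- rebuild the original's indexes to match the staged table's layout
-- (fast: the table is empty), add the partition-column constraint,
-- then switch the data in -- a metadata-only operation
ALTER TABLE dbo.BigTable_Staged SWITCH PARTITION 1 TO dbo.BigTable;

-- re-add FKs (WITH CHECK revalidates every row - the slow step),
-- then reopen the database
ALTER TABLE dbo.ChildTable WITH CHECK
    ADD CONSTRAINT FK_Child_BigTable FOREIGN KEY (BigTableId)
    REFERENCES dbo.BigTable (Id);
ALTER DATABASE CURRENT SET MULTI_USER;
```

SWITCH requires both sides to have identical structure and be on the same filegroup, and the target to be empty - which is exactly why the truncate and the empty-table index rebuild come first.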
Of the above, adding the FKs back is what may be slow - it will depend on volumes, the number of FKs, and the indexes in place to support them.
If you have a full copy of that db in DEV you can try it out to see how long it takes. But if all 30 DBs are on the same server you are going to face another issue (competition for server resources while rebuilding FKs and doing the deltas), and you would need more than one person doing the changes, or a multi-threaded option to kick off the steps above in parallel for multiple DBs.