July 29, 2022 at 4:56 pm
What is the general opinion on doing column datatype alters on a large table?
I've got a table (SQL 2019, if that matters), ~500M rows and ~104 GB in storage, with two columns currently typed as datetime that need to be converted to datetime2 for a new implementation requiring increased timestamp precision. I am fairly certain that doing a straight ALTER COLUMN will fill the logs/tempdb (along with all the other catalog complications that come with such a large operation) in a most undesirable way.
Would you toss new columns on the end, UPDATE TOP (n) in a loop, then rename/drop the old columns? Create a new table, insert in chunks, then rename/drop? Are there any other options that won't cause huge log operations and/or catalog locking?
Thanks in advance.
July 29, 2022 at 5:27 pm
For example purposes, I'll assume your clustered index is on a $IDENTITY column, say "id".
(10) Create a new table, with the new data types, and the clustered index (only!) from the original table.
(20) Get the max id value for the current table = @max_current_key_value.
(30) INSERT all rows into the new table WHERE id <= @max_current_key_value. (Don't do it in batches; do all rows at once, with a TABLOCK hint, which should allow minimal logging throughout, provided the db isn't in FULL recovery.)
(40) Create all the nonclus indexes on the new table.
(50) Stop other activity on the main table.
(60) Copy rows > @max_current_key_value to the new table.
(70) Rename the original table to some other name, rename the new table to the original table name.
(80) Restart activity on the table. (A rough sketch of these steps is below.)
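A minimal T-SQL sketch of steps (10) through (80), assuming a hypothetical audit table dbo.AuditLog with an identity clustered key id and two datetime columns created_at and updated_at (all names here are placeholders, not from the thread):

-- (10) New table with the widened types and the clustered index only.
CREATE TABLE dbo.AuditLog_new (
    id         int IDENTITY(1, 1) NOT NULL,
    created_at datetime2(7) NOT NULL,
    updated_at datetime2(7) NOT NULL,
    -- ...remaining columns unchanged...
    CONSTRAINT PK_AuditLog_new PRIMARY KEY CLUSTERED (id)
);

-- (20) Fix the high-water mark.
DECLARE @max_current_key_value int;
SELECT @max_current_key_value = MAX(id) FROM dbo.AuditLog;

-- (30) One big insert; TABLOCK into the empty table is what makes
--      minimal logging possible under SIMPLE or BULK_LOGGED recovery.
SET IDENTITY_INSERT dbo.AuditLog_new ON;
INSERT INTO dbo.AuditLog_new WITH (TABLOCK) (id, created_at, updated_at /* , ... */)
SELECT id, created_at, updated_at /* , ... */
FROM dbo.AuditLog
WHERE id <= @max_current_key_value;
SET IDENTITY_INSERT dbo.AuditLog_new OFF;

-- (40) Recreate the nonclustered indexes on dbo.AuditLog_new here.
-- (50) Stop other activity, then (60) copy rows > @max_current_key_value
--      the same way.

-- (70) Swap the names.
EXEC sp_rename 'dbo.AuditLog', 'AuditLog_old';
EXEC sp_rename 'dbo.AuditLog_new', 'AuditLog';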
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
July 29, 2022 at 5:48 pm
Thank you, Scott; I had not considered having a fixed start point, working from there, and then going back to pick up the new rows "after" it (step 60). This does remove the concern about accomplishing this task in a day. (This is an audit table.)
Now I just need to make sure there is an extra 500 GB available in the filegroup this table lives in ... (not my server).
Thanks for the quick answer and validation.
July 29, 2022 at 5:49 pm
PS... my version of money and happiness...
"Money can't buy happiness, but it can buy you a sailboat big enough that you can sail right up next to it."
July 29, 2022 at 6:22 pm
You're welcome.
Yeah, you'll need lots of extra disk space.
Presumably the log file is already a big size for a db that includes a table that size. But you might want to pre-allocate more space to the log if you have it available, just in case. It will take a while for the drive system to pre-format the log file space (exactly how long depends on the speed of your drives), but you do not want the log file having to grow dynamically during the INSERT: that definitely slows down the process.
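For instance, assuming a hypothetical database AuditDb with a log file named AuditDb_log, pre-growing the log ahead of time might look like this (names and size are placeholders):

-- Pre-allocate log space before the big INSERT. Log files can't use
-- instant file initialization, so the zero-fill takes real time.
ALTER DATABASE AuditDb
MODIFY FILE (NAME = AuditDb_log, SIZE = 200GB);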
Once the new INSERT is over, and you're comfortable with it, you can drop the original table. But unless the original table, or the new table, is in its own filegroup, you won't be able to reclaim the freed space at the file level. So you may want to consider putting the new table in its own filegroup, if you're allowed to and want to do that at your shop.
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
July 30, 2022 at 3:16 pm
Have you considered using BCP to export all the records, renaming the old table, creating a new table with the corrected data types, and then BCP'ing all of the records back in? A character-mode export should not produce that large a file, and a batched import will not blow up the transaction log.
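A rough command-line sketch of that approach, with placeholder server, database, and file names: -c is character mode, -T uses a trusted connection, and -b commits in batches so the log can be truncated between commits (given SIMPLE recovery, or regular log backups).

rem Export the existing rows in character mode.
bcp MyDb.dbo.AuditLog out C:\temp\AuditLog.dat -c -T -S MyServer

rem (Rename the old table; create the new one with datetime2 columns.)

rem Import in 100k-row batches.
bcp MyDb.dbo.AuditLog in C:\temp\AuditLog.dat -c -T -S MyServer -b 100000 -h "TABLOCK"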
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
July 31, 2022 at 3:49 pm
Since this is an audit table, you could get away with just creating a new table with the new structure and then swapping the new table in for the old one. Auditing would then start in the new table from that point forward, and the old table would still be available.
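The swap itself can be a near-instant metadata change; a sketch with placeholder names, assuming the new table was created as dbo.AuditLog_new:

-- Swap the tables in a short transaction so auditing moves over cleanly;
-- the old rows stay queryable under the _old name.
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.AuditLog', 'AuditLog_old';
EXEC sp_rename 'dbo.AuditLog_new', 'AuditLog';
COMMIT;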
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
July 31, 2022 at 6:22 pm
Have you already started on this? I'm asking because I see some red flags in some of the posts above, but I don't want to waste my time advising if you've already done the deed or have started in on this task.
And I like Jeffrey Williams' idea of simply starting a secondary table, in this case perhaps even using a "Partitioned VIEW" to combine the two, both for query purposes and insert purposes. It will depend a bit on what your audit triggers look like.
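A minimal sketch of such a partitioned view, assuming hypothetical tables dbo.AuditLog_old (datetime) and dbo.AuditLog (datetime2) with matching columns and an audit_date partitioning column; the CHECK constraints let the optimizer skip the table that can't qualify. (For the view to accept INSERTs as a partitioned view, there are extra requirements, e.g. the partitioning column must be part of the primary key.)

-- Old rows end at the cutover date; new rows start there.
ALTER TABLE dbo.AuditLog_old
    ADD CONSTRAINT CK_AuditLog_old CHECK (audit_date <  '20220801');
ALTER TABLE dbo.AuditLog
    ADD CONSTRAINT CK_AuditLog     CHECK (audit_date >= '20220801');
GO
CREATE VIEW dbo.AuditLog_all
AS
SELECT id, audit_date, created_at, updated_at /* , ... */ FROM dbo.AuditLog_old
UNION ALL
SELECT id, audit_date, created_at, updated_at /* , ... */ FROM dbo.AuditLog;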
--Jeff Moden
Change is inevitable... Change for the better is not.
July 31, 2022 at 6:30 pm
PS... my version of money and happiness...
"Money can't buy happiness, but it can buy you a sailboat big enough that you can sail right up next to it."
Old and quite wise saying that my parents learned the hard way...
"The two best days of owning a boat are the day you buy it and the day you get rid of it". 😀
--Jeff Moden
Change is inevitable... Change for the better is not.