August 21, 2020 at 7:32 pm
Table has 30 million rows, 11 GB in size. I need to alter one column from INT to BIGINT, and I need to keep downtime to 10 minutes max.
Presently there are some values in that column, mostly zeroes; these values are no longer needed.
If I update the column to all NULLs before altering, will my ALTER statement run fast? In other words, does the content of the column affect the speed of the ALTER operation?
August 21, 2020 at 10:39 pm
If the data is no longer needed, then create a new column with a new name, drop the old column, then rename the new column to the name of the old column.
Oh, and when you create the new column, declare it as allowing NULLs or provide a default constraint.
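As a minimal sketch of the approach above, assuming a table `dbo.MyTable` with an INT column `MyCol` (both names are placeholders, and any indexes, constraints, or defaults referencing the old column would have to be dropped first):

```sql
-- Adding a nullable BIGINT column is a metadata-only change, so it is fast.
ALTER TABLE dbo.MyTable ADD MyCol_new BIGINT NULL;

-- Dropping the old INT column is also a metadata-only change;
-- the space it occupied is reclaimed later (e.g. by an index rebuild).
ALTER TABLE dbo.MyTable DROP COLUMN MyCol;

-- Rename the new column to the old name.
EXEC sp_rename 'dbo.MyTable.MyCol_new', 'MyCol', 'COLUMN';
```

Because both ALTER statements are metadata-only, this keeps the downtime window short regardless of row count.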
August 22, 2020 at 6:35 pm
Table has 30 million rows, 11 GB in size. I need to alter one column from INT to BIGINT, and I need to keep downtime to 10 minutes max.
Presently there are some values in that column, mostly zeroes; these values are no longer needed.
If I update the column to all NULLs before altering, will my ALTER statement run fast? In other words, does the content of the column affect the speed of the ALTER operation?
Changing the 0's to NULLs isn't going to help at all, and it may break code that uses the 0's as criteria. Not to mention, future code could be a little more complex, or even non-SARGable, if you change the 0's to NULLs.
Also, changing the datatype to a larger datatype may result in massive fragmentation due to the expansion of the individual rows, even though the data itself doesn't change.
My recommendation would be to copy a half million rows from that table to a test table, set up all the same indexes, and test for these and other aspects so you know for sure.
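A minimal sketch of that test setup, using placeholder table and column names:

```sql
-- Copy a sample of rows into a test table (names are placeholders).
SELECT TOP (500000) *
INTO   dbo.MyTable_Test
FROM   dbo.MyTable;

-- Recreate the same indexes on dbo.MyTable_Test here, then time the change:
ALTER TABLE dbo.MyTable_Test ALTER COLUMN MyCol BIGINT NULL;

-- Check fragmentation afterwards.
SELECT avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(
           DB_ID(), OBJECT_ID('dbo.MyTable_Test'), NULL, NULL, 'LIMITED');
```

Scaling the measured duration and log growth up to 30 million rows gives a rough idea of whether the real change fits the downtime window.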
--Jeff Moden
Change is inevitable... Change for the better is not.
August 23, 2020 at 8:41 pm
As I have said, the old values are no longer needed, so this recommendation is not really applicable.
August 23, 2020 at 8:43 pm
My question still stands: does the column content (some values vs. all NULLs) matter when altering the column type from INT to BIGINT?
August 23, 2020 at 9:04 pm
As I have said, the old values are no longer needed, so this recommendation is not really applicable.
Actually, if you go back and read, I'm talking about the mess you might make if you change the 0's to NULLs, because you might not actually know whether the old values are used or not. Any values left in the original INT column will also be present in the column after you alter it to BIGINT.
--Jeff Moden
Change is inevitable... Change for the better is not.
August 23, 2020 at 9:07 pm
My question still stands: does the column content (some values vs. all NULLs) matter when altering the column type from INT to BIGINT?
My answer still stands... it won't bother your change from INT to BIGINT, but it may bother the hell out of some code you have if you change the 0's to NULLs. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
August 23, 2020 at 9:10 pm
P.S. Just because I'm giving you an answer you don't want to hear, or maybe don't understand the ramifications of, doesn't mean you should dismiss it. Look into what I've said, because it's going to be a real embarrassment for you if you break something you didn't expect to break, especially on a 30 million row table where you might have to restore the database to get a copy so you can fix things, if my cautionary tale turns out to be true.
If it were me, I would NOT change the 0's to NULLs during this evolution, on a bet.
I'll also state that it would take you just minutes to do a small test of this conversion (which you should) on a couple of hundred thousand rows in a test table, to make damned sure, because even monster forum Ninjas have been known to be incorrect.
--Jeff Moden
Change is inevitable... Change for the better is not.
August 24, 2020 at 1:41 am
Also, I answered saying: create a NEW column with a new name using the BIGINT data type, either nullable or with a default value; drop the old column; then rename the new column to the old column name.
August 24, 2020 at 5:53 am
If you use the SSMS table designer to change the column from INT to BIGINT and then generate the change script, you will see that, under the hood, the table is actually dropped and recreated. So NULL or 0 will have no impact.
NOTE: Doing an in-place update will take a long time, and it will blow out your log files. Depending on the size of your log files, you may even run out of drive space and trigger a full rollback. There is no guarantee that you can pull it off in under 10 minutes.
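For reference, the in-place change being warned about is the single statement below (placeholder names); it is a size-of-data operation that rewrites, and fully logs, every row:

```sql
-- Size-of-data operation: every one of the 30 million rows is rewritten and logged.
ALTER TABLE dbo.MyTable ALTER COLUMN MyCol BIGINT NULL;

-- In another session, transaction log usage can be monitored while it runs:
DBCC SQLPERF (LOGSPACE);
```

This is why the drop-and-rename approach suggested earlier in the thread is usually the safer bet for a tight downtime window.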