February 14, 2012 at 9:16 am
I have a multi-million row table in which I need to change the data type of a column from INT to BIGINT.
What is the best way to achieve this in a database that is in FULL recovery so that the transaction-log size can be managed?
We take transaction-log backups every 10 minutes.
I read this link - http://www.dbforums.com/microsoft-sql-server/1654649-changing-datatype-big-table-best-way.html - and it is suggested there to:
(1) create a new table with the correct datatype (BIGINT) for the column in question;
(2) load the new table with the data from the original table in chunks; keeping each batch in its own small transaction lets each log backup truncate the inactive log, so the log size stays manageable;
(3) rename the original table as tblName_old;
(4) rename the new table as the name of the original table;
(5) build constraints and indexes on the new table, per the original table.
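A minimal T-SQL sketch of those steps, purely for illustration - dbo.tblOrders, its columns, and the constraint names are hypothetical stand-ins for the real schema:
[code]
-- (1) New table with the widened column; the PK is created up front here
--     so the keyset loop below stays cheap (match IDENTITY, defaults etc.
--     to the original table as needed).
CREATE TABLE dbo.tblOrders_new
(
    OrderID   BIGINT   NOT NULL CONSTRAINT PK_tblOrders_new PRIMARY KEY CLUSTERED,  -- was INT
    OrderDate DATETIME NOT NULL
);

-- (2) Copy in batches; each batch commits on its own, so the 10-minute
--     log backups can truncate the log between batches.
DECLARE @BatchSize INT = 50000, @LastID BIGINT = 0;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.tblOrders_new (OrderID, OrderDate)
    SELECT TOP (@BatchSize) OrderID, OrderDate
    FROM   dbo.tblOrders
    WHERE  OrderID > @LastID
    ORDER BY OrderID;

    IF @@ROWCOUNT = 0 BREAK;

    SELECT @LastID = MAX(OrderID) FROM dbo.tblOrders_new;
END;

-- (3) + (4) Swap the tables.
EXEC sp_rename 'dbo.tblOrders', 'tblOrders_old';
EXEC sp_rename 'dbo.tblOrders_new', 'tblOrders';

-- (5) Recreate the remaining indexes, FKs and constraints per the original.
[/code]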
Any comments on this anyone?
February 15, 2012 at 3:18 am
I did this and offhand can't remember exactly what I did - I think I dropped the PK, altered the column, rebuilt the PK.
The tables in question had int PKs on an identity column.
I probably did this during a release outage where I put the db into SIMPLE recovery. Afterwards I put the dbs back into FULL recovery and reset my log shipping from a full backup.
Put it up on a test server and check; I'm sure it was a pretty painless operation, as I did several tables.
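Something along these lines (a sketch only; dbo.tblOrders and PK_tblOrders are hypothetical names for the table and its identity PK):
[code]
-- The column can't be altered while it is part of the PK, so drop that first.
ALTER TABLE dbo.tblOrders DROP CONSTRAINT PK_tblOrders;

ALTER TABLE dbo.tblOrders ALTER COLUMN OrderID BIGINT NOT NULL;

ALTER TABLE dbo.tblOrders
    ADD CONSTRAINT PK_tblOrders PRIMARY KEY CLUSTERED (OrderID);
[/code]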
The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
February 15, 2012 at 3:20 am
Sorry, I meant to say I'd never go the new-table-and-rename route - why would you want to do that to a table with, say, 80 million rows?
February 17, 2012 at 8:14 am
I would agree on this one. I would just go the route of altering the column in place; the ALTER TABLE ... ALTER COLUMN statement is supposedly best optimized to handle this sort of change.
Related to this is the challenge of how best to manage the transaction-log load and size during this process.
Would it make sense to do the following in a TEST/QA environment, in order to get a handle on how much the log size will change (if at all) when applying the same change in production?
(1) Ensure the database is in FULL recovery (if it is in FULL recovery in prod).
(2) Configure transaction-log backups with the same frequency as in prod; this will emulate the prod environment and help predict any likely log-file growth (and determine whether more disk space needs to be added in prod, before the script is applied there, to accommodate a log-file increase).
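For reference, a sketch of that setup - MyDb and the backup paths are hypothetical:
[code]
-- (1) Match the production recovery model.
ALTER DATABASE MyDb SET RECOVERY FULL;

-- A full backup starts the log chain, so log backups can begin.
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb.bak';

-- (2) Back up the log on the production cadence (every 10 minutes,
--     e.g. via a SQL Agent job), and watch the log file while the
--     change script runs:
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';

DBCC SQLPERF (LOGSPACE);  -- reports log size and percent used per database
[/code]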
Thoughts?
February 17, 2012 at 11:26 am
It makes sense to test as you have outlined.
I am thinking that you won't find much transaction-log usage. Altering a column from INT to BIGINT should only be a metadata change and quite quick.
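If you want to check that in a test copy first, one option is to run the ALTER in an open transaction and look at the log bytes it generates (a sketch; dbo.tblOrders is a hypothetical name):
[code]
BEGIN TRANSACTION;

ALTER TABLE dbo.tblOrders ALTER COLUMN OrderID BIGINT NOT NULL;

-- Log bytes generated so far by this session's open transaction.
SELECT dt.database_transaction_log_bytes_used
FROM   sys.dm_tran_session_transactions  AS st
JOIN   sys.dm_tran_database_transactions AS dt
       ON dt.transaction_id = st.transaction_id
WHERE  st.session_id = @@SPID
  AND  dt.database_id = DB_ID();

ROLLBACK;  -- roll back in test; nothing is kept
[/code]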
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
February 17, 2012 at 11:42 am
Jeffrey Williams 3188 (2/17/2012)
It makes sense to test as you have outlined. I am thinking that you won't find much transaction-log usage. Altering a column from INT to BIGINT should only be a metadata change and quite quick.
It is safe but not quick. I did it on a table that ran over 2B rows and it took 40 minutes on fast hardware.
February 17, 2012 at 4:53 pm
Revenant (2/17/2012)
Jeffrey Williams 3188 (2/17/2012)
It makes sense to test as you have outlined. I am thinking that you won't find much transaction-log usage. Altering a column from INT to BIGINT should only be a metadata change and quite quick.
It is safe but not quick. I did it on a table that ran over 2B rows and it took 40 minutes on fast hardware.
The problem is that the column is used as an FK to a bunch of other tables. So a bunch of constraints need to be dropped, the data type changed in all related tables and the constraints reapplied. On a QA machine this took 20 hrs (it's a VM, so I'll say no more). On the developer's desktop it took 7 hrs. In prod it will probably take less, but it's impossible to predict how long. I'm at least trying to get a handle on the final log size by setting up the log backups in the test env.
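If it helps, the incoming FKs can at least be inventoried from the catalog views before scripting the drops and re-creates - a sketch, with dbo.tblOrders as a hypothetical name for the referenced table:
[code]
SELECT fk.name                                                       AS constraint_name,
       OBJECT_NAME(fk.parent_object_id)                              AS referencing_table,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id)          AS referencing_column,
       OBJECT_NAME(fk.referenced_object_id)                          AS referenced_table,
       COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id)  AS referenced_column
FROM   sys.foreign_keys        AS fk
JOIN   sys.foreign_key_columns AS fkc
       ON fkc.constraint_object_id = fk.object_id
WHERE  fk.referenced_object_id = OBJECT_ID(N'dbo.tblOrders');
[/code]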
February 17, 2012 at 8:33 pm
Marios Philippopoulos (2/17/2012)
Revenant (2/17/2012)
Jeffrey Williams 3188 (2/17/2012)
It makes sense to test as you have outlined. I am thinking that you won't find much transaction-log usage. Altering a column from INT to BIGINT should only be a metadata change and quite quick.
It is safe but not quick. I did it on a table that ran over 2B rows and it took 40 minutes on fast hardware.
The problem is that the column is used as an FK to a bunch of other tables. So a bunch of constraints need to be dropped, the data type changed in all related tables and the constraints reapplied. On a QA machine this took 20 hrs (it's a VM, so I'll say no more). On the developer's desktop it took 7 hrs. In prod it will probably take less, but it's impossible to predict how long. I'm at least trying to get a handle on the final log size by setting up the log backups in the test env.
The best of luck with your project. Been there, done that - and mine was not even an FK, so I did not have to recreate one.
February 18, 2012 at 6:07 am
Thanks, I'll need all the luck I can get. 🙂