Modifying a column datatype

  • Hi Experts,

    I need to change the datatype of a column from INT to BIGINT. The column has the IDENTITY property and is the primary key of the table, backed by a non-clustered index. The column is also part of another multi-column index that has included columns.

    The table size is 756GB and it has 1,564,469,196 rows.

    Please let me know the fastest way to achieve this.

  • Any other dependencies? E.g., is the table replicated?

    You could drop the primary key and the NCIs, issue ALTER TABLE <<TABLE>> ALTER COLUMN <<Column>> BIGINT, and when that completes, recreate the PK and then the indexes.
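    That drop/alter/recreate sequence might look like this rough sketch (table, column, and index names are placeholders, not taken from this thread; the IDENTITY property survives the ALTER COLUMN):

    ```sql
    -- Placeholder names throughout; adjust to your schema.
    ALTER TABLE dbo.BigTable DROP CONSTRAINT PK_BigTable;   -- the non-clustered PK
    DROP INDEX IX_BigTable_Multi ON dbo.BigTable;           -- the multi-column NCI

    -- Size-of-data operation: every row is rewritten, so expect heavy logging.
    ALTER TABLE dbo.BigTable ALTER COLUMN Id BIGINT NOT NULL;

    ALTER TABLE dbo.BigTable
        ADD CONSTRAINT PK_BigTable PRIMARY KEY NONCLUSTERED (Id);
    CREATE INDEX IX_BigTable_Multi
        ON dbo.BigTable (Col1, Col2) INCLUDE (Id);
    ```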

    Or if space isn't an obstacle, create a new table (Table_tmp) with the proper data type, load it with the old data, then drop the existing table.
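    A minimal sketch of that new-table route, again with placeholder names (note that using CAST in the select list means the IDENTITY property is not carried over to the new table):

    ```sql
    -- SELECT...INTO can be minimally logged under the SIMPLE or
    -- BULK_LOGGED recovery model.
    SELECT CAST(Id AS BIGINT) AS Id, Col1, Col2
    INTO dbo.BigTable_tmp
    FROM dbo.BigTable;

    -- Then recreate the PK and indexes on BigTable_tmp and swap names:
    -- DROP TABLE dbo.BigTable;
    -- EXEC sp_rename 'dbo.BigTable_tmp', 'BigTable';
    ```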

    Or add a new column as bigint, drop the PK and the NCI's, update the new column with the values from the existing int column, recreate your PK, then your NCI's.
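    The new-column route could be sketched like this, backfilling in batches so each transaction (and therefore the log) stays small; names are placeholders:

    ```sql
    ALTER TABLE dbo.BigTable ADD Id_new BIGINT NULL;

    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        UPDATE TOP (100000) dbo.BigTable
        SET Id_new = Id
        WHERE Id_new IS NULL;

        SET @rows = @@ROWCOUNT;
    END;
    ```

    One caveat: SQL Server does not let you add the IDENTITY property to an existing column with ALTER COLUMN, which complicates the final swap if you need the new column to keep generating values.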

    Either route will probably take a while with such a large amount of rows.

    ______________________________________________________________________________
    Never argue with an idiot; they'll drag you down to their level and beat you with experience

  • Thanks Jessie. We have tried all of those approaches and they take a long time.

    Loading the data into a new table with the new structure failed because we ran out of space.

  • Is there any difference between loading the data using SELECT * INTO and loading it with an SSIS package?

  • To be honest, I'm not 100% sure... I guess it would depend on how you are doing it via SSIS. I would assume that with the SSIS approach you are using TABLOCK and a bulk insert?
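    For comparison, the T-SQL equivalent of a bulk-style load into an existing table hints TABLOCK on the target (sketch with placeholder names; minimal logging also depends on the recovery model being SIMPLE or BULK_LOGGED and on the state of the target table and its indexes):

    ```sql
    INSERT INTO dbo.BigTable_tmp WITH (TABLOCK) (Id, Col1, Col2)
    SELECT CAST(Id AS BIGINT), Col1, Col2
    FROM dbo.BigTable;
    ```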


  • Thanks Jessie.

    Is there any issue if I create a new table with a new name, create all the indexes, and move the data? Any problem if I rename the DB afterwards? Any issues with the indexes or constraints?

  • Modifying data structures by creating a new table, moving the data, renaming the old table, dropping all the old constraints, recreating all constraints and indexes, renaming the new table, and dropping the old table is a pretty standard mechanism. No, there are no problems with it per se. You just need to ensure that you don't forget any constraints or indexes when migrating everything over. It does mean you'll be storing roughly double the amount of data for a period of time, so you have to have that space available. It also means, if you move everything in one step, that you need enough log space to hold that transaction. Otherwise, you need to break the data move up into smaller chunks. But the overall approach is fine.
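    That chunked move could be sketched as a keyset-batched copy (placeholder names; each batch commits separately, so the log can be reused between batches under SIMPLE recovery, or kept in check with log backups under FULL):

    ```sql
    DECLARE @lastId BIGINT = 0, @batch INT = 500000;

    WHILE 1 = 1
    BEGIN
        INSERT INTO dbo.BigTable_new (Id, Col1, Col2)
        SELECT TOP (@batch) CAST(Id AS BIGINT), Col1, Col2
        FROM dbo.BigTable
        WHERE Id > @lastId
        ORDER BY Id;

        IF @@ROWCOUNT = 0 BREAK;

        SELECT @lastId = MAX(Id) FROM dbo.BigTable_new;
    END;
    ```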

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Thanks Grant...

    You are definitely giving me confidence.
