Changing an ID integer type field to a bigint field in a HUGE table

  • Hi 🙂

    I need some geniuses' enlightenment please ^^

    Here's the problem: one huge table (600 GB, close to 5.1 billion rows), badly designed, in one of our old babies (SQL 2005); it isn't partitioned, and its index file is bigger than the data file itself.. :/

    The question is: given the value limit of the integer type (roughly ±2.1 billion) and such a big table, how can we change this integer field to a bigint field in the most convenient way?
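    A quick sanity check of the remaining headroom might look like this (`dbo.HugeTable` is just a placeholder for the real table name):

    ```sql
    -- How far along is the identity? 2,147,483,647 is the int ceiling.
    -- dbo.HugeTable is a placeholder for the actual table.
    SELECT IDENT_CURRENT('dbo.HugeTable')              AS current_identity,
           2147483647 - IDENT_CURRENT('dbo.HugeTable') AS values_left;
    ```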

    I thought about creating a temporary database with a flat table containing the bigint ID field, with no constraints and no indexes, bulk inserting the last year of data only (a requirement), and reseeding it to start at the last max ID value of the old table; then creating/rebuilding the indexes and moving the files from one server to the other to replace the old table. But this could take weeks for 2 or 3 billion rows.. so if you have any idea how to do this in a faster and easier way, you're more than welcome.
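    A rough sketch of that staging plan, assuming a linked server and a `created_date` column to filter on (all object names here are placeholders):

    ```sql
    -- All object names below are placeholders.
    CREATE DATABASE StagingDB;
    GO
    USE StagingDB;
    GO
    -- Flat heap with a bigint ID: no constraints, no indexes yet.
    CREATE TABLE dbo.HugeTable_new
    (
        id           bigint IDENTITY(1,1) NOT NULL,
        created_date datetime NOT NULL
        -- ...remaining columns copied from the source table
    );
    GO
    -- Load only the last year, preserving the original IDs.
    SET IDENTITY_INSERT dbo.HugeTable_new ON;
    INSERT INTO dbo.HugeTable_new (id, created_date)
    SELECT id, created_date
    FROM   OldServer.OldDb.dbo.HugeTable
    WHERE  created_date >= DATEADD(year, -1, GETDATE());
    SET IDENTITY_INSERT dbo.HugeTable_new OFF;
    GO
    -- Reseed so new rows continue after the old table's max ID.
    DECLARE @max bigint;
    SELECT @max = MAX(id) FROM dbo.HugeTable_new;
    DBCC CHECKIDENT ('dbo.HugeTable_new', RESEED, @max);
    -- Then build the indexes and swap the table in for the old one.
    ```

    For billions of rows, bcp out/in or BULK INSERT in batches would likely be faster than a single cross-server INSERT.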

    Thank you for your time 🙂

  • Are there any dependencies on this table?

    Abhijit - http://abhijitmore.wordpress.com

  • Is there any reason you don't just alter the column?

    ALTER TABLE <tablename>
    ALTER COLUMN ID BIGINT

    And then just reseed to your new starting number.

    I've never done this on a 5 billion row table, but it works on my test table..

    -- set up a small test table with an int identity
    drop table test
    go
    create table test (id int identity(1,1), a varchar(1))
    INSERT INTO test (a) SELECT 'a'
    INSERT INTO test (a) SELECT 'b'
    INSERT INTO test (a) SELECT 'c'
    -- widen the identity column in place
    alter table test
    alter column id bigint
    -- the identity keeps counting where it left off
    INSERT INTO test (a) SELECT 'd'
    INSERT INTO test (a) SELECT 'e'
    INSERT INTO test (a) SELECT 'f'
    SELECT * FROM test

  • Thank you for a quick feedback 🙂

    -> Yes, there are a lot of objects that depend on that table, but the table does not depend on any other objects.

    -> I'm afraid an "alter table" would take weeks to process and could fail.. though I'm going to run a test on a one billion row table, for example, and I'll post back how it goes...

  • Louis Ventura (9/14/2010)


    Thank you for a quick feedback 🙂

    -> Yes, there are a lot of objects that depend on that table, but the table does not depend on any other objects.

    -> I'm afraid an "alter table" would take weeks to process and could fail.. though I'm going to run a test on a one billion row table, for example, and I'll post back how it goes...

    Ah, with the example you gave, I figured time/space wouldn't be an issue.. this will be significantly faster than copying the whole table twice, and it obviously needs much less space.

    Also, as far as I know, this only takes a Sch-M lock on the table.. so you wouldn't even necessarily need downtime, although I would highly recommend it, of course.
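    If you do try the in-place ALTER, it may help to cap how long the session waits for the Sch-M lock so it doesn't sit blocked behind other activity (the table name is a placeholder):

    ```sql
    SET LOCK_TIMEOUT 60000;  -- give up after 60 seconds if the Sch-M lock can't be acquired
    -- Restate NOT NULL: an unqualified ALTER COLUMN would otherwise leave the column nullable.
    ALTER TABLE dbo.HugeTable
    ALTER COLUMN id bigint NOT NULL;
    ```

    Keep in mind that widening int to bigint rewrites the rows, so the lock is held for the duration of the operation, not just an instant.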
