Table backup

  • Thanks to all for your help!

  • Just another note to contribute with this matter.

    If the table is too large, any changes you make are going to take a toll on your database. What I do, and I have done it with some very large tables, is this: I create the table in an empty database with no data at all. Then I make the changes, and last I import the data into the new table. If everything works fine, I drop and recreate the table in the original database and DTS the data back to the production system. Kind of complicated? Maybe, but that way I can play with the table before I do anything on the production system. I do play it safe most of the time!! (A rough sketch of the idea follows below.)

    All the other comments are very good. I just wanted to add this little input because I have quite a few very large tables, and if anyone needs to add even just an index, two things happen: first, the time it takes to build the new index, and second, your log can grow very, very fast.

    Good luck!!
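    A minimal sketch of that workflow, assuming a scratch database named Sandbox and a hypothetical table dbo.BigTable (all names here are placeholders, not from this thread):

    -- 1. Create an empty copy of the table in the scratch database (schema only, no rows)
    SELECT *
    INTO Sandbox.dbo.BigTable
    FROM Production.dbo.BigTable
    WHERE 1 = 0;

    -- 2. Make the structural changes there, e.g. add the new index
    CREATE INDEX IX_BigTable_Status ON Sandbox.dbo.BigTable (Status);

    -- 3. Load the data and verify (DTS, bcp, or a plain INSERT ... SELECT)
    INSERT INTO Sandbox.dbo.BigTable
    SELECT * FROM Production.dbo.BigTable;

    -- 4. Only after everything checks out, drop and recreate the table in production
    --    and DTS the data back.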

  • quote:


    SELECT INTO doesn't generate tlogs like an INSERT statement does.


    I've found this to be true, but ... SELECT INTO seems to always create every column with NULL allowed--so you can't create a primary key! Does anyone know of a way to make a copy of a table (while changing some or all column datatypes) without logging all the inserted rows, while still allowing NOT NULL on selected columns?
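    If it helps to see exactly what the copy ended up with, one way to check column nullability on the copy (using the tableACopy name that comes up later in this thread) is:

    -- Show each column of the copy and whether it allows NULLs
    SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'tableACopy';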

  • Why not do an ALTER TABLE ALTER COLUMN ColumnName NOT NULL after the SELECT INTO statement?

    That way you can create your primary key (a quick sketch follows below).



    Bye
    Gabor
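    A minimal sketch of that suggestion, assuming the copy is tableACopy and the key column is an int named id (hypothetical names):

    -- ALTER COLUMN needs the full column definition, including the data type
    ALTER TABLE tableACopy ALTER COLUMN id int NOT NULL;

    -- Once the column no longer allows NULLs, the primary key can be added
    ALTER TABLE tableACopy ADD CONSTRAINT PK_tableACopy PRIMARY KEY (id);

    As the next reply points out, though, that ALTER COLUMN step effectively works through the whole table again.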

  • To stop SELECT INTO from creating columns as nullable, use the ISNULL function on the columns, e.g.:

    SELECT ISNULL(id, 0) AS id, ISNULL(status, '') AS status INTO tableACopy FROM tablea

    It's a bit of a messy syntax, but it works.
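    Putting that together with the primary-key goal, a sketch of the full pattern (same hypothetical tableACopy / id names as above):

    -- ISNULL makes each expression non-nullable, so the new columns come out NOT NULL;
    -- ISNULL also returns the type of its first argument, so the column datatypes are kept
    SELECT ISNULL(id, 0) AS id,
           ISNULL(status, '') AS status
    INTO   tableACopy
    FROM   tablea;

    -- The key column is now NOT NULL, so the primary key can go straight on
    ALTER TABLE tableACopy ADD CONSTRAINT PK_tableACopy PRIMARY KEY (id);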

  • quote:


    Why not do an ALTER TABLE ALTER COLUMN ColumnName NOT NULL after the SELECT INTO statement?

    That way you can create your primary key


    Because that will copy the whole table again.

  • quote:


    Why not do an ALTER TABLE ALTER COLUMN ColumnName NOT NULL after the SELECT INTO statement?

    That way you can create your primary key


    Because that will copy the whole table again.

  • Well, now I see what happens when the system is slow and I hit Submit twice. Oops!

    quote:


    To stop SELECT INTO from creating columns as nullable, use the ISNULL function on the columns, e.g.:

    SELECT ISNULL(id, 0) AS id, ISNULL(status, '') AS status INTO tableACopy FROM tablea

    It's a bit of a messy syntax, but it works.


    Well, blow me down. I had already thought of something like that, but concluded it couldn't help because that column was ALREADY defined as NOT NULL in the original table. How could I expect any special code to improve on that?

    But your solution does indeed work! So SELECT INTO seems to be a bit on the dumb side, paying attention to most existing column attributes but ignoring NOT NULL. Many thanks!

  • Many good ideas. Whichever one you take, be mindful to check for any triggers on the table (a quick check is sketched below).

    And just the same, I'd still take a backup, even if I have a fall-back copy of the table. Nothing like hitting a dependency gotcha in the middle of an upgrade.

    Better to be safe than sorry.

    John Zacharkan
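    A quick way to list any triggers on a table before dropping or recreating it (reusing the hypothetical dbo.BigTable name from the earlier sketch):

    -- Lists the triggers defined on the table and which events they fire on
    EXEC sp_helptrigger 'dbo.BigTable';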

  • OR...

    you could do the following:

    back up the table with the usual method of making a copy within the same database, using the following:

    SELECT * INTO table_backup_copy FROM original_table

    then simply go to the original table under QA, right-click it, select the 'create' scripting option, and grab the index creation syntax from the generated script.
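    For instance, the scripted index definition might look something like this when pointed at the backup copy (index and column names are hypothetical):

    -- Re-create the original table's index on the backup copy
    CREATE INDEX IX_original_table_status ON table_backup_copy (status);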

