Best strategy for updating a table with a unique id and timestamp

  • Hi Guys,

    I'm working with SQL Server 2005 and need to deploy table changes to the live database, which has existing data. All I'm doing is adding a couple of columns to a table.

    I used to make the changes manually; that always turned out quicker when it was me making the changes. But now I have to create a deployment script and hand the deployment over to a network team. They don't have any knowledge of SQL Server, so I need to make up a script for them.

    So what I'm looking for is the best strategy for making the changes to the database table whilst maintaining the current data. The table has a unique id and a timestamp column. I also need a rollback script, just in case I need to back the changes out.

    My idea was to perform a SELECT...INTO to copy the table data, then drop and recreate the table; that way I get exactly what I need and nothing will be missed. Then I would insert the data back from the backup copy. For the back-out, I would script a drop and create of the table as it stood prior to the update, just in case. I believe this would fail, though, as the CREATE TABLE has the constraints in it, and therefore I would be unable to insert the same unique id and timestamp data as before.
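
    Roughly the plan as a sketch (dbo.mytable and its columns are hypothetical stand-ins):

    SELECT * INTO dbo.mytable_backup FROM dbo.mytable;  -- copy the data aside

    -- ...drop and re-create dbo.mytable with the new columns, then:
    SET IDENTITY_INSERT dbo.mytable ON;        -- lets the original id values back in
    INSERT INTO dbo.mytable (id, name)         -- the timestamp column must be omitted:
    SELECT id, name FROM dbo.mytable_backup;   -- explicit timestamp values can't be inserted
    SET IDENTITY_INSERT dbo.mytable OFF;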

    Is there a better/easier way to do this?

    Thanks guys.

  • For this type of change, or any other, first, backup the database.

    I'd create your new table, blank, with a different name (mytable_new). Then migrate the data into it from the old table. Make sure you have some kind of validation inside the T-SQL that this worked (row count, something). Then drop the constraints on the old table. Rename the old table to something else (mytable_old). Then rename the new table to the correct name (mytable). Then create the constraints (indexes, triggers, what have you) on the new table. Stop there.

    Now you have the new table, in place, with the correct constraints, but you've still got the old table, just in case. Once all validations are complete, no issues, everything is good, you can drop the old table as a separate step.

    That's how I'd go about it.
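
    A minimal sketch of those steps, with hypothetical names throughout (dbo.mytable, its id and name columns, the PK_mytable constraint):

    -- dbo.mytable_new was created blank, with the extra columns already in it.
    SET IDENTITY_INSERT dbo.mytable_new ON;   -- only needed if id is an IDENTITY column
    INSERT INTO dbo.mytable_new (id, name)    -- the timestamp column is omitted: it
    SELECT id, name FROM dbo.mytable;         -- regenerates and can't be copied
    SET IDENTITY_INSERT dbo.mytable_new OFF;

    -- Validate before touching the original.
    IF (SELECT COUNT(*) FROM dbo.mytable_new) <> (SELECT COUNT(*) FROM dbo.mytable)
    BEGIN
        RAISERROR('Row counts differ - stopping.', 16, 1);
        RETURN;
    END;

    -- Free the constraint name, swap the tables, re-create the constraints.
    ALTER TABLE dbo.mytable DROP CONSTRAINT PK_mytable;
    EXEC sp_rename 'dbo.mytable', 'mytable_old';
    EXEC sp_rename 'dbo.mytable_new', 'mytable';
    ALTER TABLE dbo.mytable ADD CONSTRAINT PK_mytable PRIMARY KEY (id);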

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

    I like Grant's advice. Note that if you have Enterprise Edition, I might use a database snapshot instead of a backup for a quick rollback, if that is possible.

    Note that for the drop constraints / rename tables / enable constraints steps, I'd do them in a script and, if possible, in a transaction.
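
    A minimal sketch of the snapshot route, assuming the database is called WIPTRACK and its data file has the logical name WIPTRACK_data (both assumptions; check sys.master_files for the real names):

    -- Requires Enterprise Edition. Take the snapshot just before deploying.
    CREATE DATABASE WIPTRACK_predeploy
    ON (NAME = WIPTRACK_data, FILENAME = 'D:\Snapshots\WIPTRACK_predeploy.ss')
    AS SNAPSHOT OF WIPTRACK;

    -- The quick rollback, if it's needed (takes the database offline briefly):
    -- RESTORE DATABASE WIPTRACK FROM DATABASE_SNAPSHOT = 'WIPTRACK_predeploy';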

  • Steve Jones - SSC Editor (1/3/2011)


    I like Grant's advice. Note that if you have Enterprise Edition, I might use a database snapshot instead of a backup for a quick rollback, if that is possible.

    Note that for the drop constraints / rename tables / enable constraints steps, I'd do them in a script and, if possible, in a transaction.

    Good catch, Steve. This is actually how I do it; I just keep forgetting the details of it. Yes, I back up, but I use a snapshot to do it.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Hi Guys,

    Thanks for the replies. That's the sort of thing I was looking for. What I would like to know, though: you mentioned using the snapshot, and the documentation mentions that, "during a revert operation, both the snapshot and the source database are unavailable". I wouldn't be able to do this, as the database is used by a plant in Malaysia and so is near impossible to stop for any period of time.

    Would it be easier/more efficient for me to take a backup of the tables I'm altering and manually restore my data from there, or to use the snapshot? Can snapshots be taken of selected tables to make them smaller?

    Thanks,

    Iain

  • ibzmiller (1/4/2011)


    Hi Guys,

    Thanks for the replies. That's the sort of thing I was looking for. What I would like to know, though: you mentioned using the snapshot, and the documentation mentions that, "during a revert operation, both the snapshot and the source database are unavailable". I wouldn't be able to do this, as the database is used by a plant in Malaysia and so is near impossible to stop for any period of time.

    Would it be easier/more efficient for me to take a backup of the tables I'm altering and manually restore my data from there, or to use the snapshot? Can snapshots be taken of selected tables to make them smaller?

    Thanks,

    Iain

    Snapshot and snapshot recovery minimize downtime. You would perform the snapshot, or a backup, as a safety precaution, not simply to protect the one table that you're manipulating, but just in case something you did completely destroys the database. The likelihood of this happening is almost zero. Note that one word, almost.

    As far as protecting the table goes, keeping the copy available until everything is validated is really all that you need to do (in addition to creating the snapshot, just in case). Unless you're using a third-party tool like Red Gate HyperBac or Virtual Restore, you can't simply recover a single table; you have to restore the full database to some location and then retrieve the table from it.
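
    A sketch of that restore-somewhere-else pattern; the backup path, logical file names, and restore location are all assumptions:

    -- Restore the full backup as a side-by-side copy.
    RESTORE DATABASE WIPTRACK_restore
    FROM DISK = 'D:\Backups\WIPTRACK.bak'
    WITH MOVE 'WIPTRACK_data' TO 'D:\Restore\WIPTRACK_restore.mdf',
         MOVE 'WIPTRACK_log'  TO 'D:\Restore\WIPTRACK_restore.ldf';

    -- Then retrieve just the table you need from the copy.
    SELECT *
    INTO   WIPTRACK.dbo.wp4tbl_TblMiniPacking_Log_recovered
    FROM   WIPTRACK_restore.dbo.wp4tbl_TblMiniPacking_Log;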

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • That's great Grant. Thanks so much for the information. Really helped.

    I can't see how to mark you as having answered my question. Do we do that on this site?

    But I'll do this.

    ANSWERED!

  • ibzmiller (1/4/2011)


    That's great Grant. Thanks so much for the information. Really helped.

    I can't see how to mark you as having answered my question. Do we do that on this site?

    But I'll do this.

    ANSWERED!

    Nah, we don't have that over here. That's on ask.sqlservercentral.com (another great place to go for targeted questions & targeted answers, not so much for discussion like we had here).

    Thanks though. I'm glad I could help a bit. And Steve gets credit too.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

    No problem. Yeah, definitely thanks to Steve too. Appreciate it, lads.

  • ibzmiller (1/3/2011)


    Hi Guys,

    I'm working with SQL Server 2005 and need to deploy table changes to the live database, which has existing data. All I'm doing is adding a couple of columns to a table.

    ...

    One thing I'd like to add is that it's possible to add or drop columns without going through the additional steps of re-creating the table and re-inserting the data. This is a relatively quick and safe operation. Allocated space for a dropped column won't be freed until the clustered index is rebuilt, but that will be handled during the next regularly scheduled maintenance and need not be done at the time of deployment.

    use tempdb;

    create table temp1
    (
        temp_id int not null primary key identity(1,1),
        temp_timestamp timestamp,
        temp_name varchar(120) not null
    );
    go

    insert into temp1 (temp_name) values ('Matthew');
    insert into temp1 (temp_name) values ('Mark');
    insert into temp1 (temp_name) values ('Luke');
    insert into temp1 (temp_name) values ('John');
    go

    -- existing rows pick up the default, so NOT NULL is fine here
    alter table temp1
        add status_code int not null default(0);
    go

    -- a nullable column needs no default for existing rows
    alter table temp1
        add last_name varchar(180) null;
    go
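
    And, to illustrate the drop-column point, a short sketch continuing the same example (ALTER INDEX ALL rebuilds the clustered index behind temp1's primary key):

    -- drop a column; its space stays allocated for now
    alter table temp1 drop column last_name;
    go

    -- reclaim the space during the next regular maintenance window
    alter index all on temp1 rebuild;
    go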

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Hi Eric,

    I had approached it this way initially. I had a script ready and everything. But I thought that maybe it would be cleaner/less prone to error to drop the table and re-create it in its entirety. I'm primarily a .NET developer, so I need a bit of guidance on SQL Server best practices.

    USE [WIPTRACK]
    GO
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO

    BEGIN TRY
        BEGIN TRANSACTION altrtable1;

        -- Add the NOT NULL columns with their defaults inline, so that
        -- existing rows are populated as each column is added.
        ALTER TABLE [dbo].[wp4tbl_TblMiniPacking_Log]
            ADD [reason] [nvarchar](20) NOT NULL
            CONSTRAINT [DF_wp4tbl_TblMiniPacking_Log_reason] DEFAULT (N'DELETE');

        ALTER TABLE [dbo].[wp4tbl_TblMiniPacking_Log]
            ADD [loggedby] [nvarchar](50) NOT NULL
            CONSTRAINT [DF_wp4tbl_TblMiniPacking_Log_loggedby] DEFAULT (N'CURRENT');

        EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'This field will hold the reason why the record finds itself in the logging table.', @level0type=N'SCHEMA', @level0name=N'dbo', @level1type=N'TABLE', @level1name=N'wp4tbl_TblMiniPacking_Log', @level2type=N'COLUMN', @level2name=N'reason';

        EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'This holds the user that performed the action against the wp4tbl_TblMiniPacking table.', @level0type=N'SCHEMA', @level0name=N'dbo', @level1type=N'TABLE', @level1name=N'wp4tbl_TblMiniPacking_Log', @level2type=N'COLUMN', @level2name=N'loggedby';

        COMMIT TRANSACTION altrtable1;
        SELECT 'SUCCEEDED!';
    END TRY
    BEGIN CATCH
        -- TRY/CATCH (new in SQL Server 2005) catches an error from any statement
        -- above; @@ERROR would only report on the single statement before it.
        ROLLBACK TRANSACTION altrtable1;
        SELECT 'FAILED!';
    END CATCH
    GO

    Cheers, Iain

  • ibzmiller (1/4/2011)


    Hi Eric,

    I had approached it this way initially. I had a script ready and everything. But thought that maybe it would be cleaner to delete/less prone to error and create in its entirety. I'm primarily a .net developer so need a bit of guidance on SQL Server best practices.

    I'm not sure how we're defining 'cleaner' and 'less prone to error', but that's the advantage I see in using ALTER TABLE to add the columns. I've never had a table schema modification bomb out on me when using ALTER TABLE, but it has happened a few times with the more complex create / insert / drop method, because there are just more assumptions and more things that can go wrong. You can also do things like querying the INFORMATION_SCHEMA views to verify whether objects already exist, which can make a script less prone to error.

    if not exists
    (
        select 1
        from information_schema.columns
        where table_schema = 'MySchema' and table_name = 'MyTable' and column_name = 'MyColumn'
    )
    begin
        alter table MySchema.MyTable
            add MyColumn varchar(180) null;
    end;
    GO

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Hi Eric,

    True enough, verifying the columns through the Information Schema is a good move for sure.

    Thanks
