Best Approach to Archive DB

  • A simple and quick way to archive data.

    -- Simple tables to populate with data
    CREATE TABLE [dbo].[Current MPANs] ([MPAN] [char](13) NOT NULL PRIMARY KEY)
    CREATE TABLE [dbo].[Old MPANs] ([MPAN] [char](13) NOT NULL PRIMARY KEY)
    GO

    CREATE VIEW [Archive MPANs]
    AS
    SELECT TOP 10 MPAN
    FROM dbo.[Current MPANs]
    ORDER BY MPAN
    GO

    DELETE FROM [Archive MPANs]
    OUTPUT deleted.MPAN INTO [Old MPANs] (MPAN)
    GO

    SELECT COUNT(*) FROM [Current MPANs]

    SELECT COUNT(*) FROM [Old MPANs]

    With this type of method, an index is needed to make certain that a table scan isn't done every time the view is called.

    If this is scheduled to run every minute, the impact on the server is small. Check the query plan to see what is happening. But consider that there are over 43,000 minutes in a month, so 23 rows a minute equates to about 1 million records.

    Having worked on VLDBs (or what were considered VLDBs at the time), not all solutions need the big-bang approach 😉
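    To illustrate the index point above: in the example the primary key on MPAN already supplies the index the view's `TOP 10 ... ORDER BY MPAN` needs. A hedged sketch of the variant where it would *not* come for free — archiving the oldest rows by a date column (`ArchiveDate` and the table/index names here are assumptions, not from the post):

    ```sql
    -- Hypothetical variant: the archive view picks the oldest rows by date.
    -- Without a supporting index, every call to the view scans the table.
    CREATE TABLE [dbo].[Current Readings]
    (
        [MPAN]        [char](13) NOT NULL,
        [ArchiveDate] [datetime] NOT NULL
    )
    GO

    -- Index on the ORDER BY column so TOP (n) is a small range seek
    CREATE INDEX IX_CurrentReadings_ArchiveDate
        ON [dbo].[Current Readings] ([ArchiveDate])
    GO

    CREATE VIEW [Archive Readings]
    AS
    SELECT TOP 10 MPAN, ArchiveDate
    FROM dbo.[Current Readings]
    ORDER BY ArchiveDate
    GO
    ```

    Checking the plan for a `DELETE FROM [Archive Readings]` should then show an index seek on `IX_CurrentReadings_ArchiveDate` rather than a scan.
    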

  • You do not need a view for the delete-insert; the code is even simpler:

    INSERT INTO destination
    SELECT *
    FROM
    (
        DELETE TOP (100)
        FROM source
        OUTPUT DELETED.*
    ) AS t

    Complete example:

    USE tempdb
    GO

    -- prepare test data to transfer
    SELECT id = o.object_id, o.name
    INTO source
    FROM sys.all_objects o

    -- prepare destination table: same structure, no rows
    SELECT *
    INTO destination
    FROM source
    WHERE 1 = 0 -- impossible condition
    GO

    -- start moving the rows
    DECLARE @source_rows int, @destination_rows int, @rows_per_batch int = 1000
    DECLARE @rows int = @rows_per_batch -- initialize so the first pass doesn't break out (NULL compares as UNKNOWN)
    SET NOCOUNT ON

    WHILE 1 = 1
    BEGIN
        -- print number of rows
        SELECT @source_rows = SUM(p.rows) FROM sys.partitions p WHERE p.object_id = OBJECT_ID('dbo.source')
        SELECT @destination_rows = SUM(p.rows) FROM sys.partitions p WHERE p.object_id = OBJECT_ID('dbo.destination')
        RAISERROR('source_rows = %d, destination_rows = %d', 10, 1, @source_rows, @destination_rows) WITH NOWAIT

        IF @rows < @rows_per_batch BREAK

        -- Let's do the move, babe!
        INSERT INTO destination (id, name)
        SELECT t.id, t.name
        FROM
        (
            DELETE TOP (@rows_per_batch)
            FROM source
            OUTPUT DELETED.*
        ) AS t

        SET @rows = @@ROWCOUNT
    END
    GO

    /* -- cleanup
    DROP TABLE source, destination
    */

    Output:

    source_rows = 2002, destination_rows = 0
    source_rows = 1002, destination_rows = 1000
    source_rows = 2, destination_rows = 2000
    source_rows = 0, destination_rows = 2002
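    One possible hardening of the loop above, sketched here as an assumption rather than part of the original post: put each batch in its own explicit transaction and pause briefly between batches, so the transaction log can be reused and other sessions can get in (the one-second delay and batch size are arbitrary choices):

    ```sql
    -- Sketch: batched move with per-batch transactions and a breather between batches
    DECLARE @batch int = 1000, @moved int = 1000

    WHILE 1 = 1
    BEGIN
        BEGIN TRANSACTION

        -- same composable DML pattern as above: delete a batch, insert what was deleted
        INSERT INTO destination (id, name)
        SELECT t.id, t.name
        FROM
        (
            DELETE TOP (@batch)
            FROM source
            OUTPUT DELETED.*
        ) AS t

        SET @moved = @@ROWCOUNT
        COMMIT TRANSACTION

        IF @moved < @batch BREAK

        WAITFOR DELAY '00:00:01' -- let the log truncate and other work interleave
    END
    ```

    Keeping transactions small like this bounds blocking and log growth, at the cost of the move no longer being atomic as a whole.
    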

    _____________________________________________________
    Microsoft Certified Master: SQL Server 2008
    XDetails Addin - for SQL Developers
    blog.sqlxdetails.com - Transaction log myths
  • Vedran Kesegic (3/25/2013)


    You do not need a view for the delete-insert; the code is even simpler:

        INSERT INTO destination
        SELECT *
        FROM
        (
            DELETE TOP (100)
            FROM source
            OUTPUT DELETED.*
        ) AS t

    ...

    Thanks for that. And yes, I'm kicking myself for missing this alternative.

  • runal_jagtap (3/19/2013)


    Jeff, no, we would never need data a month old; in case someone needs it we can retrieve it from the Archive DB.

    But what's the best approach? :w00t:

    Would it not be easiest in this case to back up the data on a predetermined schedule to removable media for storage, and then truncate the source tables?
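    A minimal sketch of that backup-then-truncate idea, with the database name, file path, and table name all assumed for illustration (verify the backup before truncating anything):

    ```sql
    -- Assumed names/paths: [SourceDB], E:\Archive\..., dbo.source
    -- 1. Back up the database to removable/archive media
    BACKUP DATABASE [SourceDB]
        TO DISK = N'E:\Archive\SourceDB_20130325.bak'
        WITH INIT, CHECKSUM
    GO

    -- 2. Confirm the backup is readable before destroying anything
    RESTORE VERIFYONLY
        FROM DISK = N'E:\Archive\SourceDB_20130325.bak'
    GO

    -- 3. Only then empty the source table
    TRUNCATE TABLE [SourceDB].[dbo].[source]
    GO
    ```

    Note that a truncate is minimally logged but all-or-nothing, so this only fits when the whole table's contents are genuinely disposable after backup.
    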
