question about backup files

  • Hi,

    I am trying to find the best/simplest way to perform this task:

    I have a DB to which we are making lots of changes each week (it is in the configuration phase).

    I want to find a way to generate scripts based on a comparison between the existing data and the last cut-off point, so that only the changes are applied to update the other server.

    Here is way I am trying to approach:

    1) Make a backup in week 1

    2) Compare the two DBs in week 2

    3) Use a compare tool to find/generate scripts for the differences

    4) Apply these scripts to the other server

    My questions are:

    1) Does this sound like the right way to do it?

    2) I use SQL Server 2005 to back up the DB, like this:

    BACKUP DATABASE MyDB TO DISK = 'C:\test.bak'

    But when I try to open the test.bak file I get the message "The operation could not be completed. Not enough storage is available to process this command."

    What should I do with this?

    I am new to this. Any help is appreciated

    Thanks

  • Why not just restore the database rather than try to compare schema and data?

    The "not enough storage" issue is, I'm guessing, because you are attempting to restore the database to a disk which does not have sufficient capacity to hold it. I am not quite sure what you meant by "open the .bak file".
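
    Rather than opening the .bak file, you can look inside it with T-SQL, e.g. (using the path from your post):

    RESTORE HEADERONLY FROM DISK = 'C:\test.bak';   -- what backups the file contains and when they were taken
    RESTORE FILELISTONLY FROM DISK = 'C:\test.bak'; -- the data/log files stored inside the backup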



    Shameless self-promotion - read my blog http://sirsql.net

  • Hopefully you're not using notepad/wordpad to open backup files. They're not in human readable format.

    Why not just script out the database before each deployment? Then you can use that for compares.

    If you go with SQL Compare from Red Gate, it can also compare from backup files.

    Be sure you review the scripts and don't blindly apply changes.

  • Red Gate SQL Compare (along with free Snapshot utility)

    or xSQL Object

    or script out the database and compare in VersionControl system

    etc...

    SQLServerNewbieMCITP: Database Administrator SQL Server 2005
  • Thanks all for the replies and advice.

    Why not just restore the database rather than try to compare schema and data?

    We have made a lot of updates during the week and I want to apply only the changes to the other servers.

    Hopefully you're not using notepad/wordpad to open backup files. They're not in human readable format.

    I am trying to use xSQL to open the test.bak file, which I created using the SQL Server backup function.

    I am trying to open the backed-up DB and compare it with the existing DB, and that is when I get the error message above 🙁

    If your concern is to move the incremental data from your production server to the secondary server, you can set up log shipping, where all the changes are applied to the secondary server. However, since your secondary server will be left in a restoring or read-only standby state, you won't be able to use it as a normal read-write database.

    Alternatively, you may want to set up transactional replication, where all the changes to the primary server (called the publisher) are replicated to the secondary server (called the subscriber).
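
    Under the covers, log shipping is essentially just the following, repeated on a schedule (the paths are made up, and it assumes the secondary was initialised from a full backup restored WITH NORECOVERY):

    -- On the primary: back up the transaction log
    BACKUP LOG MyDB TO DISK = 'C:\LogShip\MyDB_001.trn';

    -- Copy the file to the secondary, then on the secondary:
    RESTORE LOG MyDB FROM DISK = 'C:\LogShip\MyDB_001.trn'
    WITH STANDBY = 'C:\LogShip\MyDB_undo.dat';   -- read-only between restores
    -- (or WITH NORECOVERY if you don't need to query the secondary at all)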



    Pradeep Singh

  • Thanks SSC

    Yes, our processes are:

    1) Find the new updates since the last snapshot

    2) Create update scripts

    3) Once they get approved, apply them to the other server (we have an interface to perform this)

    I am trying to find the best (easiest) way to do the first two steps. I need update scripts, not direct updates to the other server.

  • If comparison is necessary, you can opt for the tools suggested in earlier replies, which will also give you information on schema changes.

    *If* you're concerned only about record manipulations (not schema changes) and the number of tables is small, you could consider writing triggers on each table that create a script for every insert/update/delete fired on that table and store those 'created scripts' in a 'scripts' table... however, this will introduce a heavy performance cost...
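
    Something roughly along these lines, just as a sketch - the Config table and its ConfigId/ConfigValue columns are made-up examples, and the captured "script" text here only records the keys (building fully replayable statements is the fiddly part):

    -- Table that receives one row per captured change
    CREATE TABLE dbo.Scripts
    (
        ScriptId   INT IDENTITY(1,1) PRIMARY KEY,
        CapturedAt DATETIME      NOT NULL DEFAULT GETDATE(),
        TableName  SYSNAME       NOT NULL,
        Operation  VARCHAR(10)   NOT NULL,
        ScriptText NVARCHAR(MAX) NOT NULL
    );
    GO

    -- Hypothetical configuration table being tracked
    CREATE TABLE dbo.Config
    (
        ConfigId    INT PRIMARY KEY,
        ConfigValue NVARCHAR(200) NOT NULL
    );
    GO

    CREATE TRIGGER trg_Config_Capture ON dbo.Config
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Rows inserted or updated: record the key so a script can be generated later
        INSERT dbo.Scripts (TableName, Operation, ScriptText)
        SELECT 'dbo.Config', 'INS/UPD',
               'key changed: ' + CAST(i.ConfigId AS VARCHAR(20))
        FROM inserted i;

        -- Rows deleted (and not re-inserted as part of an update)
        INSERT dbo.Scripts (TableName, Operation, ScriptText)
        SELECT 'dbo.Config', 'DELETE',
               'DELETE FROM dbo.Config WHERE ConfigId = ' + CAST(d.ConfigId AS VARCHAR(20))
        FROM deleted d
        WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.ConfigId = d.ConfigId);
    END;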

    I was just wondering... since massive DML operations are performed on your database, who's going to read each statement and approve it? And if (s)he disapproves a particular statement, your secondary database won't be a replica of your primary database.

    My other question is: since these DML operations have already been performed on the production database, why would someone want to focus more on what data is getting inserted on the secondary database?

    Going back to your first post on this thread, why do you want to script the queries that were fired? You can still make use of transaction log backups to apply the incremental data, in conjunction with a full backup (and a differential one if you've opted for it).
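
    For example, a typical backup chain looks something like this (file names are just placeholders):

    BACKUP DATABASE MyDB TO DISK = 'C:\Backups\MyDB_full.bak';                   -- e.g. weekly full
    BACKUP DATABASE MyDB TO DISK = 'C:\Backups\MyDB_diff.bak' WITH DIFFERENTIAL; -- e.g. daily differential
    BACKUP LOG MyDB TO DISK = 'C:\Backups\MyDB_log.trn';                         -- frequent log backups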



    Pradeep Singh

  • Thanks PS.

    The reason for doing this:

    1) We are initiating a new project and want to keep track of every configurable-record change we have made

    2) We have a few groups of people updating the primary DB, and we want to ensure we have covered all the changes when we generate the script to update the secondary DB (they are not "production" yet)

  • 1) We are initiating a new project and want to keep track of every configurable-record change we have made

    2) We have a few groups of people updating the primary DB, and we want to ensure we have covered all the changes when we generate the script to update the secondary DB (they are not "production" yet)

    If those are your only concerns, you can rely on backup and restore. Backups do not miss any committed transaction: all changes made to the database will be present in the backup file, and when you restore it to the secondary server you'll have a replica of the production database.
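
    For example, something like this on the secondary server (paths and logical file names are placeholders - check the real ones with RESTORE FILELISTONLY first):

    RESTORE DATABASE MyDB
    FROM DISK = 'C:\test.bak'
    WITH MOVE 'MyDB'     TO 'D:\Data\MyDB.mdf',
         MOVE 'MyDB_log' TO 'D:\Data\MyDB_log.ldf',
         REPLACE;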

    I'm wondering what you mean by 'configurable record changes'... Do you mean you are changing the server-level configuration very frequently (if yes, what are these changes?), or that you are manipulating the tables in the database? Please elaborate.



    Pradeep Singh

  • I'm confused. Are you trying to capture:

    a) data changes

    b) structure changes

    c) code (aka stored procedures, views, etc.) changes

    d) all of the above

    The methods used to capture (a) are different from the methods you want to use to capture (b) and (c), especially since the types of changes made may require 'managed' data updates to conform to the changes in (b) and (c). For instance, creating a different/new primary key for a table could force you to change data content to prevent duplicate keys. That will require coding and manual processes, rather than 'blindly' running procedures or fully automated steps.
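
    For instance, with a hypothetical Customers table where CustomerCode is to become the new key, the duplicates have to be resolved by hand before the constraint can go on:

    -- Find the values that would violate the new key
    SELECT CustomerCode, COUNT(*) AS Cnt
    FROM dbo.Customers
    GROUP BY CustomerCode
    HAVING COUNT(*) > 1;

    -- Only after the duplicates are cleaned up (and the column is NOT NULL):
    ALTER TABLE dbo.Customers
    ADD CONSTRAINT PK_Customers_CustomerCode PRIMARY KEY (CustomerCode);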

    It sounds like you are moving in the development --> production direction rather than simply making a copy of production changes for future development work.

    How do you spell 'Change Management'?

  • Thanks PS.

    My intention is not to back up the database; I am mainly working with data (sometimes it also involves schema changes). I am trying to use the backup as a set point from which I can find out what new changes have been made to date; then I need to create scripts for those changes to update/sync the other servers.
