December 14, 2006 at 4:29 pm
I'm not certain where the best place to post this is, so I figured "General" would be my best bet for now. If anyone has a better suggestion, let me know.
I was recently tasked with putting together a plan to periodically refresh the data in our beta environment so it stays in sync with our production environment. I ran into a bit of a problem with the requirement that the process allow for schema changes in the beta environment that don't yet exist in the production environment the data is coming from. Does anyone have suggestions on the best way to copy the data successfully while preserving those schema discrepancies?
December 15, 2006 at 4:16 am
I run a similar process: an automated job that does a bcp out and bcp in, driven by records in a mapping table (SourceServer varchar(100), SourceDatabase sysname, SourceTable sysname, DestinationServer varchar(100), DestinationDatabase sysname, DestinationTable sysname).
To overcome the schema differences, you're going to need to use views in the source or target environment so both sides present matching column lists.
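A rough sketch of how such a mapping table could drive the bcp pairs (the server names, data directory, and helper function here are made up for illustration; only the bcp flags themselves are real):

```python
# Hypothetical sketch: generate bcp out/in command pairs from rows of the
# mapping table described above. Server names, paths, and the helper are
# assumptions; -S (server), -T (trusted connection), -n (native format)
# are standard bcp switches.

def build_bcp_commands(row, data_dir="/tmp/bcp"):
    """Return the (export, import) command strings for one mapping row."""
    src_server, src_db, src_table, dst_server, dst_db, dst_table = row
    data_file = f"{data_dir}/{src_db}.{src_table}.dat"
    # Export the source table in native format.
    out_cmd = (f"bcp {src_db}.dbo.{src_table} out {data_file} "
               f"-S {src_server} -T -n")
    # Import the same file into the destination table (or a view over it).
    in_cmd = (f"bcp {dst_db}.dbo.{dst_table} in {data_file} "
              f"-S {dst_server} -T -n")
    return out_cmd, in_cmd

mapping = [
    ("PRODSQL01", "Sales", "Customers", "BETASQL01", "Sales", "Customers"),
]

for row in mapping:
    out_cmd, in_cmd = build_bcp_commands(row)
    print(out_cmd)
    print(in_cmd)
```

In practice the rows would come from the mapping table itself, and the destination name could point at a view that hides the beta-only columns.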
Hope it helps.
Adi
December 15, 2006 at 1:42 pm
This can be some fun stuff to try to do. Good luck!
What I've done in the past is approximately this:
1. Create a staging database on the destination server.
2. Run a script (actually a series of scripts, but logically it's one script).
3. Insert the data I want to preserve into the staging database.
4. Delete the target database. This is a safe move because the previous step gathered all of the data I need.
5. Restore the production data into the target database.
6. Apply the scripts from VSS (source control) to change the database structure to the beta version.
7. Apply the scripts that merge the data from staging into the new beta environment.
Done.
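The sequence above can be sketched as an automation outline (all database names, script paths, and the run_sql helper are placeholders, not anyone's actual implementation):

```python
# Hypothetical outline of the beta-refresh sequence above. Script names and
# the run_sql helper are assumptions; a real job would shell out to a tool
# like sqlcmd for each step.

def run_sql(script_path):
    # Placeholder: in practice this would execute the script on the server.
    print(f"applying {script_path}")

def refresh_beta(schema_scripts, merge_scripts):
    """Run the refresh steps in order and return the step log."""
    steps = []
    steps.append("preserve data into staging")   # copy beta-only data aside
    steps.append("drop beta database")           # safe: data is in staging
    steps.append("restore production backup")    # beta now matches production
    for s in schema_scripts:                     # replay change scripts from VSS
        run_sql(s)
        steps.append(f"schema: {s}")
    for s in merge_scripts:                      # merge staged data back in
        run_sql(s)
        steps.append(f"merge: {s}")
    return steps

steps = refresh_beta(["001_add_column.sql"], ["merge_customers.sql"])
```

The ordering is the whole point: schema scripts must run after the restore and before the merge, exactly as they would in a production deployment.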
Now the hard part is getting the scripts right, and getting the developers to record all of their database work as change scripts in the source repository.
This general methodology has a distinct advantage over simply moving data between environments: it is almost exactly the process that will be needed to deploy the beta into production. Not only does it refresh the data, it actually tests the deployment process.
YMMV
jg
December 15, 2006 at 1:51 pm
I forgot a couple things.
In general, I would also back up the beta before destroying it, just in case some developer forgot to get everything into source control.
I would also script out all of the table structures, UDFs, sprocs, etc., because it's easier to find a sproc in a script file than it is to restore a whole database for one sproc, especially with a large database.
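A minimal sketch of that script-out step (the object rows are stubbed here; on SQL Server they could come from a query like `SELECT o.name, m.definition FROM sys.sql_modules m JOIN sys.objects o ON o.object_id = m.object_id`, but the names and directory below are assumptions):

```python
# Hypothetical sketch: write each database object's definition to its own
# .sql file so a lost sproc can be found with a text search instead of a
# database restore. The input rows are stubbed for illustration.
import os

def script_out(objects, out_dir):
    """Write each (name, definition) pair to its own .sql file; return paths."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for name, definition in objects:
        path = os.path.join(out_dir, f"{name}.sql")
        with open(path, "w") as f:
            f.write(definition)
        paths.append(path)
    return paths

paths = script_out(
    [("usp_GetCustomer", "CREATE PROC usp_GetCustomer AS SELECT 1")],
    "scripted_objects",
)
```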
This helps weed out the developers who are too lazy to follow coding procedures. After sweating them a few times over a lost change to a sproc, they tend to get it together. heh
jg