September 14, 2012 at 8:57 am
Greetings --
I'm trying not to reinvent the wheel here, and could use some help from those of you who might have run into something like this.
We are running SQL 2008 R2 Standard.
Our company will be moving to a new production server in October. I have been tasked with figuring out a solution for refreshing our development environment from production.
A top level view of what needs to be done:
1. BACKUP databases
2. compress BACKUPs (could be part of the native SQL BACKUP, since 2008 R2 Standard supports backup compression)
3. FTP compressed BACKUPS to our development environment
4. decompress/RESTORE in the development environment
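For steps 1, 2, and 4, I'm picturing plain T-SQL along these lines (database name, paths, and logical file names are placeholders). With native compression there is no separate decompress step - RESTORE reads the compressed .bak directly:

-- production side: steps 1 and 2 in one shot
BACKUP DATABASE [ProdDB]
TO DISK = N'D:\Backups\ProdDB.bak'
WITH COMPRESSION, INIT, STATS = 10;

-- development side: step 4
RESTORE DATABASE [DevDB]
FROM DISK = N'E:\Incoming\ProdDB.bak'
WITH MOVE N'ProdDB_Data' TO N'E:\Data\DevDB.mdf',
     MOVE N'ProdDB_Log'  TO N'E:\Logs\DevDB.ldf',
     REPLACE, STATS = 10;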
The production server is available only through a VPN.
At the moment, the total compressed size of all backups is ~70GB, and is expected to grow.
The databases contain sensitive financial information, all of which will be deleted after the RESTORE. I had suggested investigating log shipping, but that won't fly: they don't want to store the sensitive data on site, which would be the case with a database restored WITH STANDBY.
Since there are dependencies, i.e. we don't want to FTP until the BACKUP is completed, and we don't want to RESTORE until the FTP has completed, I think that some form of encapsulation is required.
Perhaps a PowerShell solution might be possible, but before going too far down that road, I wanted to see how others have solved this issue.
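For what it's worth, one way I can see to encapsulate the production-side dependencies is a single Agent job whose steps only advance on success, roughly like this (job name, paths, and the FTP script are placeholders):

USE msdb;
-- step 1: compressed backup; the job only continues if this succeeds
EXEC dbo.sp_add_job @job_name = N'DevRefresh - Push';
EXEC dbo.sp_add_jobstep
    @job_name = N'DevRefresh - Push',
    @step_name = N'Backup with compression',
    @subsystem = N'TSQL',
    @command = N'BACKUP DATABASE [ProdDB] TO DISK = N''D:\Backups\ProdDB.bak'' WITH COMPRESSION, INIT;',
    @on_success_action = 3; -- 3 = go to the next step
-- step 2: push the file over the VPN (placeholder FTP script)
EXEC dbo.sp_add_jobstep
    @job_name = N'DevRefresh - Push',
    @step_name = N'FTP the compressed backup',
    @subsystem = N'CmdExec',
    @command = N'ftp -s:D:\Scripts\push_backup.ftp';
EXEC dbo.sp_add_jobserver @job_name = N'DevRefresh - Push';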
Thanks in advance --
SQLNYC
September 14, 2012 at 9:34 am
There are any number of FTP utilities you can use to accomplish this. Encrypt the backups before you move them between environments; once you get the file onto the dev environment, decrypt it and put it into another location.
Then have a SQL Server job do a check to see if the file exists in that directory. The file check job kicks off the restore job and the restore job contains a data scrubbing step, a permissions update step, and any other steps you may need.
The best way to do this is to schedule the backup transfer for a set time each day, and then give yourself some cushion, so that as the backups grow the job won't fail by trying to grab the file while it is still locked by the decrypting / moving process. From there, have a recurring job that checks for the file every X minutes within a Y-hour timeframe. We use every 30 minutes over a 4-hour timeframe, which gives the files a chance to decrypt properly.
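A minimal sketch of that file-check step, assuming the restore logic lives in its own Agent job (the path and job name below are placeholders; xp_fileexist is undocumented but widely used for this):

DECLARE @exists INT;
EXEC master.dbo.xp_fileexist N'E:\Incoming\ProdDB.bak', @exists OUTPUT;
IF @exists = 1
    -- kick off the restore/scrub job only once the file has landed
    EXEC msdb.dbo.sp_start_job @job_name = N'DevRefresh - Restore';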
September 14, 2012 at 9:50 am
You also need to take care of LOGINS/USERS.
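For example, after the restore you can report orphaned users and remap them to the dev logins along these lines (database, user, and login names are placeholders):

USE DevDB;
-- list database users whose SIDs no longer match a server login
EXEC sp_change_users_login @Action = 'Report';
-- remap one orphaned user to the matching dev login
ALTER USER [app_user] WITH LOGIN = [app_user];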
---------------------------------------------------
"Thare are only 10 types of people in the world:
Those who understand binary, and those who don't."
September 18, 2012 at 2:29 pm
Thanks very much for your replies --
SQLNYC
September 18, 2012 at 7:28 pm
You may also want to consider using WITH COPY_ONLY for the backups used to refresh dev, so you do not disturb the differential base or transaction log sequence of your primary backup/restore process.
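For example (database name and path are placeholders):

BACKUP DATABASE [ProdDB]
TO DISK = N'D:\Backups\ProdDB_copyonly.bak'
WITH COPY_ONLY, COMPRESSION, INIT;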
September 18, 2012 at 8:53 pm
Hi Tim,
Thanks for your reply - we will be using FTP to copy the regular nightly backup to our dev server, so there is no need for COPY_ONLY. But thanks for the suggestion.
Best wishes,
sqlnyc