February 4, 2011 at 9:00 am
We have our Dev, Test, and Live environments.
We want to update Dev and Test to have current data from live.
In the past we just restored Live's backup to Test/Dev.
However, Dev and Test have different stored procedures, functions, views, and columns, and certain tables (lookup tables, for example) hold data that needs to stay as it is on Dev/Test and not be overwritten.
So we had to do data/schema compares, save those scripts off, do the restore, and then re-execute the compare scripts to bring Test and Dev back to their proper structure.
Is there a BETTER way to do this?
Really, we have some 300 tables, and only 15 of them should NOT be updated with data from LIVE.
All the rest we just want to copy the data over.
Likely around 1.2 GB of data, perhaps less.
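(For the straight "copy everything except the preserved tables" part, one hedged option is to generate the per-table copy statements dynamically from `sys.tables`, skipping an exclude list. This is only a sketch: the source database name `Live_Copy` and the table names in the exclude list are placeholders, and a real version would also have to handle foreign keys, identity columns via `SET IDENTITY_INSERT`, and column differences.)

```sql
-- Sketch: generate DELETE + INSERT statements for every table in the
-- current database, skipping the ~15 tables whose Dev/Test data must be kept.
-- "Live_Copy" and the exclude-list names are placeholder assumptions.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'DELETE FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
             + N'; INSERT INTO ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
             + N' SELECT * FROM Live_Copy.' + QUOTENAME(s.name) + N'.'
             + QUOTENAME(t.name) + N';' + CHAR(10)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name NOT IN (N'LookupTableA', N'LookupTableB');  -- the preserved tables

PRINT @sql;                      -- review the generated script first
-- EXEC sys.sp_executesql @sql;  -- then run it deliberately
```

The `SELECT *` copy only works where the source and target column lists still match, which ties back to the schema-drift problem discussed below.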
I was looking at using Red Gate's data compare tool, but that appears to be a slow process.
Our tables' rows have a "LastUpdatedDate" column that stores the result of GETDATE() from whenever the last CRUD operation was run against that row.
Is there a better way to do this, e.g. leveraging that field to locate differences?
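(For what it's worth, a per-table incremental pull driven by that column could look something like the sketch below. The table `Customers`, its key `CustomerID`, its `Name` column, and the `Live_Copy` source database are all assumptions for illustration; only `LastUpdatedDate` comes from the post.)

```sql
-- Sketch: pull only rows touched on Live since the last sync,
-- using the LastUpdatedDate column described above.
-- Customers / CustomerID / Name / Live_Copy are placeholder assumptions.
DECLARE @LastSync datetime = '2011-01-01';  -- persisted from the previous run

MERGE dbo.Customers AS target
USING (SELECT CustomerID, Name, LastUpdatedDate
       FROM Live_Copy.dbo.Customers
       WHERE LastUpdatedDate > @LastSync) AS source
ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name,
               target.LastUpdatedDate = source.LastUpdatedDate
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name, LastUpdatedDate)
    VALUES (source.CustomerID, source.Name, source.LastUpdatedDate);
```

One caveat: a timestamp-driven MERGE cannot see hard deletes on Live, because the deleted row (and its LastUpdatedDate) is gone, so deletes would need separate handling.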
Thoughts, suggestions?
Is an ETL package in SSIS a better option here?
February 7, 2011 at 4:35 pm
February 7, 2011 at 8:21 pm
Maxer (2/4/2011)
Is an ETL package in SSIS a better option here?
I think so; you can map the table columns and pump the data into them.
"More Green More Oxygen !! Plant a tree today"
February 8, 2011 at 2:13 pm
If you need to sync the data once a day or at some interval, look into snapshot replication. If you need the data updated throughout the day, look into transactional replication (push or pull).
February 9, 2011 at 9:14 am
This would be more of a "once a quarter" or twice a year sort of setup.
If it were more frequent, then I agree: replication, log shipping, or something like that would make sense.
ETL using SSIS would be nice, but columns will be added over time, so I would have to set it up to "automap" or something; otherwise I'd be tracking down changed columns on a regular basis.
February 10, 2011 at 4:24 am
Transactional replication is what we use for a similar scenario.
September 16, 2011 at 2:29 pm
Does that allow for tables with differing structures?
One issue is that on test we may have added a new column, but we would have not done so yet on production.
I am thinking I would need an "exclude" list to deal with differences and handle those manually.
Perhaps an SSIS package to loop through and move the data once a month; if it encounters differences in table structure, it would write that to a log for review, etc.
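(The structure check that package would need can be done with a plain metadata query before each table load. A sketch, assuming the restored Live copy and the Test database are named `Live_Copy` and `Test`; both names are placeholders.)

```sql
-- Sketch: list columns that exist in one database but not the other,
-- so mismatched tables can be logged for review instead of loaded.
-- "Live_Copy" and "Test" are assumed database names.
SELECT ISNULL(l.TABLE_NAME, t.TABLE_NAME)   AS TableName,
       ISNULL(l.COLUMN_NAME, t.COLUMN_NAME) AS ColumnName,
       CASE WHEN l.COLUMN_NAME IS NULL
            THEN 'Test only' ELSE 'Live only' END AS DifferenceSide
FROM Live_Copy.INFORMATION_SCHEMA.COLUMNS AS l
FULL OUTER JOIN Test.INFORMATION_SCHEMA.COLUMNS AS t
    ON  t.TABLE_SCHEMA = l.TABLE_SCHEMA
    AND t.TABLE_NAME   = l.TABLE_NAME
    AND t.COLUMN_NAME  = l.COLUMN_NAME
WHERE l.COLUMN_NAME IS NULL
   OR t.COLUMN_NAME IS NULL;
```

Any table that shows up in the result set would go to the review log; the rest are safe to load with a straight column-for-column copy.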
September 17, 2011 at 6:02 pm
Keep your data separate: put all your views, stored procedures, etc. in another database called test_custom_db, with the data itself stored in test_db; then the data can be refreshed from production. But then, if you're testing, wouldn't the data be different?
September 18, 2011 at 4:10 am
I would go with replication for the non-modified tables.
ETL (SSIS?) for the modified ones.
The last setup of that kind I saw was a weekly restore from live to dev.
Everything the developers wanted kept would be put in another database, so they could restore (parts of) it after the sync.
On second thought, how do you deal with the daily refreshes? With replication there is also a chance the production transaction log will fill up if dev/test is offline.