July 9, 2009 at 6:02 am
We have a DB of 150+ GB. 90% of the database is stored in two tables which are not needed 90% of the time, i.e. when we send a copy of the database to the vendor for support/dev reasons, or when we back up and restore to other instances.
Everything currently sits in one filegroup. My proposal is to split it into two filegroups, putting the two large tables into their own group. I will then back up/restore only the main filegroup for the 90% of the time I don't need the two tables of data.
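For anyone following along, the split itself is just a filegroup plus a file added to the database. A minimal sketch (all names, sizes and paths here are illustrative, not from my actual setup):

```sql
-- Add a second filegroup to hold the two large tables.
ALTER DATABASE MyDB ADD FILEGROUP DocsFG;

-- Add a data file to the new filegroup (path/size are examples only).
ALTER DATABASE MyDB
ADD FILE (
    NAME = 'MyDB_Docs',
    FILENAME = 'D:\SQLData\MyDB_Docs.ndf',
    SIZE = 100GB
) TO FILEGROUP DocsFG;
```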
My question is this: it's easily achieved by marking the secondary filegroup as offline in the restored environment, but what I'm unclear on is that I want this filegroup online, just with empty tables in it. Will it allow me to restore the second file with no data in the two tables? The application would not error, as it would just return no rows (there are no foreign keys to anything in the main filegroup), but it would allow extra data, such as test records, to be added.
In addition to this, to move the tables into the new filegroup I can't just rebuild the indexes, as the tables have an image column in them. What is the best way to achieve this with a large table (large binary data, not large row count)? INSERT INTO?
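The reason a rebuild alone isn't enough: the image (LOB) data stays where TEXTIMAGE_ON originally put it, so the usual workaround is to recreate the table on the target filegroup and copy the rows across. A sketch, with hypothetical table and column names:

```sql
-- New copy of the table, with both the rows and the image (LOB) data
-- placed on the new filegroup via ON / TEXTIMAGE_ON.
CREATE TABLE dbo.Docs_New (
    DocID         INT   NOT NULL PRIMARY KEY,
    TransactionID INT   NOT NULL,
    DocBody       IMAGE NULL
) ON DocsFG TEXTIMAGE_ON DocsFG;

-- Copy the data across (for very large tables, do this in batches
-- to keep the transaction log under control).
INSERT INTO dbo.Docs_New (DocID, TransactionID, DocBody)
SELECT DocID, TransactionID, DocBody
FROM dbo.Docs;

-- Swap the tables once the copy is verified.
EXEC sp_rename 'dbo.Docs', 'Docs_Old';
EXEC sp_rename 'dbo.Docs_New', 'Docs';
```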
Thanks,
Steve.
July 10, 2009 at 9:05 am
Could you consider partitioning your tables?
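Something along these lines, purely as an illustration (the boundary values and filegroup names are made up): a partition function and scheme can place older rows on a separate filegroup.

```sql
-- Partition by date so older document rows land on an archive filegroup.
CREATE PARTITION FUNCTION pfDocsByYear (DATETIME)
AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01');

-- Three partitions (two boundaries) mapped to filegroups.
CREATE PARTITION SCHEME psDocsByYear
AS PARTITION pfDocsByYear TO (ArchiveFG, ArchiveFG, [PRIMARY]);
```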
July 10, 2009 at 9:58 am
Does the data in the two tables change often? If so, I don't think this works.
You'd be better off restoring a copy of the database and then truncating those two tables.
July 10, 2009 at 3:42 pm
The production database has to have all the records in it, so partitioning doesn't work: you would still have to restore the same configuration, i.e. the same size, to the other environments. 'The management' don't wish to invest more money in SAN storage either, as it's arguably wasted and very expensive. We have a high level of change, so we have several copies of these databases supporting parallel dev.
The two tables are used for every transaction and store documentation related to each. We are also restricted to changes in the way the core application works as it's a package solution.
The time taken to restore the full database and copy it around our network is huge. I've just looked and it's over 200 GB now. It's made worse because our datacentre doesn't have any SQL Servers big enough, so we have to delete the data on a small box under a desk with cheap, slow storage.
I have had a play around this afternoon and found a way forward. I'm going to write it up and create an article, as I haven't found any other articles on this.
What you can't do:
- you can't restore secondary filegroups which don't match the primary, i.e. out-of-sequence restores.
- you can't keep a secondary file's contents offline and replace the object with an empty one, as the object still exists.
My solution:
Move the two tables to a newly created filegroup/file. Create two new empty tables, suffixed '+DEV', on the primary filegroup. Back up the primary only and restore it to the dev instance with a partial restore.
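The backup/restore part of this can be sketched as follows (database, file and path names are placeholders for my real ones):

```sql
-- Back up only the primary filegroup.
BACKUP DATABASE MyDB
    FILEGROUP = 'PRIMARY'
    TO DISK = 'D:\Backups\MyDB_Primary.bak';

-- On the dev instance, a partial restore brings the database online with
-- just the primary filegroup; the docs filegroup stays offline.
RESTORE DATABASE MyDB_Dev
    FILEGROUP = 'PRIMARY'
    FROM DISK = 'D:\Backups\MyDB_Primary.bak'
    WITH PARTIAL, RECOVERY,
         MOVE 'MyDB'     TO 'D:\SQLData\MyDB_Dev.mdf',
         MOVE 'MyDB_log' TO 'D:\SQLLogs\MyDB_Dev.ldf';
```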
The DB can be used by the users, although any calls to the two offline tables will return an error.
The package solution is largely stored-procedure based. We will have the vendor change the affected SPs to look up a config table and derive the correct docs table to use based on a field value. The application has no referential integrity to the docs tables. It will show no records for existing transactions, but it will allow new records to be created.
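The vendor change would look something like this. All of the names here (the config table, flag column, procedure and docs tables) are assumptions for illustration, not the package's actual objects:

```sql
-- Sketch: the SP picks the real docs table or the empty '+DEV' copy
-- based on a flag in a config table.
CREATE PROCEDURE dbo.GetTransactionDocs
    @TransactionID INT
AS
BEGIN
    DECLARE @UseDevTables BIT;
    SELECT @UseDevTables = UseDevTables FROM dbo.AppConfig;

    IF @UseDevTables = 1
        -- Dev/restored environments: empty table on the primary filegroup.
        SELECT DocID, DocBody
        FROM dbo.[Docs+DEV]
        WHERE TransactionID = @TransactionID;
    ELSE
        -- Production: real docs table on the secondary filegroup.
        SELECT DocID, DocBody
        FROM dbo.Docs
        WHERE TransactionID = @TransactionID;
END;
```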
Will write it up and post it somewhere.
Thanks for your thoughts.
Steve.
July 10, 2009 at 3:44 pm
Sorry for the duplicate posts Steve. Forum had a mare.