June 26, 2009 at 10:20 am
Here's my situation, any help is appreciated.
I have a 1 TB database whose "Document" table is 500 GB by itself. I have a SAN-attached server with 6 drives for data, 1 drive for logs, and 1 drive for tempdb:
F: - 4 data files
G: - 2 Log files
I: - 10 index files
J: - Backups
K: - Backups
L: - Backups
M: - Backups
N: - Backups
T: - Tempdb
What I'd like to do is partition the Document table across 6 drives, with 6 filegroups and aligned indexes. I have the partition function, partition scheme, and filegroups defined (roughly as sketched below), but I'm having trouble getting 500 GB of data from the old, unpartitioned table into the new one in a reasonable amount of time.
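For reference, this is roughly what I have defined already; the filegroup names and boundary values below are placeholders for illustration, not my actual values:

-- six partitions on an int partitioning column, one filegroup per drive
CREATE PARTITION FUNCTION pfDocument (int)
AS RANGE RIGHT FOR VALUES (10000000, 20000000, 30000000, 40000000, 50000000);

CREATE PARTITION SCHEME psDocument
AS PARTITION pfDocument
TO (FG_Doc1, FG_Doc2, FG_Doc3, FG_Doc4, FG_Doc5, FG_Doc6);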
I've tried a SELECT ... INTO from the old table into a new heap, but after 18 hours the result was only 45 GB copied. When I perform this in production I won't have that kind of time.
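For completeness, this is roughly what I ran (table names simplified, not my actual objects):

SELECT *
INTO dbo.Document_New   -- created as a heap, no indexes
FROM dbo.Document;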
Anyone out there have a similar situation with a better solution?
Thanks!
June 26, 2009 at 12:37 pm
Disclaimer: this is all theory, as I don't have a 500GB table to try this on. It seems reasonable to me.
- Make sure your recovery model is set to SIMPLE or BULK_LOGGED.
- Make sure your target disks are defragmented
- Manually create (or grow) each of your data files so they will accommodate the new tables without having to autogrow. This step will take some time, but it doesn't impact your database's availability.
- Create a standalone table for each partition you want to load, complete with indexes, on the correct filegroups
- Using SSIS, bulk-load all 6 standalone tables concurrently, using the appropriate source queries (make sure you use a batch size of 0 and a table lock hint, or performance will degrade significantly)
- Switch all standalone tables into the partitioned table. This step should be nearly instantaneous (see the sketch below for one partition).
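Here is a rough T-SQL sketch of the create / load / switch steps for one of the six partitions. All object names, the column list, and the boundary value are placeholders, and the load shown is a plain INSERT with a table lock rather than the SSIS fast-load, so adjust it to your actual schema and package:

-- minimal logging for the bulk load
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

-- standalone table on the same filegroup as target partition 1,
-- with the same clustered index as the partitioned table
CREATE TABLE dbo.Document_P1
(
    DocumentID int NOT NULL,
    DocBody    varbinary(max) NULL,
    CONSTRAINT PK_Document_P1 PRIMARY KEY CLUSTERED (DocumentID)
) ON FG_Doc1;

-- a trusted check constraint matching partition 1's boundary is required
-- before the switch is allowed (RANGE RIGHT: partition 1 holds values
-- below the first boundary)
ALTER TABLE dbo.Document_P1
ADD CONSTRAINT CK_Document_P1 CHECK (DocumentID < 10000000);

-- load it (here a simple T-SQL load with a table lock; the SSIS fast-load
-- with batch size 0 and a table lock accomplishes the same thing)
INSERT INTO dbo.Document_P1 WITH (TABLOCK)
SELECT DocumentID, DocBody
FROM dbo.Document
WHERE DocumentID < 10000000;

-- metadata-only switch into the new partitioned table
ALTER TABLE dbo.Document_P1
SWITCH TO dbo.Document_Partitioned PARTITION 1;

Repeat (or run concurrently) for the remaining five partitions with their own boundary ranges.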