August 10, 2007 at 4:47 pm
I would like to know if there is an easier way, or a product, to partition an existing, populated table. The only process I know of is: create a new table, the filegroups, and the partitioning function; insert all the rows from the existing table into the new table; change any foreign key references so they point to the new, partitioned table; drop the old table; and finally rename the new, partitioned table to the name of the table just dropped. Is there a shortcut?
August 13, 2007 at 12:19 am
I believe this link should solve the problem:
http://www.databasejournal.com/features/mssql/article.php/3643726
The approach in that link simply moves the table onto the new partition scheme without creating a new table.
Regards,
Nidhi
August 13, 2007 at 8:22 am
The link above does tell you how to do it. However:
I feel the correct answer to your question is no, there is not an easier way.
I have to assume that there is a reason for you deciding to partition the table. Can you help us understand why you have decided to do this?
First off, the partitioning function has to be created by you (not by SQL Server) so that there is some intelligence behind the partitions. If you don't think about how to split the data, the end result will be a big performance mess.
The reason for the multiple filegroups is to help spread the I/O load, in addition to reducing the number of rows examined. This is optional, but recommended.
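To make the setup concrete, here is a minimal sketch of the function and scheme described above, partitioning by an assumed date column. All names (pfOrderDate, psOrderDate, the FG1 through FG4 filegroups, and the Orders table used later) are hypothetical, not from the original thread.

```sql
-- Hypothetical example: partition an Orders table by OrderDate.
-- Three boundary values with RANGE RIGHT yield four partitions,
-- so the scheme must name four filegroups.
CREATE PARTITION FUNCTION pfOrderDate (datetime)
AS RANGE RIGHT FOR VALUES ('2006-01-01', '2007-01-01', '2008-01-01');

CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate TO (FG1, FG2, FG3, FG4);
```

The filegroups must already exist in the database (via ALTER DATABASE ... ADD FILEGROUP) before the scheme can reference them.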
The rest could be automated, and there are tools out there to help, but it really isn't that much more work.
Moving the data is required to force it through the partition function so the split of the data actually happens.
And indexes, foreign keys, and primary keys all have to be rebuilt, moved, or renamed as appropriate.
August 13, 2007 at 11:32 pm
But I fail to understand why we need to create a new table if we can just move the existing table onto the partition scheme, as described in the link I provided.
I do understand that if we create a new table we have to recreate the FKs and PKs...
Please let me know what trade-offs are involved in the method suggested in the link, and why it would be necessary to recreate the table.
Looking forward to a post on this subject.
Regards,
nidhi
August 14, 2007 at 2:34 am
Once you have created the partition function and scheme, you should just be able to drop any clustered index and rebuild it, specifying the partition scheme.
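The drop-and-rebuild step above can be done in a single statement with DROP_EXISTING, which avoids the intermediate heap. A sketch, assuming the hypothetical Orders table and psOrderDate scheme (these names are illustrative, not from the thread):

```sql
-- Rebuild the existing clustered index onto the partition scheme.
-- WITH (DROP_EXISTING = ON) replaces the old index in one operation,
-- moving the table's data through the partition function as it goes.
CREATE CLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH (DROP_EXISTING = ON)
ON psOrderDate (OrderDate);
```

Because the clustered index is the table's data, rebuilding it on the scheme physically redistributes every row into its partition.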
August 14, 2007 at 6:31 am
Partition info is on the table, not the indexes, so you have to rebuild. But if you use Management Studio you can get it to write the script for you. I don't suggest letting it run the change for you, but if you go into Design/Modify (depends on version) and press F4 (display properties), you can modify the partition function of the table and then generate the change script. It will handle the temp table and the disabling and re-enabling of all FKs.
August 14, 2007 at 6:43 am
It is not that your link won't work. It will. However, the function in that example has no intelligence about the data. When you choose to partition data, there should be some intelligence behind it.
Such as:
By country or province (state), for example. Because a web service will direct all connections in a given IP range to the same servers, a person in WA will always go to the WA DB. If the data is partitioned by state, the cache for that system won't be filled with data for FL.
Partitioning by ZIP code is actually useful: 00000 (east coast) through 99999 (Hawaii). You could break that range up by geographic region, as above.
Another reason to partition is to make even buckets, which means you have to understand the data distribution. Statistics will give you an idea, but then you have to create a function that achieves that distribution. I suppose you could do that in an automated way, but here is where it hurts you: if your queries often hit data in a few or most of the partitions, your performance will DROP. HARD!
So before you partition, you MUST understand how the data is entered and retrieved; only then can a useful partition function be created.
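One way to check whether the buckets came out even, as discussed above, is to count rows per partition with the $PARTITION function. A sketch, again assuming the hypothetical pfOrderDate function and Orders table:

```sql
-- Count rows landing in each partition of pfOrderDate.
-- Uneven counts suggest the boundary values need adjusting.
SELECT $PARTITION.pfOrderDate(OrderDate) AS PartitionNumber,
       COUNT(*) AS RowCnt
FROM dbo.Orders
GROUP BY $PARTITION.pfOrderDate(OrderDate)
ORDER BY PartitionNumber;
```

Running this before and after choosing boundary values makes the distribution trade-off visible instead of guessed at.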
January 12, 2010 at 1:11 pm
Consider database archiving as a solution.