November 5, 2008 at 2:49 am
I have a big database (400 GB) with a big table (120 GB).
On a development server, I partitioned the big table into 5 partitions (5 filegroups, 1 file per filegroup), and I intentionally left a significant portion unpartitioned to simulate the periodic maintenance process of splitting.
The recovery model is set to simple, to minimize the log growth.
Then I split the remaining data rows into a newly created file/filegroup.
I end up with 6 partitions, balanced in size by row count.
The result is correct, but the transaction log grows to 220 GB, even with the simple recovery model.
What is happening?
Thanks in advance.
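For context, here is a sketch of the kind of split described above (all object names are hypothetical). Splitting a boundary inside a range that already contains rows forces SQL Server to physically move every row that falls on the new side of the boundary, and that data movement is fully logged even under the SIMPLE recovery model:

```sql
-- Hypothetical names throughout; a sketch, not the poster's actual script.
ALTER DATABASE BigDb ADD FILEGROUP FG6;
ALTER DATABASE BigDb
    ADD FILE (NAME = 'BigDb_F6', FILENAME = 'D:\Data\BigDb_F6.ndf')
    TO FILEGROUP FG6;

-- Tell the partition scheme where the next split should land.
ALTER PARTITION SCHEME ps_BigTable NEXT USED FG6;

-- Because rows already exist on both sides of this new boundary, they are
-- deleted from the old partition and reinserted into the new one --
-- hence the enormous transaction log, regardless of recovery model.
ALTER PARTITION FUNCTION pf_BigTable() SPLIT RANGE ('2008-10-01');
```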
November 6, 2008 at 7:15 pm
Design your partitioned table in such a manner that you always split a partition which contains no data. The same goes for merge. Otherwise your split and merge operations will consume heavy I/O and will result in a big transaction log file.
For example, let's say I have to retain only 50 days' worth of data.
Then I will create 52 partitions, so that the first and last are always empty.
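A minimal sketch of that sliding-window pattern, assuming daily date boundaries (all names hypothetical). Because the leading and trailing partitions never hold rows, SPLIT and MERGE only ever touch empty partitions, which is a metadata-only operation with negligible logging:

```sql
-- 52 partitions for 50 days of data: the first and last stay empty.
CREATE PARTITION FUNCTION pf_Sliding (date)
AS RANGE RIGHT FOR VALUES
    ('2008-09-15', '2008-09-16', /* ...one boundary per day... */ '2008-11-04');

-- Daily maintenance:
-- 1) Split at the leading edge to open tomorrow's (still empty) slot.
ALTER PARTITION FUNCTION pf_Sliding() SPLIT RANGE ('2008-11-05');

-- 2) Switch the oldest populated partition out to a staging table
--    (SWITCH is metadata-only), then merge the now-empty trailing
--    boundary away.
ALTER TABLE BigTable SWITCH PARTITION 2 TO BigTable_Archive;
ALTER PARTITION FUNCTION pf_Sliding() MERGE RANGE ('2008-09-15');
```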
August 12, 2010 at 2:42 am
I am having similar problems with partitions in general, not just partitioned tables. Some of my partitions contain no data yet, but the log grows huge even under Simple recovery, and the partitions themselves hog large amounts of unused space; the problem wasn't nearly as severe when everything was in a single .mdf file. I want to avoid regular shrinking. Interesting point, Nikhil; do you have some reasoning behind adding the blank partitions, or did you discover this through experience?