June 11, 2009 at 6:32 pm
Hi all
I'm looking for some general advice and hope someone can help 🙂
I have a database which is approximately 15GB, with a log file that is usually sized at 500MB. Once per week the database is reindexed using a stored proc supplied by the vendor - essentially the stored proc cursors through the tables and runs DBCC DBREINDEX against each one, roughly like the sketch below.
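(A minimal sketch from memory, not the vendor's actual code - the fill factor is just illustrative:)

DECLARE @TableName sysname;

DECLARE table_cursor CURSOR FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name)
    FROM sys.tables;

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @TableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Rebuild every index on the table; this is fully logged, hence the log growth
    DBCC DBREINDEX (@TableName, '', 90);
    FETCH NEXT FROM table_cursor INTO @TableName;
END

CLOSE table_cursor;
DEALLOCATE table_cursor;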
As a result of this, the log file grows to about 14GB.
Given that this procedure runs once per week, is it advisable to set the logfile at, say, 15Gb and leave it at that - or is there any merit in shrinking it to a smaller size? Apart from the reindexing, the 'original' 500Mb size appears to be sufficient for normal database operations - it is not a highly transactional database (i.e. not connected to a point of sale type of front end)
(edit: the database is mirrored, so changing to simple recovery isn't an option)
thanks in advance for any help 🙂
June 11, 2009 at 7:39 pm
Hello,
Can you afford the 15GB of space? If so, then it's probably best to set the log file at that size. It would prevent another process (or user) from grabbing the free space and then causing problems during your DBREINDEX process.
It may also be worth taking a look at this article:-
Regards,
John Marsh
www.sql.lu
SQL Server Luxembourg User Group
June 11, 2009 at 7:51 pm
You don't need to rebuild the indexes of all the tables in a database - you can use a script to rebuild the indexes which have fragmentation > 30% and reorganize those with fragmentation > 15%. That helps a lot in reducing the log file size - something like the sketch below.
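(A rough sketch only, run in the context of your database; the 100-page filter is just a common rule of thumb, not anything required:)

SELECT
    QUOTENAME(s.name) + N'.' + QUOTENAME(o.name) AS TableName,
    i.name AS IndexName,
    ps.avg_fragmentation_in_percent,
    CASE
        WHEN ps.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
        WHEN ps.avg_fragmentation_in_percent > 15 THEN 'REORGANIZE'
        ELSE 'NONE'
    END AS RecommendedAction
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects AS o ON o.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE ps.index_id > 0          -- skip heaps
  AND ps.page_count > 100      -- ignore very small indexes
ORDER BY ps.avg_fragmentation_in_percent DESC;

The output can then drive ALTER INDEX ... REBUILD / REORGANIZE statements for just the indexes that actually need attention.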
June 11, 2009 at 8:24 pm
Thank you both for your input - I will take a look at the link you posted John.
There is plenty of disk space, so that's not an issue. The mirroring doesn't appear to be affected by a 15GB log, so I will probably set it and leave it at that size unless any other information comes to hand that suggests otherwise.
ssismaddy: at this stage I am required to use the maintenance plans supplied by the vendor - so all tables will be reindexed whether fragmented or not.
Thanks again 🙂
Anyone else, please feel free to comment.
June 29, 2009 at 8:14 pm
One other related question:
Because the log file has grown from 500MB to approx 14GB in 500MB increments, the number of virtual log files is approx 250.
Should the number of virtual log files be reduced to, say, less than 100 - and if so, what is the best method of achieving this, bearing in mind that the database is mirrored (SQL 2005), so setting it to Simple or Bulk Logged is not an option?
My initial thought was to shrink the log back to 500MB (DBCC SHRINKFILE(myLog, 500)) and change the growth increment to 2GB - so next time the database is reindexed it will grow in larger increments (and thus with fewer virtual log files). Then, once the log has grown to the size that it needs for weekly reindexing, just leave it at that - something like the sketch below.
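(During a quiet period; myLog is the logical log file name and MyDatabase is just a placeholder:)

-- One row per virtual log file is returned, so this shows the current VLF count
DBCC LOGINFO;

-- Shrink the log back down, then set a larger growth increment so the next
-- reindex grows it in fewer, larger chunks (and therefore creates fewer VLFs)
DBCC SHRINKFILE (myLog, 500);

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = myLog, FILEGROWTH = 2GB);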
Can anyone advise whether this is a rational approach - or suggest a better one 🙂
TIA
June 30, 2009 at 7:13 am
I've used the method of setting the DB to "bulk logged" recovery mode before the index rebuild and then setting it back to "full" afterwards.
This does not break your log chain. From a post last week, I learned that you cannot do point-in-time recovery for that one log backup, but you can for all log backups after it.
The bulk-logged model will minimally log the rebuild operation - something like the sketch below.
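(A minimal sketch - the database, proc and path names are placeholders, not anything from your setup:)

-- Switch to BULK_LOGGED around the rebuild, then back to FULL
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

EXEC dbo.usp_VendorReindex;   -- whatever the vendor's rebuild proc is called

ALTER DATABASE MyDatabase SET RECOVERY FULL;

-- Take a log backup straight away; you can't restore to a point in time
-- within that backup, but you can for the log backups that follow it
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';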
Tim White
June 30, 2009 at 7:28 am
As the database is mirrored, FULL recovery mode is the only allowed option.
Ivanna, if you want to reduce your number of virtual log files, wait for a quiet time, shrink the log right down, and grow it back in one chunk to 15GB - something like the sketch below. Check first that you have 15GB of contiguous disk space, otherwise the log will still be fragmented at the drive level.
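(Logical file and database names are placeholders here:)

USE MyDatabase;

DBCC SHRINKFILE (myLog, 1);     -- shrink the log right down first

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = myLog, SIZE = 15GB);   -- then grow it back in one step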
If I remember rightly, reducing the number of virtual log files only helps with backup and recovery times - so unless you have a problem there, this might be overkill.
---------------------------------------------------------------------
June 30, 2009 at 7:32 am
Also:
ssismaddy: at this stage I am required to use the maintenance plans supplied by the vendor - so all tables will be reindexed whether fragmented or not.
why? Vendors can be very defensive! 🙂
---------------------------------------------------------------------
June 30, 2009 at 7:36 am
So sorry, I missed the mirrored part. George is 100% correct...
Tim White
June 30, 2009 at 1:40 pm
Thank you both for your input
George, I think at this stage I will just monitor the logfile VLFs and if these grow in number will take the action that you suggested.
I also note your point about VLFs mainly affecting backup/recovery times. I'm not very knowledgeable about the inner workings of SQL yet (although I'm learning a lot on this site!). However, given that the database is mirrored, and the mirror is in a perpetual state of recovery, I did wonder if reducing the number of VLFs to under 100 would help improve performance a little - although performance is not an issue at this stage.
Thanks again 🙂
June 30, 2009 at 3:07 pm
Ivanna, that's what I would do. If it ain't broke, don't fix it.
I think you only need to be concerned if database recovery time at server restart seems slow, or restores take too long.
As for the mirroring, it's not quite a recovery in the sense of a database restore, where the log has to be scanned to find LSNs to roll forward; the mirror is just sequentially applying transactions as they come across, so I would not think there is any scanning of the transaction log. I can't be 100% sure, though.
If you have a latency problem with mirroring, it can't harm to have fewer virtual log files.
Make sure you have a sensible growth increment on your log file now (100-200MB).
---------------------------------------------------------------------
June 30, 2009 at 3:25 pm
How often are you running transaction log backups?
Running them more often prevents the transaction log file from growing so large. I usually set up transaction log backups to run every 15 minutes, 24x7, and sometimes have them run as often as every 5 minutes - for example, a job step along the lines of the sketch below.
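(Database name and backup path are placeholders; the file name is just date-stamped per run:)

-- Frequent log backups let committed log space be reused, so the log file
-- does not need to keep growing between full backups
DECLARE @file nvarchar(260);
SET @file = N'D:\Backups\MyDatabase_log_'
          + REPLACE(REPLACE(CONVERT(nvarchar(19), GETDATE(), 120), ':', ''), ' ', '_')
          + N'.trn';

BACKUP LOG MyDatabase TO DISK = @file;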
June 30, 2009 at 3:38 pm
Hi Michael
I'm running transaction log backups every hour - which has been sufficient to maintain a relatively small log file (500MB) for daily transactions. The growth of the log file to 14GB occurs as a result of reindexing the database (once per week).
I discovered recently that a few hours after the weekly reindexing job ran, another job (created by persons unknown a few months ago in my absence!) ran to shrink the database - effectively undoing the reindexing, although it did shrink the log back to 500MB. I have since disabled the database shrink, as I understand that to be bad practice (especially after a reindex) - so now the log size remains at 14GB and seems to be sufficient to accommodate the reindexing.
Given that there is sufficient disk space for a 14GB log, I have left it at that size rather than growing and shrinking it every week.
June 30, 2009 at 5:07 pm
Shrinking the database or just the log? I think you could schedule a job to truncate and shrink the log file and take a full backup immediately afterwards... Just an idea - I don't know if it is really possible for you!
June 30, 2009 at 5:36 pm
ssismaddy:
Shrinking Database or just the log??
The job used DBCC SHRINKDATABASE - so both database and log files were shrunk.
I have considered shrinking the log file after reindexing - but decided not to, as it would only need to grow again when the weekly reindexing job runs. It seems to be working OK for now, but I will keep an eye on log growth.
Thanks for your input 🙂