January 30, 2008 at 1:12 pm
The database I am working with is about 55GB total. Much of that is in only a few transaction tables, and the indexes on those tables are extremely important to the performance of the reports we run. Recently my database reindexing jobs have started failing with a log-file-full error. I suspected the transaction tables were causing the problem, so I ran individual reindex commands on each table. After adding almost 10GB to the log file and 10GB to the database file, the reindex process still errored out with log file full. Both the log file and the database are set to grow unrestricted, and there is plenty of drive space.
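One thing worth checking is how full the log actually gets while the rebuild runs, and whether log backups between the large tables let the space be reused. A sketch along those lines; the table name, database name, and backup path are placeholders, not from the original post:

```sql
-- See what fraction of the log is in use before and during the rebuild
DBCC SQLPERF(LOGSPACE);

-- Rebuild one large table's indexes at a time; SORT_IN_TEMPDB moves the
-- sort work out of the user database. TransactionDetail is a placeholder.
ALTER INDEX ALL ON dbo.TransactionDetail
    REBUILD WITH (SORT_IN_TEMPDB = ON);

-- Back up the log between tables so the rebuild's fully logged work
-- can be truncated as you go (full recovery model).
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';
```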
Any suggestions?
Peter
January 30, 2008 at 1:30 pm
I use a script which I found on this site....
http://www.sqlservercentral.com/scripts/Index+Management/30721/
I've created a few sprocs to go along with this script...
-first, I backup the database
-second, I change the recovery model to simple (this prevents the log file from growing uncontrollably)
-third, I run the above script
-last, I change the recovery model back to Full
I schedule a job with a step for each database. I'm sure you could create a top-level sproc that loops through each database on your server, if that works for your maintenance window.
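The four steps above can be sketched as a single job step. A minimal outline, assuming a database named MyDatabase and a local backup path (both placeholders):

```sql
-- 1. Full backup first
BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase.bak';

-- 2. Simple recovery keeps the log from growing during the rebuild
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- 3. ... run the index-maintenance script from the link above here ...

-- 4. Switch back when done
ALTER DATABASE MyDatabase SET RECOVERY FULL;
```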
January 30, 2008 at 1:51 pm
Megan,
Make sure you're running a full backup after you switch back to the full recovery model. Otherwise you have a broken log chain and won't be able to do point-in-time restores.
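In other words, the switch back isn't complete until a full backup restarts the chain. A sketch, with placeholder names:

```sql
ALTER DATABASE MyDatabase SET RECOVERY FULL;

-- Until this full backup runs, subsequent log backups have no anchor
-- and point-in-time restore is not possible.
BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_full.bak';
```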
To the OP:
Make sure your transaction log is backed up before you begin the index maintenance. If you still have issues, you may need to do a drop and rebuild of the problem indexes.
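Both suggestions might look like this; the index, column, and database names are placeholders for illustration:

```sql
-- Empty the log before starting index maintenance
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';

-- If one index still will not rebuild, drop and recreate it instead
DROP INDEX IX_TransactionDetail_Date ON dbo.TransactionDetail;
CREATE INDEX IX_TransactionDetail_Date
    ON dbo.TransactionDetail (TransactionDate);
```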
January 30, 2008 at 2:05 pm
Just to confirm for Peter: I stagger our db defrag/reindex across a few databases per night over the course of a week due to time constraints. I do, however, run a regularly scheduled full db backup via maintenance plan every night, which occurs shortly after the defrag/reindex.