October 3, 2006 at 3:31 pm
October 3, 2006 at 3:47 pm
That's normal. I run that on a 300GB database and the log file grows quite a bit. The thing is, it doesn't grow as much as it does with DBCC DBREINDEX (there the log will grow to around 2.5 times the size of the DB).
-SQLBill
October 3, 2006 at 5:29 pm
Still, given that the database is in SIMPLE recovery mode and DBCC INDEXDEFRAG is executed as a series of short transactions, my expectation would be that the log file doesn't grow (or at least grows insignificantly).
October 4, 2006 at 2:40 am
Igor,
How many indexes are included in your batch? I.e. are you defragging a number of indexes at a time or are you dealing with them on an individual basis?
What I tend to do is create a loop that deals with each index in turn - that way you keep each transaction to a minimum when the database is in simple recovery mode.
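Roughly like this (SQL Server 2000 syntax - just a sketch of the idea, not my exact script; the sysindexes filters are illustrative):

SET NOCOUNT ON
DECLARE @TableName sysname, @IndexName sysname, @ExecStr varchar(512)

-- One cursor over all real indexes on user tables
DECLARE IndexCursor CURSOR FOR
    SELECT o.name, i.name
    FROM sysobjects o
    JOIN sysindexes i ON i.id = o.id
    WHERE o.xtype = 'U'                                   -- user tables only
      AND i.indid BETWEEN 1 AND 254                       -- skip heaps and text/image entries
      AND INDEXPROPERTY(o.id, i.name, 'IsStatistics') = 0 -- skip auto-created statistics

OPEN IndexCursor
FETCH NEXT FROM IndexCursor INTO @TableName, @IndexName
WHILE @@FETCH_STATUS = 0
BEGIN
    -- One index per statement: in SIMPLE recovery, the log space used by
    -- the previous index can be reused before the next one starts.
    SELECT @ExecStr = 'DBCC INDEXDEFRAG (0, ''' + @TableName + ''', ''' + @IndexName + ''')'
    EXEC (@ExecStr)
    FETCH NEXT FROM IndexCursor INTO @TableName, @IndexName
END
CLOSE IndexCursor
DEALLOCATE IndexCursor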
If you need any more info please ask.
Joe
October 4, 2006 at 3:41 am
Suppose you are defragging an index and suddenly try to stop/cancel the command - how will the database be restored to its previous consistent state? The point is, even if the database is in SIMPLE recovery mode, it will log enough information to recover the database. So, if you are defragging a large index, the growth of the transaction log will obviously be larger. Do the indexing one by one, so that the transaction log file size can be controlled in between the defragging operations.
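You can watch the log usage in between the operations with the standard DBCC SQLPERF command:

-- Reports log file size (MB) and Log Space Used (%) for every database
DBCC SQLPERF (LOGSPACE)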
Pankaj Khanna
Database Administrator - SQL Server 2000
Keep hope to keep you intact...
October 4, 2006 at 8:03 am
BOL has this to say:
"Every five minutes, DBCC INDEXDEFRAG will report to the user an estimated percentage completed. DBCC INDEXDEFRAG can be terminated at any point in the process, and any completed work is retained."
It would seem that the DBCC command does commit in smaller work units. I'm wondering if there is a bug or feature that prevents checkpointing during the process. Perhaps running with SET IMPLICIT_TRANSACTIONS OFF might solve the problem.
I suppose that you could interrupt INDEXDEFRAG after a few minutes, let the checkpoint happen, and then restart it. Or, just have enough log space to let it complete.
jg
October 4, 2006 at 5:30 pm
Joe, Pankaj, Jeff,
I am defragging one index at a time. In fact, this is an automated process that analyzes fragmentation levels and decides whether DBCC INDEXDEFRAG or DBCC DBREINDEX has to be executed. This is the reason why I would like to avoid any manual interruptions, as Jeff suggested. In fact, we are trying to issue a manual CHECKPOINT (we have a job running), but it doesn't help. Pankaj, I personally do not believe that we need to keep a lot of information in the log while doing DBCC INDEXDEFRAG, especially given that it is implemented as short transactions. As Jeff pointed out, you can terminate it at any point without losing the completed work (it stops almost instantaneously).
Igor
October 4, 2006 at 5:46 pm
Igor,
I doubt there is much else you can do. I would be interested in knowing if you do find a way of improving on this.
Also, I would appreciate it if you would share your formula for deciding when to reindex or defrag. What values do you base your decision on? I would like to see what other people are doing.
Good luck,
Joe
October 5, 2006 at 11:21 am
Joe,
I run DBCC SHOWCONTIG. That will show how fragmented your indexes are. Check it out in the BOL.
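For example, to get one storable row per index for the whole current database in a single pass (SQL Server 2000 syntax, as I recall it - verify against your build):

-- Returns result rows instead of text output, covering all indexes
DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES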
-SQLBill
October 5, 2006 at 6:26 pm
Joe,
My process consists of two parts. The first step is to collect DBCC SHOWCONTIG output into a table for further analysis; it is done for the entire database(s). The second step is to analyze the scan density and logical fragmentation values. I factor in the number of pages in the table as well - it has to be higher than 1000, otherwise I ignore the table. I arbitrarily picked thresholds for scan density and logical fragmentation in order to make the choice between DBCC DBREINDEX and DBCC INDEXDEFRAG. The whole process is automated and requires virtually no manual intervention.
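The collection step looks roughly like this (condensed from the well-known BOL/Microsoft whitepaper example - the #fraglist columns must line up exactly with the TABLERESULTS output of your build, so verify them):

SET NOCOUNT ON
DECLARE @tablename VARCHAR(128)

-- Staging table mirroring DBCC SHOWCONTIG ... WITH TABLERESULTS output
CREATE TABLE #fraglist (
   ObjectName CHAR(255), ObjectId INT, IndexName CHAR(255), IndexId INT,
   Lvl INT, CountPages INT, CountRows INT, MinRecSize INT, MaxRecSize INT,
   AvgRecSize INT, ForRecCount INT, Extents INT, ExtentSwitches INT,
   AvgFreeBytes INT, AvgPageDensity INT, ScanDensity DECIMAL,
   BestCount INT, ActualCount INT, LogicalFrag DECIMAL, ExtentFrag DECIMAL)

DECLARE tables CURSOR FOR
   SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
   WHERE TABLE_TYPE = 'BASE TABLE'

OPEN tables
FETCH NEXT FROM tables INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
   -- Step one: collect fragmentation data for all indexes of the table
   INSERT INTO #fraglist
   EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''')
      WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
   FETCH NEXT FROM tables INTO @tablename
END
CLOSE tables
DEALLOCATE tables

-- Step two: analyze, ignoring anything with 1000 pages or fewer
SELECT ObjectName, IndexName, ScanDensity, LogicalFrag, CountPages
FROM #fraglist
WHERE CountPages > 1000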
Igor
October 6, 2006 at 4:39 am
Igor,
I also use an automated process to achieve this and was wondering what threshold values you use. There seems to be little out there on how you can best derive these based on your own systems. Did you simply use Microsoft's recommendations or have you performed some analysis to find values more suited to your own environment?
Joe
October 8, 2006 at 8:51 pm
Joe,
I was using a Microsoft article as a guideline: http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
In my case, indexes span multiple files, and according to Microsoft: 'Although scan density can also be an indication of fragmentation levels, it is not valid when indexes span multiple files; thus, scan density should not be considered when examining indexes that span multiple files.' So I was using Logical Scan Fragmentation only to decide whether DBCC DBREINDEX or DBCC INDEXDEFRAG has to be executed. As I said before, I ignored all tables with fewer than 1000 pages. My thresholds for Logical Scan Fragmentation are:
Logical Scan Fragmentation <20 - do nothing
Logical Scan Fragmentation >=20 AND Logical Scan Fragmentation <=50 - run DBCC INDEXDEFRAG
Logical Scan Fragmentation >50 - run DBCC DBREINDEX
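In T-SQL terms the decision boils down to something like this (just a sketch - in the real job the variable values come from the collected SHOWCONTIG data, and the table/index names and fill factor here are placeholders):

DECLARE @TableName sysname, @IndexName sysname, @LogicalFrag DECIMAL(5,2)
-- Placeholder values; normally read from the staging table built in step one
SELECT @TableName = 'Orders', @IndexName = 'IX_Orders_CustomerId', @LogicalFrag = 35.0

IF @LogicalFrag < 20
    PRINT 'Fragmentation below threshold - no action'
ELSE IF @LogicalFrag <= 50
    -- Online and minimally intrusive; done one index at a time
    EXEC ('DBCC INDEXDEFRAG (0, ''' + @TableName + ''', ''' + @IndexName + ''')')
ELSE
    -- Full rebuild; 90 is an illustrative fill factor
    EXEC ('DBCC DBREINDEX (''' + @TableName + ''', ''' + @IndexName + ''', 90)')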
We picked the thresholds based on our database(s). My index processing job includes prioritization of the actions (to control which indexes are processed first), duration control (we define an execution window and check it every time a defrag or reindex finishes, to make sure the job completes before peak hours), and archiving of information for analysis (for example, if we have only 30 minutes left to complete the job and the next index to be processed is Ind_abc, but its previous execution(s) took longer than 30 minutes, we skip that index, etc.).
Igor