October 9, 2007 at 9:34 am
I have a job set up that runs the following command twice (among other tasks):
DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES
The first time it ran, it completed in about 7 minutes. Then, when it was executed again about 20 minutes later, it ran for over 4 hours. Steps in between include defragmenting indexes, updating statistics, and shrinking and backing up the transaction log.
The very longest it has run in the past is 52 minutes (looking back at previous runs of the same job).
The only file growth that occurred during this task was 400MB on the database log file in the first 5 minutes of the task. (None occurred on the database data file or tempdb.)
Some of our servers that are connected to this database had problems this morning as well. I believe it was related to this. They are reporting database connection errors - reaching resource limits, etc.
Anyone know what may have caused this process to take so long?
(Note that I normally only run this process once a week, but am doing some testing.)
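For reference, the job described above might look roughly like this (all database, table, and file names below are placeholders, not the poster's actual objects):

```sql
-- Rough outline of the maintenance job steps described; names are assumed
DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES     -- fragmentation report (the step that hung)

DBCC INDEXDEFRAG (MyDatabase, 'MyBigTable', 1)     -- defragment an index
EXEC sp_updatestats                                -- update statistics

BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn'
DBCC SHRINKFILE (MyDatabase_Log)                   -- shrink the transaction log
```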
October 10, 2007 at 10:01 am
Does it take a really long time to do a showcontig on just one particular table (pick a small table)? What about the largest table? Run
SELECT TOP 10 OBJECT_NAME(id) AS table_name, rowcnt, rows, dpages * 8 AS data_kb
FROM sysindexes
WHERE indid IN (0, 1)
AND OBJECTPROPERTY(id, 'IsUserTable') = 1
ORDER BY rowcnt DESC
to show your 10 largest tables by row count. Is the time to run a showcontig on one of these really long? Is the system being heavily used during these showcontigs?
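To time a showcontig against a single table, something like the following works (the table name here is just a placeholder):

```sql
-- Bracket a single-table SHOWCONTIG with timestamps; 'MyBigTable' is a placeholder
SELECT GETDATE() AS start_time

DBCC SHOWCONTIG ('MyBigTable') WITH TABLERESULTS, ALL_INDEXES

SELECT GETDATE() AS end_time
```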
Francis
October 10, 2007 at 3:20 pm
Francis,
Thanks for responding. It takes less than a second for a smaller table, and about 18 seconds for the largest table. This process was executed at 4:22am - not a lot of traffic, and there were no other jobs that I know of that should have affected it.
October 11, 2007 at 7:11 am
It doesn't seem as if this command should take 4 hours with the kind of response you describe. Are you sure it's this command that's getting hung up? Can you put some SELECT GETDATE() statements in your code to help confirm where the slowdown is? Profiler might help as well.
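The GETDATE() bracketing suggested above might be sketched like this for the step in question, so the job output shows where the time is going:

```sql
-- Print a timestamp before and after the suspect step
PRINT 'SHOWCONTIG start: ' + CONVERT(varchar(30), GETDATE(), 121)

DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES

PRINT 'SHOWCONTIG end:   ' + CONVERT(varchar(30), GETDATE(), 121)
```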
Francis
October 11, 2007 at 8:38 am
Francis,
You're right, this command should not have taken over 4 hours. It has finished in no more than 30 minutes in the past. Something occurred during that time that forced it to take so long -- and, that's what I've been trying to figure out so that I could avoid it in the future. I am sure that it was the command that was taking so long because of the timings displayed under view job history.
The code for the step that got hung is as follows:
DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES
I don't believe there is a way to capture timings within this command itself. I am, however, able to see the time from the job history.
I will plan on running profiler during this job the next time it runs to make sure it does not get hung up. Good suggestion. 🙂
The only difference I can see from the previous jobs is that the step right before this step was supposed to shrink the log file. It said it succeeded, but the shrink did not occur. I don't think this would affect the time of DBCC SHOWCONTIG... but, maybe I am mistaken.
October 11, 2007 at 8:48 am
If the transaction log doesn't shrink and you cannot make it, do a search for 'shrink log' on this site. There are a couple of scripts posted that I have used to force the log to shrink. Although I didn't think SHOWCONTIG needed log space - DBREINDEX does. Curious problem. Good luck.
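A typical SQL 2000-era forced shrink looks something like this (database and log file names are placeholders; note that TRUNCATE_ONLY breaks the log backup chain, so a full backup should follow):

```sql
-- Force the transaction log to shrink; names are placeholders
USE MyDatabase

BACKUP LOG MyDatabase WITH TRUNCATE_ONLY   -- empty the log (breaks the log backup chain!)

DBCC SHRINKFILE (MyDatabase_Log, 500)      -- target size in MB

BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase.bak'  -- restart the backup chain
```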
Francis
October 12, 2007 at 1:02 pm
Well, I would suggest not shrinking the log, as it will just grow again when transactions occur. Try removing the DBCC SHRINKFILE portion from the job and simply run the DBCC SHOWCONTIG.
October 12, 2007 at 1:12 pm
Good point, Arun. If there is some unusual event that causes the log to grow, then shrinking it may be of value. However, if some daily (or weekly) process increases the log, then you would be better off leaving it, as it will likely just grow again. Growing and shrinking often could lead to a fragmented physical log file, which might contribute to performance issues, notwithstanding the actual time delay in growing (or shrinking) the file.
Francis
October 12, 2007 at 1:58 pm
I tend to use the FAST option. Although it doesn't give me as much detail as not using it, I get all the information I need to determine whether an index needs to be rebuilt or not. And it's much faster and less resource-intensive than not using it.
Example:
DBCC SHOWCONTIG (tablename) WITH TABLERESULTS, ALL_INDEXES, FAST
Aaron
October 12, 2007 at 2:16 pm
Thanks, all. I have been contemplating using the FAST option. I'll make sure it gives me all the info I need and give it a shot if so. Thanks for the suggestion.
I have also been thinking about changing the recovery mode from full to bulk-logged before I perform the maint on the indexes. This should reduce the size of the transaction log. Any warning against this?
October 12, 2007 at 6:24 pm
Changing to the bulk-logged recovery model will reduce the amount of tran log used if you use DBCC DBREINDEX, but not if you use DBCC INDEXDEFRAG. Be aware that although it will reduce the amount of log space used, your transaction log backups will be quite large since the changed blocks themselves are written to the tran log backup. Here is the process Microsoft recommends you follow when switching between these recovery models:
1. Before switching to the bulk-logged recovery model, back up the log. This is important because, under the bulk-logged recovery model, if the database fails, backing up the log for bulk operations requires access to the data.
2. After performing the bulk operations, immediately switch back to the full recovery model.
3. After switching back to the full recovery model, back up the log again.
Using the bulk-logged recovery model, you lose the ability to perform point-in-time recovery. You can restore individual log backups, but you cannot restore to a point in time that is in the middle of one of those log backups until you switch the database back to the full recovery model and back up the tran log. You can then restore future tran log backups to a point in time.
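The switch described above can be sketched as follows (database, table, and backup path names are all placeholders):

```sql
-- Back up the log before switching recovery models
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log1.trn'

ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED

DBCC DBREINDEX ('MyBigTable')   -- minimally logged under bulk-logged recovery

ALTER DATABASE MyDatabase SET RECOVERY FULL

-- Back up the log again to restore point-in-time recovery going forward
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log2.trn'
```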