April 13, 2006 at 12:53 pm
1) Have you forced a reindex since removing the 50 million rows?
2) Have you tried updating the statistics via either of the previously mentioned methods?
Also, does the table have a clustered index on it, or is it a heap (no clustered index at all)?
The more I think about it, the data being removed most likely created a huge number of ghost records, and when trying to scan the table it is stumbling on those (lots of locks created while sliding through the data). This would make sense, especially since the NOLOCK hint has such a drastic effect. I know the table is large, but I think you might even have to use DBCC UPDATEUSAGE just as a maintenance item.
Here is what I suspect is the reason for your issue. As the data is retrieved, the internal marker starts on the first referenced page (which, if the 50 million rows were at the start, will be empty) and has to look at all of the ghost records until it finds any that are not. Doing so, you may actually read the 50 million deleted rows, which still exist but are marked as ghosts. This would explain the delay. You could possibly even see this with the NOLOCK hint, since it can cause dirty reads and might actually read from the ghost set. This is my best guess at it, and my suggested solution is that maintenance needs to be performed to clean up and remove the ghost records via a reindex and an update of the statistics.
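A minimal maintenance sketch along those lines, assuming a table named BigTable (the name is a placeholder); note that DBCC DBREINDEX will lock the table while it runs:
-- Rebuild all indexes on the table; this removes ghost records and fragmentation
DBCC DBREINDEX ('BigTable')
GO
-- Refresh the statistics afterwards
UPDATE STATISTICS BigTable WITH FULLSCAN
GO
-- Correct any row/page count inaccuracies left behind by the large delete
DBCC UPDATEUSAGE (0, 'BigTable')
GO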
April 13, 2006 at 1:30 pm
We run DBCC UPDATEUSAGE every week. I updated the statistics using the "UPDATE STATISTICS tableX" command.
I have not done a reindex since deleting the 50 million rows. This table is huge, and I cannot run DBCC DBREINDEX because the table is used by users throughout the day except for 2 to 4 hours; a reindex would take 1 to 2 days. I tried running DBCC INDEXDEFRAG, and it ran for almost 2 days but completed only 50% of the defrag phase.
This table has 1 clustered and 1 non-clustered index.
I will try running "UPDATE STATISTICS tableX WITH FULLSCAN" and see what happens.
Thanks for all the suggestions.
April 13, 2006 at 1:40 pm
Please let us know what your results are.
April 13, 2006 at 2:03 pm
Got to thinking even more about how NOLOCK reacts to things in the system. How big is your transaction log currently, and when was the last time you did a database backup?
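If it helps, DBCC SQLPERF(LOGSPACE) reports the current size and percentage used of every database's transaction log:
-- Report transaction log size (MB) and percent used for each database
DBCC SQLPERF (LOGSPACE)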
April 13, 2006 at 2:23 pm
Try
DBCC DBREINDEX (table)
This might take some time, and the table will be locked...
The other option is to defragment the index (see the sketch below)...
Somehow I think it is related to a fragmented index and/or pages.
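A sketch of the online route, with placeholder database, table, and index names; DBCC INDEXDEFRAG compacts leaf pages in short transactions, so it does not hold the table locked for the whole run:
-- Defragment the clustered index while the table stays available to users
DBCC INDEXDEFRAG ('MyDatabase', 'BigTable', 'IX_BigTable_Clustered')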
Borik
April 13, 2006 at 2:23 pm
We take a full backup every 2 weeks and a differential daily. The last successful diff backup was the day before yesterday, and the transaction log size is 70 GB, of which only 600 MB is used so far.
April 14, 2006 at 10:11 am
I ran UPDATE STATISTICS with FULLSCAN; now the query takes 1 minute to complete. That's still bad. I will try running DBCC DBREINDEX.
April 20, 2006 at 4:17 am
Since NOLOCK does not apply any locks, the time saved comes from skipping the lock acquire/release overhead. It does run the risk of dirty reads if there are any transactions taking place on the table, however.
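A minimal example of the hint, with placeholder table and column names:
-- Read without taking shared locks; may return uncommitted (dirty) rows
SELECT col1, col2
FROM BigTable WITH (NOLOCK)
WHERE col1 = 42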
-------------------------------------------------------------------------
Normal chaos will be resumed as soon as possible. :crazy: