April 10, 2013 at 7:51 am
Shouldn't cross terms, I suppose 🙂
I immediately saw a blocking chain on the master database. It was a dangerous temptation, because I really wanted to troubleshoot it...
In this case, though, users are only affected while the rebuild is running, so I just need to be patient and wait for the next scheduled run.
I don't have enough privileges to restore a backup to another location, and our primary DBAs are currently overbooked.
Thanks again,
Dane
April 10, 2013 at 8:05 am
Are you sure this is not a problem with your rebuild script or stored procedure?
What happens if you just run an index rebuild from a query window like this?
ALTER INDEX [PK_item] ON [dbo].[item] REBUILD
April 10, 2013 at 8:08 am
At this point I am only running the individual rebuild. With or without (ONLINE = ON), I get the same results.
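For clarity, the two variants I have been trying look like this (note: ONLINE = ON requires Enterprise Edition, so treat that as an assumption about the edition in play):

-- Offline rebuild: takes a table-level lock for the duration
ALTER INDEX [PK_item] ON [dbo].[item] REBUILD

-- Online rebuild: Enterprise-only; mostly leaves readers and writers unblocked
ALTER INDEX [PK_item] ON [dbo].[item] REBUILD WITH (ONLINE = ON)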
Thanks again!
-Dane
April 10, 2013 at 8:43 am
Curious question: how much memory do you have on the machine?
April 10, 2013 at 10:02 am
Here is what I see in sys.dm_os_sys_info:
physical_memory_in_bytes
68718006272
So, about 64 GB.
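In case anyone wants to run the same check, this is roughly the query I used (physical_memory_in_bytes is the 2005/2008 column name; on 2012 and later it was renamed to physical_memory_kb):

SELECT physical_memory_in_bytes,
       physical_memory_in_bytes / 1024.0 / 1024 / 1024 AS physical_memory_gb
FROM sys.dm_os_sys_info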
April 11, 2013 at 10:21 am
I was able to run the rebuild this morning after running:
DBCC UPDATEUSAGE( db, 'item' ) WITH COUNT_ROWS, NO_INFOMSGS
The rebuild completed in under a second... one issue, though, is that I didn't try the rebuild again before running this, so the UPDATEUSAGE may have resolved the issue, but I can't be sure. Come this weekend I will see if the issue comes up again.
Wish I would have tested it before running the UPDATEUSAGE...
Thanks again for all the help and I will post updates if/when something new happens.
-Dane
April 15, 2013 at 8:24 am
So the UPDATEUSAGE seems to have corrected the issue.
Can someone give any additional insight as to why that would have corrected this type of problem?
Thanks for all the help,
Dane
April 15, 2013 at 8:32 am
The only time I have ever needed to use UPDATEUSAGE was when I had a negative page allocation. It is possible SQL Server had an incorrect value for the table or index you were trying to work on. Did you see any CHECKDB warnings or failures for a negative value in page allocation?
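If you want to look without waiting for the next CHECKDB, a quick sketch like this against sys.dm_db_partition_stats (available on 2005 and later) will surface obviously broken counts; negative values here are exactly the symptom UPDATEUSAGE corrects:

-- Negative page or row counts point at stale allocation metadata
SELECT OBJECT_NAME(object_id) AS table_name, index_id, used_page_count, row_count
FROM sys.dm_db_partition_stats
WHERE used_page_count < 0 OR row_count < 0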
April 15, 2013 at 8:42 am
No, I check the results of CHECKDB every week and haven't seen anything flagged as an issue. I will say that this DB was converted from 2000, and I had read that on 2000 it was more common to hit issues where you needed to run UPDATEUSAGE.
Thanks again,
Dane
April 15, 2013 at 9:54 am
Given that this was a 2000 DB and UPDATEUSAGE had not been run, I'm compelled to ask whether you have ever run CHECKDB WITH DATA_PURITY. If you have not (Paul Randal has a blog post explaining how to check), you might want to do that the next time you run CHECKDB.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
April 15, 2013 at 10:10 am
opc.three (4/15/2013)
Given that this was a 2000 DB and UPDATEUSAGE had not been run, I'm compelled to ask whether you have ever run CHECKDB WITH DATA_PURITY. If you have not (Paul Randal has a blog post explaining how to check), you might want to do that the next time you run CHECKDB.
If the OP is not sure, CHECKDB can be run with the DATA_PURITY clause; once the results come back clean, it will run with data purity checks forever more.
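If memory serves, Paul's post checks this with the undocumented DBCC DBINFO, so treat the exact output as version-dependent:

-- Undocumented; look for the dbi_dbccFlags row in the result set
-- (0 = purity checks not yet validated on an upgraded DB, 2 = they now run automatically)
DBCC DBINFO (N'db') WITH TABLERESULTS

Substitute your database name for the N'db' placeholder.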
---------------------------------------------------------------------
April 15, 2013 at 10:34 am
That is great information. I will do some digging and look into running that process this weekend.
Thanks for all the help,
Dane
April 17, 2013 at 2:45 pm
So I ran:
DBCC CHECKDB() WITH DATA_PURITY
On a recently restored test DB, and it ran with no errors. Should I go ahead and schedule a production run? It took about 2-3 times longer to run than the typical CHECKDB (I didn't notice any blocking or resource contention), but I can't say if that is because of the DATA_PURITY check or because of the server/usage difference.
I reread Paul's post a few times, but I'm not sure exactly what the gains are of running it on the prod DB now... if I check it on a restore and it comes back clean... 2005+ won't allow the insertion of bad data, correct?
Would it still be worth running it on prod so that future checks would automatically check for data purity?
Thanks again for all the help,
Dane
April 17, 2013 at 3:08 pm
dkschill (4/17/2013)
So I ran:
DBCC CHECKDB() WITH DATA_PURITY
On a recently restored test DB, and it ran with no errors. Should I go ahead and schedule a production run? It took about 2-3 times longer to run than the typical CHECKDB (I didn't notice any blocking or resource contention), but I can't say if that is because of the DATA_PURITY check or because of the server/usage difference.
I reread Paul's post a few times, but I'm not sure exactly what the gains are of running it on the prod DB now... if I check it on a restore and it comes back clean... 2005+ won't allow the insertion of bad data, correct?
Would it still be worth running it on prod so that future checks would automatically check for data purity?
Thanks again for all the help,
Dane
Probably down to server spec differences.
Definitely run with DATA_PURITY against the actual prod database.
---------------------------------------------------------------------
April 17, 2013 at 3:20 pm
Definitely run it in prod. Chances are good that if it ran clean off the restored copy it will run clean in prod. CHECKDB is not going to make modifications to your database by default, but of course run it during low usage per the usual approach, since it is a resource hog.
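For what it's worth, the production run can be as simple as the following in an Agent job during your quiet window (NO_INFOMSGS just trims the informational chatter; N'db' is a placeholder for your database name):

DBCC CHECKDB (N'db') WITH DATA_PURITY, NO_INFOMSGS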
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato