September 27, 2013 at 2:51 am
Hi,
I've recently implemented a new DBCC integrity check process that runs a combination of CHECKDB (for the smaller databases) and CHECKTABLE for the larger ones, spread over a few days.
During my testing of the CHECKTABLE element, I've noticed that CHECKTABLE does not always report the integrity errors. :w00t:
If I repeatedly run the check, about 1 in 4 times it reports no errors at all; the other times it does - see below:
ERRORS :-
Error  Level  State  MessageText
8928   16     1      Object ID 2105058535, index ID 0, partition ID 72057594038779904, alloc unit ID 72057594039828480 (type In-row data): Page (1:79) could not be processed. See other errors for details.
8939   16     98     Table error: Object ID 2105058535, index ID 0, partition ID 72057594038779904, alloc unit ID 72057594039828480 (type In-row data), page (1:79). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 29493257 and -4.
2593   10     1      There are 911 rows in 11 pages for object "CorruptTable".
8990   10     1      CHECKTABLE found 0 allocation errors and 2 consistency errors in table 'CorruptTable' (object ID 2105058535).
No ERRORS :-
Error  Level  State  MessageText
2593   10     1      There are 911 rows in 11 pages for object "CorruptTable".
I should add that, for the test, I deliberately corrupted the "test" database by manually editing the MDF file at a particular value. The issue is occurring on SQL 2008 (SP2); my SQL 2012 instance behaves as expected. I appreciate that this could be because it is a forced corruption, but surely a corrupt database is still a corrupt database, however it got that way.
Has anyone else seen this behaviour?
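For reference, the sort of repeat check I'm running is along these lines (a minimal sketch; CorruptTable is the test table from the output above, and the run count is arbitrary):

```sql
-- Run CHECKTABLE several times in a row against the corrupted table.
-- On the affected SQL 2008 SP2 instance, roughly 1 run in 4 reports no errors.
-- TABLERESULTS returns the errors as a rowset so they can be captured;
-- NO_INFOMSGS suppresses the informational "911 rows in 11 pages" message.
DECLARE @i int = 1;
WHILE @i <= 10
BEGIN
    DBCC CHECKTABLE ('dbo.CorruptTable') WITH TABLERESULTS, NO_INFOMSGS;
    SET @i += 1;
END
```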
Thanks.
September 30, 2013 at 11:55 am
CHECKDB runs a CHECKALLOC on the database, a CHECKTABLE on each table and indexed view, and a CHECKCATALOG, plus some additional checks (for example, Service Broker data) that the individual commands don't cover. So I think there is a chance that CHECKTABLE won't catch problems that a full DBCC CHECKDB will. And that's also why a DBCC CHECKDB takes more time to complete: it's a more in-depth check.
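Roughly, the decomposition looks like this (database and table names are examples):

```sql
-- Approximately what a full DBCC CHECKDB covers, piece by piece:
DBCC CHECKALLOC ('MyDatabase');      -- allocation structure consistency
DBCC CHECKCATALOG ('MyDatabase');    -- system catalog consistency
DBCC CHECKTABLE ('dbo.SomeTable');   -- ...repeated for every table and indexed view
-- ...plus extra checks (e.g. Service Broker data, indexed view contents)
-- that running the commands individually does not perform.
```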
I do not use CHECKTABLE when checking my databases; I always use DBCC CHECKDB. If the database is too big, I use the PHYSICAL_ONLY option, but I still run a full DBCC CHECKDB at some point.
Also, I use a dev server for that, not my live production server. Unless you have just a few small databases (below 50 GB), running DBCC CHECKDB will take hours and will have a negative impact on I/O performance.
So, my 2 cents:
- Back up your databases and restore them to a different server if possible (a non-production one).
- Once restored, run a DBCC CHECKDB on all of them unless they are big.
- On big databases, run DBCC CHECKDB WITH PHYSICAL_ONLY, but try to run a full DBCC CHECKDB at least once a month.
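The steps above can be sketched as follows (a sketch only; database names and file paths are examples, not a recommendation for your environment):

```sql
-- On the non-production server: restore the latest backup under a new name.
RESTORE DATABASE MyDatabase_Check
    FROM DISK = N'\\backupshare\MyDatabase.bak'
    WITH MOVE N'MyDatabase'     TO N'D:\Data\MyDatabase_Check.mdf',
         MOVE N'MyDatabase_log' TO N'D:\Log\MyDatabase_Check.ldf',
         REPLACE;

-- Smaller databases: full integrity check.
DBCC CHECKDB ('MyDatabase_Check') WITH NO_INFOMSGS;

-- Big databases: quicker physical-only check for the routine runs,
-- with a full DBCC CHECKDB at least once a month.
DBCC CHECKDB ('MyDatabase_Check') WITH PHYSICAL_ONLY;
```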
That way you are not only checking database integrity but the integrity of your backups as well. After all, you can only fix a data corruption problem without data loss if you have a good, recent backup. If you have never tested your backups and they fail, you are pretty much out of luck and it will be a very long day indeed.