October 25, 2021 at 5:17 pm
Hello,
I have recently inherited a SQL 2012 database (no backups available), running in SQL 2005 compatibility mode with FILESTREAM enabled. The first error I received was:
Error: 3043, Severity: 16, State: 1.
BACKUP 'mcs_mia' detected an error on page (3:1484827) in file 'e:\BLOB\mcs_mia.ndf'.
When I ran DBCC CHECKTABLE, I received several of these errors:
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 8315129649644765184 (type Unknown), page (3:1484800). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 133129 and -10.
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 4995698289929551872 (type Unknown), page (20047:1230258510). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 133129 and -10.
Msg 8909, Level 16, State 1, Line 1
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 8030592632002772992 (type Unknown), page ID (3:1484810) contains an incorrect page ID in its page header. The PageId in the page header = (25455:1917871468).
Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 7959390435227729920 (type Unknown), page (27753:1713398894). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 133129 and -10.
CHECKTABLE found 0 allocation errors and 32 consistency errors not associated with any single object.
Msg 8965, Level 16, State 1, Line 1
Msg 8928, Level 16, State 1, Line 1
Object ID 1265244108, index ID 1, partition ID 72057615748366336, alloc unit ID 72057597274292224 (type LOB data): Page (3:1484800) could not be processed. See other errors for details.
Msg 8928, Level 16, State 1, Line 1
Object ID 1265244108, index ID 1, partition ID 72057615748366336, alloc unit ID 72057597274292224 (type LOB data): Page (3:1484801) could not be processed. See other errors for details.
Msg 8965, Level 16, State 1, Line 1
Table error: Object ID 1265244108, index ID 1, partition ID 72057615748366336, alloc unit ID 72057597274292224 (type LOB data). The off-row data node at page (3:1484805), slot 0, text ID 1164836864 is referenced by page (3:1484373), slot 0, but was not seen in the scan.
Msg 8965, Level 16, State 1, Line 1
Table error: Object ID 1265244108, index ID 1, partition ID 72057615748366336, alloc unit ID 72057597274292224 (type LOB data). The off-row data node at page (3:1484808), slot 0, text ID 1164836864 is referenced by page (3:1484373), slot 0, but was not seen in the scan.
There are 113008 rows in 9958 pages for object "EST_REQUEST_FILES".
CHECKTABLE found 0 allocation errors and 65 consistency errors in table 'EST_REQUEST_FILES' (object ID 1265244108).
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKTABLE (mcs_mia.dbo.EST_REQUEST_FILES).
I ran the following index rebuilds, but they did not fix the problem:
ALTER INDEX [PK_EST_REQUEST_FILES] ON [dbo].[EST_REQUEST_FILES] REBUILD PARTITION = ALL WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90)
GO
ALTER INDEX [ui_RequestFilepdfIndex] ON [dbo].[EST_REQUEST_FILES] REBUILD PARTITION = ALL WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90)
GO
What can I do to fix this? It looks like the corruption is limited to just one table, and I do not have a backup. I am attaching the DBCC CHECKTABLE output for that one table.
October 26, 2021 at 11:00 am
Check the comments on the SQLskills article "CHECKDB From Every Angle: Using DBCC PAGE to find what repair will delete".
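To make that concrete, here is a minimal sketch of the approach the article describes, pointed at one of the damaged pages from the errors above. DBCC PAGE is undocumented, so treat this purely as an illustration; its per-row output is what lets you work out which rows (and which off-row text IDs) a repair would throw away.

-- Route DBCC PAGE output to the client session instead of the error log
DBCC TRACEON (3604)

-- File 3, page 1484800 is one of the pages CHECKTABLE complained about.
-- Print option 3 = page header plus per-row detail.
DBCC PAGE ('mcs_mia', 3, 1484800, 3)

DBCC TRACEOFF (3604)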
Johan
Learn to play, play to learn !
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
October 26, 2021 at 2:41 pm
Try to see if you can export the data, not back it up. Chances are you won't be able to, but try. And, if you get an error, read it carefully. Maybe it will suggest that part of the data can be exported. Other than that, you're kind of in trouble.
Backups are really handy things to have. It's really horrible when you don't have them.
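If it helps, here is a rough sketch of the kind of export attempt Grant is describing: copy the table in clustered-key ranges so a damaged LOB page only kills one range instead of the whole statement. The column name REQUEST_FILE_ID and the boundary values are placeholders; substitute the real clustering key and narrow the failing range down by trial. bcp queryout with similar range predicates is another option.

-- Straight copy first; if the scan touches a damaged page this fails and rolls back
SELECT *
INTO dbo.EST_REQUEST_FILES_copy        -- hypothetical target table
FROM dbo.EST_REQUEST_FILES

-- If that dies, salvage the rows around the suspect key range
SELECT *
INTO dbo.EST_REQUEST_FILES_lo
FROM dbo.EST_REQUEST_FILES
WHERE REQUEST_FILE_ID < 50000          -- placeholder boundary below the bad range

SELECT *
INTO dbo.EST_REQUEST_FILES_hi
FROM dbo.EST_REQUEST_FILES
WHERE REQUEST_FILE_ID >= 51000         -- placeholder boundary above the bad range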
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
November 5, 2021 at 5:54 pm
Thank you everyone. I was able to restore the corrupted database on another server (I used CONTINUE_AFTER_ERROR in the restore statement). I tried everything: exporting the data (which I couldn't do), rebuilding indexes, and dropping and recreating indexes (except the primary key, which I couldn't drop and recreate because it was clustered and part of a full-text key). I tried REPAIR_REBUILD first, but it didn't fix the problem, so I ended up issuing this:
-- Repair requires single-user mode
ALTER DATABASE mcs_mia SET SINGLE_USER WITH ROLLBACK IMMEDIATE

BEGIN TRAN
DBCC CHECKTABLE ('dbo.est_request_files', REPAIR_ALLOW_DATA_LOSS)
ROLLBACK TRAN
-- COMMIT TRAN instead if everything looks good
After I ran this on the restored copy, I compared the table in the production database to the repaired copy and found that only one record would be affected. I told the client and he accepted that.
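For anyone else in the same spot, here is a rough sketch of the copy-to-a-test-server step described above. Paths and logical file names are placeholders, and the backup step is my assumption: since the original error 3043 means a normal backup fails on the damaged pages, CONTINUE_AFTER_ERROR is needed on both sides.

-- On the production server: let the backup run past the damaged pages
BACKUP DATABASE mcs_mia
TO DISK = N'E:\backup\mcs_mia_corrupt.bak'     -- placeholder path
WITH CONTINUE_AFTER_ERROR, INIT

-- On the test server: bring the damaged copy up for the repair rehearsal
RESTORE DATABASE mcs_mia
FROM DISK = N'E:\backup\mcs_mia_corrupt.bak'
WITH CONTINUE_AFTER_ERROR,
     MOVE 'mcs_mia'     TO N'D:\data\mcs_mia.mdf',   -- placeholder logical names; add MOVEs
     MOVE 'mcs_mia_log' TO N'D:\data\mcs_mia.ldf'    -- for the BLOB .ndf and FILESTREAM data too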
November 5, 2021 at 6:32 pm
You should play the lottery tonight. 😀 You just proved that you're one of the luckiest people on the whole planet. 😀
With that (and I know you get it but I have to say it out loud), make sure the database is in the FULL Recovery Model, take a full backup, and get the log file chain started with log file backups. Then schedule the backups to run using a good plan considering RTO and RPO. You should also start running at least DBCC CHECKDB WITH PHYSICAL_ONLY and NOT delete any backups until you have proof that you don't have any corruption.
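For anyone picking this up cold, here is a minimal sketch of that checklist in T-SQL. Paths are placeholders, and the full and log backups would normally be scheduled through Agent jobs rather than run ad hoc like this.

-- Put the database in FULL recovery and start the backup chain
ALTER DATABASE mcs_mia SET RECOVERY FULL

BACKUP DATABASE mcs_mia
TO DISK = N'E:\backup\mcs_mia_full.bak'       -- placeholder path
WITH CHECKSUM, INIT

-- Log backups keep the chain going; schedule them to match your RPO
BACKUP LOG mcs_mia
TO DISK = N'E:\backup\mcs_mia_log.trn'
WITH CHECKSUM

-- Lightweight corruption check; run a full CHECKDB when the maintenance window allows
DBCC CHECKDB (mcs_mia) WITH PHYSICAL_ONLY, NO_INFOMSGS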
I also regularly check and report on the presence of rows in the msdb.dbo.suspect_pages table as an "early warning". The table can be updated when queries try to read from a bad page.
I haven't done so yet but I'm thinking of including a step to record the number of rows in that table after every backup (especially in my log file backup proc, because it runs every 15 minutes). That will help me determine when corruption happened in a lot of cases, and it will be impervious to the "roll-off" of rows that msdb.dbo.suspect_pages is subject to (the table can only hold 1000 rows).
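Here is a sketch of the kind of early-warning step Jeff is describing; the logging table dbo.SuspectPageLog is made up for the example.

-- What SQL Server has flagged so far
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages

-- Hypothetical logging step to tack onto the end of a backup job, so the count
-- is captured before suspect_pages rolls rows off (it holds at most 1000 rows)
IF OBJECT_ID('dbo.SuspectPageLog') IS NULL
    CREATE TABLE dbo.SuspectPageLog
    (
        LoggedAt  DATETIME NOT NULL DEFAULT GETDATE(),
        RowsFound INT      NOT NULL
    )

INSERT INTO dbo.SuspectPageLog (RowsFound)
SELECT COUNT(*) FROM msdb.dbo.suspect_pages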
--Jeff Moden
Change is inevitable... Change for the better is not.
November 8, 2021 at 2:00 pm
Yeah, go get that lottery ticket now.
Happy to hear it all worked out. As Jeff says, get going on those backups. In addition to all his great info, let me add one more thing.
You should test your backups. This article I wrote is old at this point, but the information in it is as valid as ever. The last thing you want is to set up backups and then find that they're broken right when you need them most.
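For reference, a quick sketch of the two levels of checking; paths and names are placeholders. RESTORE VERIFYONLY only proves the backup file is readable and complete; the real proof is a test restore followed by CHECKDB.

-- Cheap check: is the backup readable? (CHECKSUM assumes the backup was taken WITH CHECKSUM)
RESTORE VERIFYONLY
FROM DISK = N'E:\backup\mcs_mia_full.bak'     -- placeholder path
WITH CHECKSUM

-- Real check: restore under another name on a test server, then run CHECKDB
RESTORE DATABASE mcs_mia_restoretest
FROM DISK = N'E:\backup\mcs_mia_full.bak'
WITH MOVE 'mcs_mia'     TO N'D:\data\mcs_mia_restoretest.mdf',   -- placeholder logical names;
     MOVE 'mcs_mia_log' TO N'D:\data\mcs_mia_restoretest.ldf'    -- add MOVEs for the .ndf/FILESTREAM files

DBCC CHECKDB (mcs_mia_restoretest) WITH NO_INFOMSGS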
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning