May 2, 2016 at 5:00 am
Okay, so I've been in this job 10 years and I finally came across my first bad backup. Last night's differential won't restore. We're getting the below error message:
Msg 3203, Sev 16, State 1, Line 1 : Read on "\\BackupDrive\Path\Database\20160501\MyDB_DIFF_20160501_1900.BAK" failed: 13(The data is invalid.) [SQLSTATE 42000]
So while we're taking a copy-only FULL backup to see if that works, I'll run DBCC CHECKDB to check the database itself for problems. But before I do, I'm wondering if there's anything else I should check.
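For reference, what we're kicking off looks roughly like this (the file name is just a placeholder, not our actual naming convention):
-- Copy-only FULL backup so we don't disturb the differential base.
BACKUP DATABASE MyDB
TO DISK = N'\\BackupDrive\Path\Database\MyDB_COPYONLY.BAK'
WITH COPY_ONLY, CHECKSUM, STATS = 10;
-- Quick sanity check that the new file is at least readable.
RESTORE VERIFYONLY
FROM DISK = N'\\BackupDrive\Path\Database\MyDB_COPYONLY.BAK'
WITH CHECKSUM;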
Any thoughts?
May 2, 2016 at 6:45 am
First thing I would do would be running
DBCC CHECKDB ('MyDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
😎
I find the SQLSTATE 42000 slightly odd; IIRC that's a "Syntax error or access violation".
May 2, 2016 at 6:53 am
Eirikur Eiriksson (5/2/2016)
First thing I would do would be running
DBCC CHECKDB ('MyDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
😎
Just finished that. It yielded nothing but "commands completed successfully," which tells me there are no error messages.
I find the SQLSTATE 42000 slightly odd; IIRC that's a "Syntax error or access violation".
I'm hoping the problem is one of those "the network went funky during the backup" issues. Corporate backs up all databases to a UNC share, which requires network access. If that's the case, then I'm less concerned about one errant backup, so long as future backups work. Since all the databases back up to the same UNC share, just different folders, and the others restored this morning without error, I believe the network error could have been momentary and shouldn't affect future backups.
Of course, it could be a bad sector on the destination drive as well...
So many possibilities for what the problem could be. But as I continue to run DBCC CHECKDB with the various checks (physical, logical, data purity), I'm thinking it's not a database corruption issue.
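In case anyone wants specifics, the checks I'm working through are roughly these:
-- Physical structure only (page-level, checksums).
DBCC CHECKDB ('MyDB') WITH PHYSICAL_ONLY, NO_INFOMSGS;
-- Full logical and allocation checks.
DBCC CHECKDB ('MyDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;
-- Column-value (data purity) checks.
DBCC CHECKDB ('MyDB') WITH DATA_PURITY, NO_INFOMSGS;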
May 2, 2016 at 7:29 am
This happened to me once, just as you describe, but with a log backup. Another DBA accidentally deleted 50,000 rows during business hours, and we had to do a side restore to a point in time to try to recover them. However, when we got to 7:30 AM, we hit a brick wall: a bad file, just as you described. We too were backing up over the network. The file looked fine from RESTORE HEADERONLY and RESTORE VERIFYONLY. We called MS to look at the situation, and they said that even though it looked fine, the backup was damaged and they couldn't fix it. So that data was pretty much lost.
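For what it's worth, the header and verify checks were just the standard ones (the path and file name here are placeholders, not the real ones):
-- Both of these came back clean even though the actual restore failed.
RESTORE HEADERONLY FROM DISK = N'\\BackupServer\Share\MyDB_LOG.trn';
RESTORE VERIFYONLY FROM DISK = N'\\BackupServer\Share\MyDB_LOG.trn';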
We never saw that problem again; it was a one-time thing. If I were you, I might try some proactive restores of all backups for a time to verify that there is no systematic problem, but if not then I think you can safely consider it a one time glitch.
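A proactive restore test doesn't need to be fancy; something along these lines, scripted against the newest full and differential, does the job (database name, logical file names, and paths are all placeholders):
-- Restore the latest full + differential to a throwaway copy, check it, drop it.
RESTORE DATABASE MyDB_RestoreTest
FROM DISK = N'\\BackupServer\Share\MyDB_FULL.bak'
WITH MOVE 'MyDB' TO 'D:\RestoreTest\MyDB_RestoreTest.mdf',
     MOVE 'MyDB_log' TO 'D:\RestoreTest\MyDB_RestoreTest_log.ldf',
     NORECOVERY, STATS = 10;
RESTORE DATABASE MyDB_RestoreTest
FROM DISK = N'\\BackupServer\Share\MyDB_DIFF.bak'
WITH RECOVERY, STATS = 10;
DBCC CHECKDB ('MyDB_RestoreTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;
DROP DATABASE MyDB_RestoreTest;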
May 2, 2016 at 7:33 am
jeff.mason (5/2/2016)
If I were you, I might try some proactive restores of all backups for a time to verify that there is no systematic problem, but if not then I think you can safely consider it a one time glitch.
Actually we already do that on a daily basis, which is how we discovered this backup was bad. But thank you for the suggestion. @=)
May 2, 2016 at 7:38 am
Brandie Tarvin (5/2/2016)
jeff.mason (5/2/2016)
If I were you, I might try some proactive restores of all backups for a time to verify that there is no systematic problem, but if not then I think you can safely consider it a one time glitch.
Actually we already do that on a daily basis, which is how we discovered this backup was bad. But thank you for the suggestion. @=)
Well, in my opinion, if you already do that and have only seen the one bad backup, you already have evidence it's a one-shot problem. I think you can breathe easy.