With apologies to Timex, I just wanted to relate a minor incident that happened with one of my mirrored databases last night. This database needed to grow its primary data file, and the autogrow increment was set to 5GB (I have Windows Instant File Initialization enabled). Unfortunately, there was less than 5GB of available disk space on the drive array where this data file lived on the mirror. I know, bad DBA…
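If you want to see what the autogrow increments actually are across an instance, a quick query like this against sys.master_files does the job (the growth column is stored in 8KB pages when is_percent_growth is 0, so it needs converting to MB):

-- Autogrow settings for every data and log file on the instance
SELECT DB_NAME(database_id) AS [Database],
       name AS [Logical File Name],
       physical_name,
       type_desc,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(12)) + '%'
            ELSE CAST((growth * 8) / 1024 AS varchar(12)) + ' MB'
       END AS [Autogrow Increment],
       (size * 8) / 1024 AS [Current Size (MB)]
FROM sys.master_files
ORDER BY [Database], type_desc;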
The data file grew on the principal, but when database mirroring sent the file growth command to the mirror side, it was not able to complete the operation, and mirroring was suspended for that database. I first noticed this when I saw that my log reuse wait description for that database was DATABASE_MIRRORING, and my transaction log was 71% full on the principal. I could also see that I had about 47GB of unsent log on the principal.
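If you want to check these things from a query window rather than the GUI, something like this shows the log reuse wait, the log space used, and the size of the unsent log (the 'Log Send Queue KB' counter lives under the Database Mirroring performance object, and the object_name prefix will be different on a named instance):

-- Why can't the transaction log be reused?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'ngfulltext1';

-- How full is each transaction log? (one row per database)
DBCC SQLPERF(LOGSPACE);

-- How much unsent log is queued up on the principal? (in KB)
SELECT instance_name AS [Database], cntr_value AS [Log Send Queue KB]
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%Database Mirroring%'
  AND counter_name LIKE N'Log Send Queue KB%'
  AND instance_name = N'ngfulltext1';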
The first thing I tried was resuming the mirror by issuing this command:
ALTER DATABASE ngfulltext1 SET PARTNER RESUME;
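A quick sanity check at this point is to look at sys.database_mirroring and see whether the session actually came out of the suspended state:

-- Check the current mirroring state and role for this database
SELECT DB_NAME(database_id) AS [Database],
       mirroring_state_desc,
       mirroring_role_desc,
       mirroring_partner_name
FROM sys.database_mirroring
WHERE DB_NAME(database_id) = N'ngfulltext1';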
The RESUME command had no effect (the session stayed suspended), so I then looked in the SQL Error Log on the principal and saw this error:
'TCP://SQLMIRROR01.xxx.xxxxxx.com:5022', the remote mirroring partner for database 'ngfulltext1', encountered error 5149, status 3, severity 16. Database mirroring has been suspended. Resolve the error on the remote server and resume mirroring, or remove mirroring and re-establish the mirror server instance.
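If you would rather search the error log from T-SQL than scroll through it in SSMS, the (undocumented, but widely used) sp_readerrorlog procedure can filter on a search string:

-- Search the current SQL Server Error Log (archive 0, log type 1) for mirroring messages
EXEC sp_readerrorlog 0, 1, N'mirroring';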
Next, I looked in the SQL Error Log on the instance where the mirror was located, and saw these two errors:
MODIFY FILE encountered operating system error 112(There is not enough space on the disk.) while attempting to expand the physical file 'P:\SQLData\ngfulltext1.mdf'.
P:\SQLData\ngfulltext1.mdf: Operating system error 112(There is not enough space on the disk.) encountered.
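On SQL Server 2008, an easy way to confirm the free space on each drive from inside SQL Server is good old xp_fixeddrives (newer versions also have sys.dm_os_volume_stats if you want a per-database-file view):

-- Free space (in MB) on each fixed drive, as seen by the SQL Server service account
EXEC master.dbo.xp_fixeddrives;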
So duh, I was out of disk space on the P: drive. Again, dumb DBA, but luckily this was easy to fix by deleting some old backup files. After I did this, I resumed the mirroring session, which worked this time. SQL Server started rapidly sending the unsent log from the principal to the mirror (at about 58MB/sec).
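If you are curious about the send rate, the 'Log Bytes Sent/sec' counter is a cumulative value in sys.dm_os_performance_counters, so you have to take two samples a known interval apart and do the math yourself; a rough sketch:

-- Rough measurement of the mirroring log send rate for one database.
-- The per-second counters in sys.dm_os_performance_counters are cumulative,
-- so sample twice and divide the difference by the sample interval.
DECLARE @first bigint, @second bigint;

SELECT @first = cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%Database Mirroring%'
  AND counter_name LIKE N'Log Bytes Sent/sec%'
  AND instance_name = N'ngfulltext1';

WAITFOR DELAY '00:00:10';  -- 10 second sample interval

SELECT @second = cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%Database Mirroring%'
  AND counter_name LIKE N'Log Bytes Sent/sec%'
  AND instance_name = N'ngfulltext1';

SELECT (@second - @first) / 10.0 / 1048576.0 AS [Log Send Rate (MB/sec)];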
The way database mirroring behaves in this situation is that it sends a pretty decent chunk of the unsent log to the mirror and then stops the send. It can generally send the log much faster than the mirror can restore it (depending on the I/O capacity of the mirror), so as the unrestored log grows to a certain size, the principal stops sending the unsent log for a few minutes and waits for the mirror to “digest” that chunk (by restoring the log that has already been sent over). As the mirror works the unrestored log back down to zero, the principal starts sending the unsent log again. This cycle continues until all of the unsent log has been sent to the mirror.
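You can actually watch this cycle happen by keeping an eye on two counters: 'Log Send Queue KB' on the principal (the unsent log) and 'Redo Queue KB' on the mirror (the unrestored log). Running something like this on each instance shows which phase you are in:

-- On the principal: Log Send Queue KB = log not yet sent to the mirror
-- On the mirror:    Redo Queue KB     = log received but not yet restored
SELECT instance_name AS [Database],
       RTRIM(counter_name) AS [Counter],
       cntr_value AS [KB]
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%Database Mirroring%'
  AND (counter_name LIKE N'Log Send Queue KB%'
       OR counter_name LIKE N'Redo Queue KB%')
  AND instance_name = N'ngfulltext1';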
It ended up taking about 90 minutes to send and restore the entire unsent log and get the databases synchronized again. I was once again impressed that SQL Server 2008 database mirroring survived this abuse with very little complaint, and no downtime.
Situation at 8:56AM:
Situation at 9:41AM:
Network activity during one of the unsent log send periods: