December 1, 2009 at 7:19 am
atramard (12/1/2009)
I.e. you get a disk-out-of-space alert, what can you do to sort the server out once you've:
- shrunk any files you can
- moved DBs to other drives where possible...
Well, I wouldn't shrink any databases at all. The effects on them would be nasty.
If the log has filled the drive, the first thing to check is the % used. This is because, when the log fills up, active transactions will immediately be rolled back. It may be that by the time you get to the server, the log is large, but not full. If that's the case, do a once-off shrink of the log file.
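In T-SQL terms, checking the % used and doing that once-off shrink looks something like this (a minimal sketch; the database name YourDB and the logical log file name YourDB_log are placeholders, not from the thread):

-- Report size and percentage used of every database's log
DBCC SQLPERF(LOGSPACE);

-- If the log is large but mostly empty, shrink it once
USE YourDB;
DBCC SHRINKFILE (YourDB_log, 1024); -- target size in MB; pick a sensible working size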
If the log is still full, check the log_reuse_wait_desc in sys.databases, see what's preventing the log space from being reused and take appropriate action.
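Checking that column is a one-liner (sketch; substitute your database name):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDB'; -- placeholder name

For example, LOG_BACKUP means the log is waiting for a log backup, and ACTIVE_TRANSACTION means an open transaction is pinning the space.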
If the log's too full to allow that, I would detach that database (not any other), move the log file to a drive that has more free space (even an external drive works), reattach the database, take whatever action is needed to reuse the space in the log, then do a once-off shrink of the log file, detach the DB again, move the log file back to where it should be and reattach the database.
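A sketch of the detach/attach legs, with hypothetical names and paths (YourDB and the file locations are assumptions, not from the post):

-- Detach (all connections must be closed first)
EXEC sp_detach_db @dbname = 'YourDB';

-- ...move YourDB_log.ldf to the roomier drive at the OS level, then reattach:
CREATE DATABASE YourDB
ON (FILENAME = 'C:\Data\YourDB.mdf'),
   (FILENAME = 'D:\Temp\YourDB_log.ldf') -- the log's temporary home
FOR ATTACH;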
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 1, 2009 at 7:21 am
dmoldovan (11/30/2009)
Also if you do bulk data import / inserts, change the recovery model to "bulk logged" or "simple". BOL will guide you through the steps you need to take in this case.
Changing the database to the simple recovery model would not help in this case, where a single transaction is trying to delete a large number of rows.
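For reference, the change dmoldovan mentions is a single statement (sketch; YourDB is a placeholder), but note that the recovery model only controls how log space is reused between transactions, not how much log one huge transaction writes:

ALTER DATABASE YourDB SET RECOVERY BULK_LOGGED; -- or SIMPLE
-- A single giant DELETE is still fully logged (and reserves rollback space
-- for every row), regardless of the recovery model.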
Blog -- LearnSQLWithBru
Join on Facebook Page Facebook.com/LearnSQLWithBru
Twitter -- BruMedishetty
December 1, 2009 at 7:28 am
atramard (12/1/2009)
I have put a dummy 1 GB file on the drive that I can delete to release space (last resort), but that may not work as the log file could still grow, and capping the log file would not sort the issue...
By doing this, you are leaving even less space available for the log file.
Regarding your question of what you should do in such a scenario:
1. Check that the delete job runs at regular intervals from now on.
2. If the delete job later runs against a large number of records and the log file grows again, add another log file on a different drive (see the sketch below) so that the transaction (deleting a large number of rows) can succeed. That could happen at any time, so keep an eye on your disk space alerts and be ready when the issue comes...
Which one would you like to choose?
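Adding the second log file is one statement (sketch; database name, logical name, path and sizes are all placeholders):

ALTER DATABASE YourDB
ADD LOG FILE
( NAME = YourDB_log2,
  FILENAME = 'E:\SQLLogs\YourDB_log2.ldf',
  SIZE = 1024MB,
  FILEGROWTH = 512MB );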
Blog -- LearnSQLWithBru
Join on Facebook Page Facebook.com/LearnSQLWithBru
Twitter -- BruMedishetty
December 1, 2009 at 7:50 am
Bru Medishetty (12/1/2009)
dmoldovan (11/30/2009)
Also if you do bulk data import / inserts, change the recovery model to "bulk logged" or "simple". BOL will guide you through the steps you need to take in this case.
Changing the database to the simple recovery model would not help in this case, where a single transaction is trying to delete a large number of rows.
Adding to what Bru has stated, if you read one of the subsequent posts by the OP, the database is already using the SIMPLE recovery model.
December 1, 2009 at 12:44 pm
Bru Medishetty (12/1/2009)
dmoldovan (11/30/2009)
Also if you do bulk data import / inserts, change the recovery model to "bulk logged" or "simple". BOL will guide you through the steps you need to take in this case.
Changing the database to the simple recovery model would not help in this case, where a single transaction is trying to delete a large number of rows.
Bru, in this case - no.
Please note the "if"...
December 1, 2009 at 12:54 pm
dmoldovan (12/1/2009)
Bru Medishetty (12/1/2009)
dmoldovan (11/30/2009)
Also if you do bulk data import / inserts, change the recovery model to "bulk logged" or "simple". BOL will guide you through the steps you need to take in this case.
Changing the database to the simple recovery model would not help in this case, where a single transaction is trying to delete a large number of rows.
Bru, in this case - no.
Please note the "if"...
I did see the "if", but changing to the Simple or Bulk Logged recovery model would still not prevent the log file from becoming huge. It will always grow at least as large as the largest single transaction performed.
Blog -- LearnSQLWithBru
Join on Facebook Page Facebook.com/LearnSQLWithBru
Twitter -- BruMedishetty
December 2, 2009 at 1:53 am
So guys, basically, if your transaction log is as big as your disk allows and the transaction has not completed, there is not a lot you can do, as:
- you can't detach the DB (as this will wait for the transaction to complete)
- you can't roll back as there is no extra log file space.
If you can't add extra disk space, the only thing you can do is kill that DB to preserve the other DBs that are on the same server?
December 2, 2009 at 5:45 am
atramard (12/2/2009)
So guys, basically, if your transaction log is as big as your disk allows and the transaction has not completed, there is not a lot you can do, as:
- you can't detach the DB (as this will wait for the transaction to complete)
- you can't roll back as there is no extra log file space.
If you can't add extra disk space, the only thing you can do is kill that DB to preserve the other DBs that are on the same server?
Not at all. You have to modify the procedure to delete records in smaller batches (transactions). That is why I asked if you could post the code, and why I also pointed you to my article on deleting a large number of records.
You said the existing procedure worked fine in production when run on a regular basis. That apparently wasn't the case in development. It is obvious that there are many records to be deleted in development, and that they can't be deleted in a single transaction without exhausting your available disk space.
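The batching pattern such articles describe looks roughly like this (a sketch only; the table dbo.BigTable, the purge condition and the batch size are hypothetical, not from the thread):

-- Delete in small transactions so log space can be reused between batches
DECLARE @batch INT;
SET @batch = 10000; -- tune to how much log growth you can tolerate

WHILE 1 = 1
BEGIN
    DELETE TOP (@batch)
    FROM dbo.BigTable
    WHERE CreatedDate < '2009-01-01'; -- hypothetical purge condition

    IF @@ROWCOUNT = 0 BREAK; -- nothing left to delete

    CHECKPOINT; -- simple recovery: lets the batch's log space be reused
    -- (under full recovery you would take log backups instead)
END;

Each iteration commits on its own, so the log never has to hold more than one batch's worth of changes plus its rollback reservation.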
December 2, 2009 at 5:52 am
atramard (12/2/2009)
So guys, basically, if your transaction log is as big as your disk allows and the transaction has not completed, there is not a lot you can do, as:
- you can't detach the DB (as this will wait for the transaction to complete)
- you can't roll back as there is no extra log file space.
I take it that you did not read my post on the previous page?
GilaMonster (12/1/2009)
atramard (12/1/2009)
I.e. you get a disk-out-of-space alert, what can you do to sort the server out once you've:
- shrunk any files you can
- moved DBs to other drives where possible...
Well, I wouldn't shrink any databases at all. The effects on them would be nasty.
If the log has filled the drive, the first thing to check is the % used. This is because, when the log fills up, active transactions will immediately be rolled back. It may be that by the time you get to the server, the log is large, but not full. If that's the case, do a once-off shrink of the log file.
If the log is still full, check the log_reuse_wait_desc in sys.databases, see what's preventing the log space from being reused and take appropriate action.
If the log's too full to allow that, I would detach that database (not any other), move the log file to a drive that has more free space (even an external drive works), reattach the database, take whatever action is needed to reuse the space in the log, then do a once-off shrink of the log file, detach the DB again, move the log file back to where it should be and reattach the database.
If a transaction is in progress and the log fills up, the transaction will FAIL and be automatically rolled back. SQL's smart enough to reserve space in the log as it processes a transaction to ensure that it can always roll back (because if it ever got into a state where it couldn't, the DB would have to be marked suspect).
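You can actually watch that reservation happen while a transaction runs; a sketch using the transaction DMVs (SQL 2005+; YourDB is a placeholder name):

-- Log space used and reserved (for rollback) by currently open transactions
SELECT st.session_id,
       dt.database_transaction_log_bytes_used,
       dt.database_transaction_log_bytes_reserved
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st
     ON dt.transaction_id = st.transaction_id
WHERE dt.database_id = DB_ID('YourDB');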
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 2, 2009 at 9:30 am
Gail
Are you sure this actually occurs? My log file grew from ~500 MB to over 70 GB (in 3 h), and then another 4 h later it had still not rolled back.
You think I should have let it run longer and it would have sorted itself out... not too sure about that.
Also, this would imply that during that time, if any other DBs need to grow, they won't be able to (unless moved...).
Also, I did try to detach the DB at the time to move the log to a bigger drive, but I was blocked by the delete process, so the detach would only occur once the delete transaction had ended.
December 2, 2009 at 9:44 am
I have aborted a transaction in the past that had been running for over an hour, and it took even longer to roll back, so yes, it is possible that had you given it more time it would have completed the rollback.
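If you have killed a session and want to know how far the rollback has got, SQL Server will report progress (sketch; session id 53 is hypothetical):

KILL 53 WITH STATUSONLY;
-- For a session already being killed/rolled back, this reports something like
-- "Estimated rollback completion: 45%. Estimated time remaining: 600 seconds."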
Back to the problem of deleting records. Can you post the table defs and the code? Have you read the article I suggested?
December 2, 2009 at 12:21 pm
atramard (12/2/2009)
Are you sure this actually occurs? My log file grew from ~500 MB to over 70 GB (in 3 h), and then another 4 h later it had still not rolled back.
Absolutely sure. But don't take my word for it; from Books Online:
Transaction Log Logical Architecture
Log records for data modifications record either the logical operation performed or they record the before and after images of the modified data. The before image is a copy of the data before the operation is performed; the after image is a copy of the data after the operation has been performed.
Rollback operations are also logged. Each transaction reserves space on the transaction log to make sure that enough log space exists to support a rollback that is caused by either an explicit rollback statement or if an error is encountered. The amount of space reserved depends on the operations performed in the transaction, but generally is equal to the amount of space used to log each operation. This reserved space is freed when the transaction is completed.
You think I should have let it run longer and it would have sorted itself out... not too sure about that.
It would have, eventually. How long is another matter. Rollbacks tend to take longer than the modifications that they are rolling back.
Of course, no other data modifications could occur, as the log is full.
Also, this would imply that during that time, if any other DBs need to grow, they won't be able to (unless moved...).
Correct.
Also, I did try to detach the DB at the time to move the log to a bigger drive, but I was blocked by the delete process, so the detach would only occur once the delete transaction had ended.
True. All connections have to be terminated to detach a DB, and a connection can't be terminated while it's rolling back (for good reason).
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 3, 2009 at 2:25 am
Thank you all,
Thanks, Gail, for your answer.
Now that you've pointed it out, I recall that (some time ago) I had a log file that was capped in size, and the transaction did roll back once it was full, but that was a couple of MB, not 70 GB.
It looks like I am going to have to limit that log growth... yippee, lucky me, 300 DBs to review...
Thanks again
December 3, 2009 at 2:53 am
atramard (12/3/2009)
It looks like I am going to have to limit that log growth...
Why?
What you should have is alerts that warn you when the disk is getting full (80% as a starting point) so that you can intervene before all current transactions roll back and before there's actually a problem. That, combined with checking code and ensuring that there isn't anything that could cause long-running transactions, is what you should be looking at, not limiting the log file so that errors occur earlier.
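One way to feed such an alert from inside SQL Server, as a sketch (xp_fixeddrives is undocumented but long-standing; the ~10 GB threshold below is just an assumed figure, tune it to your drives):

-- Collect free space per drive and flag anything under the threshold
CREATE TABLE #drives (drive CHAR(1), mb_free INT);
INSERT INTO #drives (drive, mb_free)
EXEC master.dbo.xp_fixeddrives;

SELECT drive, mb_free
FROM #drives
WHERE mb_free < 10240; -- assumed alert threshold: ~10 GB free

DROP TABLE #drives;

Scheduled from a SQL Agent job, a non-empty result set can drive an email notification.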
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 3, 2009 at 5:43 am
As Gail Shaw has pointed out, by limiting your log file size you are asking for similar problems more often.
It is a good point to revisit the code that caused the log file growth and make sure it does the same task in small chunks; that point has been reiterated multiple times by now.
Of course, alerting is the best way to be prepared for the event.
Blog -- LearnSQLWithBru
Join on Facebook Page Facebook.com/LearnSQLWithBru
Twitter -- BruMedishetty