July 29, 2003 at 6:03 am
I have issued the 'backup log dbname with no_log' statement while a transaction is filling up the log. It did nothing. The log is still growing. Any idea? Thanks.
July 29, 2003 at 6:14 am
The backup will do nothing while that transaction is open; otherwise transactional integrity would be compromised. You've got to either kill the transaction or increase the log's maximum size and wait it out.
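As a rough sketch of both options (the spid and the log file's logical name here are hypothetical; adjust to your system):

-- Find the spid of the long-running transaction, then kill it.
-- Note the rollback itself can take a long time.
sp_who2
KILL 53

-- Or raise the log file's size cap and let the transaction finish:
ALTER DATABASE dbname
MODIFY FILE (NAME = dbname_log, MAXSIZE = 10GB)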
Cheers,
- Mark
July 29, 2003 at 7:04 am
Remember, WITH TRUNCATE_ONLY or NO_LOG only truncates the INACTIVE portion of the log. So, if you have an active transaction nothing will happen (as Mark pointed out).
-SQLBill
July 29, 2003 at 7:05 am
Thank you for your help, Mark. I increased the transaction log and am waiting for the transaction to end. Is there a way to tell how much of the transaction has finished and how much is left? Thank you.
July 29, 2003 at 3:07 pm
Obviously too late now... but I often put some sort of logging in really long transactions or stored procs. Logging can be either to the SQL error log (raiserror ... 0, 1... with log) or updating a table with the latest status every now and then.
If you know what the transaction's doing you can maybe do some dirty (nolock) reads from another session to see how much it's processed.
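For example, a rough progress check from a second session (the table name is hypothetical) could be:

-- Dirty read from another session; the count includes uncommitted
-- deletes, so treat it as an estimate only.
SELECT COUNT(*) AS rows_remaining
FROM dbo.BigTable WITH (NOLOCK)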
For the future, if at all possible, break the big transaction into repeated small ones. Eg. instead of processing 10 million rows in 1 trans, do 10,000 at a time, committing after each "batch".
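A minimal sketch of that batching idea (SQL 2000 style, using SET ROWCOUNT; the table and WHERE clause are hypothetical):

-- Delete in batches of 10,000; each DELETE commits on its own,
-- so the inactive log can be truncated between batches.
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigTable WHERE ArchiveDate < '20030101'
    IF @@ROWCOUNT = 0 BREAK   -- nothing left to delete
END
SET ROWCOUNT 0   -- reset to unlimited

With periodic log backups (or simple recovery), the log stays small because no single batch keeps it active for long.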
Cheers,
- Mark
July 30, 2003 at 2:33 pm
The transaction finished in 10 hours. Is that normal for deleting 14 million rows?
How do you use (raiserror ... 0, 1... with log) and dirty (nolock) reads to show the progress of the execution?
Thank you, Mark.
July 30, 2003 at 3:02 pm
Some suggestions:
1) if you're using one statement to delete all 14 million rows, try to avoid doing so; run a series of smaller batched deletes instead.
2) if you're using a cursor or loop process, try to keep a count of how many you've deleted, and report it every now and then. Eg.
set @count = @count + 1
if @count % 10000 = 0 raiserror('%d rows processed so far',0,1,@count) with nowait
3) if the 14 million constitute the bulk of the table's rows, consider options like: a) export the to-be-retained rows to another table, truncate the original, and re-import the saved rows (see the sketch after this list); or b) rename the table, create a new table, import the good rows from the original, and drop the original.
4) try to be more regular with deletes/archiving to spread your load. 10 hours sounds like it cannot avoid clashing with other activities.
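A rough sketch of option 3a (all object names hypothetical; test on a copy first):

-- Save the rows to keep, truncate (minimally logged), reload.
SELECT * INTO dbo.BigTable_keep
FROM dbo.BigTable
WHERE KeepFlag = 1   -- whatever identifies the rows to retain

TRUNCATE TABLE dbo.BigTable

-- If the table has an identity column, you'll need
-- SET IDENTITY_INSERT dbo.BigTable ON around this insert.
INSERT INTO dbo.BigTable
SELECT * FROM dbo.BigTable_keep

DROP TABLE dbo.BigTable_keep

TRUNCATE TABLE only logs the page deallocations, so it's far cheaper than deleting 14 million rows one by one. Note it will fail if the table is referenced by foreign keys.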
Cheers,
- Mark
July 30, 2003 at 3:11 pm
Thanks, Mark, for the suggestions. All the data is in one table. With the batched-delete method, will indexes on the table be a factor in response time?