April 30, 2004 at 11:37 am
Hi Everybody!
I'm running a DTS package and it's failing constantly. The error message is "The log file for database 'VVV' is full. Back up the transaction log for the database to free up some log space."
The log file is set to unrestricted growth and there are 30 GB free on the hard disk. The log file itself was under 1 MB. I increased the file to 100 MB and I still receive the same error. What is the solution?
Thank you
April 30, 2004 at 1:28 pm
Are you growing the log file by percentage or MB? What size are you growing it by?
This can happen if the log file can't grow fast enough for the incoming transactions.
Another thing, how big is the data that you are importing? If you are bringing in 5GB of data, not only do you need the space for 5GB of data but you need the space for 5GB of transaction log. Remember, the TL keeps track of all transactions in case it needs to roll them back, etc....the DTS you are doing is a transaction.
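To see how much of the log is actually in use while the package runs, something like this can help (a quick sketch; the database name VVV comes from the original post):

```sql
-- Show log size and percent used for every database on the server
DBCC SQLPERF(LOGSPACE);

-- Show the data and log files for the problem database,
-- including their logical names and growth settings
USE VVV;
EXEC sp_helpfile;
```

Running DBCC SQLPERF(LOGSPACE) while the DTS package executes shows whether the log is genuinely filling up or whether the file simply can't grow fast enough.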
-SQLBill
May 3, 2004 at 6:16 am
The data file is 750 MB. The log file is 4 MB.
The transaction log file growth is set to 1500 MB. (I tried 500 MB at first; it didn't work.)
There are 30 GB free on the HDD.
Should I set the file growth to "By percent" instead?
What is the problem with this package?
Thx
May 3, 2004 at 10:28 am
I've run into this same problem in the past. You might consider truncating the log on Checkpoint. Try:
sp_dboption 'database_name', 'trunc. log on chkpt.', 'true'
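For the database in this thread, that would look like the following (a sketch for SQL Server 7.0/2000; on SQL Server 2000 this option is the same thing as the SIMPLE recovery model, so be aware that transaction log backups will no longer be possible while it is on):

```sql
-- Turn on truncate-on-checkpoint for the VVV database
EXEC sp_dboption 'VVV', 'trunc. log on chkpt.', 'true';

-- Force a checkpoint so committed log records are cleared right away
USE VVV;
CHECKPOINT;
```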
May 12, 2004 at 10:59 am
Is your hard drive partitioned? You could be running out of space in the partition and need to move the log to another partition.
The transaction log has to keep track of every change made and how to undo it in case of a rollback. If you are doing lots of changes at one time (bulk inserts/updates), the log could be growing faster than the 1500 MB increment allows...or it could be failing before it grows. For example: the log is 4 MB with 1 MB of that as free space (3 MB used/1 MB available). You do a large transaction that needs 40 MB right now (like a bulk insert/update). The transaction will fill up that 1 MB of free space very quickly and need to grow, but the server might not be able to allocate the extra space fast enough for the job.
One solution is to figure out how big your log needs to be for this one job and make the log that big to begin with. Then leave it that size.
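Pre-sizing the log can be done with ALTER DATABASE. A sketch (the logical file name VVV_log and the 2000 MB target are assumptions; check the real logical name with sp_helpfile first):

```sql
-- Pre-grow the VVV transaction log to a fixed size so the job
-- never has to wait for autogrow during the bulk operation
ALTER DATABASE VVV
MODIFY FILE (NAME = VVV_log, SIZE = 2000MB);
```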
-SQLBill
May 17, 2004 at 7:54 am
Try this:
- set the db option 'trunc. log on chkpt.' to true
- in the DTS package options, set the 'insert batch size' to some number of rows, so the package commits in smaller chunks instead of one huge transaction
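The batch-size idea works because each committed batch lets the checkpoint truncate the log before the next batch starts. The same technique outside of DTS looks roughly like this (table and file names are hypothetical, for illustration only):

```sql
-- Hypothetical example: load a flat file in 10,000-row batches.
-- Each batch is its own transaction, so with truncate-on-checkpoint
-- enabled the log can be cleared between batches instead of holding
-- the entire load as one giant transaction.
BULK INSERT VVV.dbo.TargetTable
FROM 'C:\import\data.txt'
WITH (
    BATCHSIZE = 10000,
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);
```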