June 1, 2008 at 10:43 am
Hi!
We have a Microsoft SQL Server 2000 SP3 server running the database for Microsoft
Navision 3.7.
From time to time we encounter problems with the transaction log, especially when
running heavy query procedures from Navision. It's currently set up as
follows:
File properties:
Recovery model: simple
Settings: Autoupdate statistics, Auto create statistics, Autoshrink
Initial size: 100 MB
File growth: by percent (10%)
Restrict file growth (MB): 10000
OPTIONS:
Recovery model: simple
Settings: Autoupdate statistics, Auto create statistics
We get the following errors (once every 2-3 months so far):
The log file for database 'ME_Prod' is full. Back up the transaction log for
the database to free up some log space.
Now I've set up the transaction log with 10 GB of space allocated and the following growth options:
Allow growth in megabytes: 100 MB
Growth limit: 20 GB
I've also deselected Autoshrink.
Is this new setup better than what we had before? What is the best practice for sizing a transaction log file in the simple recovery model but with heavy transactions?
Also, how does the transaction log work? What happens when there's no more room to write?
Thank you!
June 1, 2008 at 11:23 am
Your new setup is better.
Autoshrink is best left off because the constant growing and shrinking wastes resources and fragments the physical file.
If the tran log is in simple mode, set it to a size that will handle the largest transaction and leave it at that. You can leave the autogrow option on (set to a MB value of 100-200 MB) to give you some leeway, but just be aware of filling the whole drive and starving any other files on the drive of space.
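Something along these lines would set that up (a sketch only; ME_Prod_log is an assumed logical file name, so check yours with sp_helpfile first, and note SQL 2000 only lets you change one file property per ALTER DATABASE statement):
-- Verify the logical log file name first: EXEC sp_helpfile
ALTER DATABASE ME_Prod MODIFY FILE (NAME = ME_Prod_log, SIZE = 10240MB)
ALTER DATABASE ME_Prod MODIFY FILE (NAME = ME_Prod_log, FILEGROWTH = 100MB)
ALTER DATABASE ME_Prod MODIFY FILE (NAME = ME_Prod_log, MAXSIZE = 20480MB)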
When a tran log becomes full and cannot grow any more (i.e. take another extent of the specified size), the database will stop processing all transactions until space is cleared.
What size is the database itself? To use 10 GB of log in simple mode you either have a transaction using a lot of log or a long-running transaction; it would be worth trying to track this down. Use DBCC OPENTRAN when you see the log is very full (though there is a good chance it is the optimisation job doing this, so check there first).
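For example, run something like this when the log looks full (a sketch; 'ME_Prod' is the database name from your error message):
-- See how full each database's log is, as a percentage (SQL 2000)
DBCC SQLPERF(LOGSPACE)
-- Then look for the oldest open transaction in the problem database
USE ME_Prod
GO
DBCC OPENTRAN
GO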
---------------------------------------------------------------------
June 6, 2008 at 1:01 am
I still get the same error... How can I solve this? Why does the transaction log get full even though I allowed it to grow to 20 GB, when before we were running without this error even with the limit at 10 GB? How do I set up this transaction log so that I won't get any more "transaction log is full" errors?
My current setup is as follows:
File properties:
Recovery model: simple
Settings: Autoupdate statistics, Auto create statistics
Allocated space: 10 GB
File growth: by size, 1000 MB
Restrict file growth (MB): 20000
Current size: 20 GB
OPTIONS:
Recovery model: simple
Settings: Autoupdate statistics, Auto create statistics
What else can I do to resolve this?
June 6, 2008 at 1:46 am
Use the DBCC command below to see which VLFs in the log file are active,
and how many:
USE DATABASENAME
GO
DBCC LOGINFO
Status 2 means the VLF is active.
You can also check whether an old open transaction is keeping these VLFs active by holding the MIN LSN inside one of them.
You cannot schedule a job to take a transaction log backup and shrink the log to your expected size, because your recovery model is simple.
Try to find out why the log space is not being recycled.
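One way to count the active VLFs is to capture DBCC LOGINFO into a temp table (a sketch for SQL 2000; the column list may need adjusting for your build):
-- Capture DBCC LOGINFO output so active VLFs can be counted
CREATE TABLE #loginfo (
    FileId INT, FileSize BIGINT, StartOffset BIGINT,
    FSeqNo INT, Status INT, Parity INT, CreateLSN NUMERIC(25,0)
)
INSERT INTO #loginfo EXEC('DBCC LOGINFO')
SELECT COUNT(*) AS ActiveVLFs FROM #loginfo WHERE Status = 2  -- 2 = active
DROP TABLE #loginfo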
Regards,
Raj
June 6, 2008 at 2:03 am
According to your first post you were getting these errors with the log limited to 10 GB. It looks like, for now, if you have the disk space, you will have to turn off the restrict-file-size option or increase the max value again.
Look into these Navision queries with a view to making the transactions smaller (batch the query up).
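On SQL 2000 a big DELETE can be chunked with SET ROWCOUNT, so each batch commits and, in simple recovery, the log space can be reused at the next checkpoint (a sketch; the table name and WHERE clause are placeholders):
-- Delete in batches so the log can truncate between commits
SET ROWCOUNT 10000                        -- cap each DELETE at 10,000 rows
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.SomeHistoryTable      -- placeholder table name
    WHERE PostingDate < '20070101'        -- placeholder filter
    IF @@ROWCOUNT = 0 BREAK               -- nothing left to delete
END
SET ROWCOUNT 0                            -- always reset afterwards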
---------------------------------------------------------------------
June 6, 2008 at 5:27 am
The best way of doing this is to take a backup of the log file and shrink the existing one.
-Satya
June 6, 2008 at 5:55 am
As for queries running from Navision, I've noticed one that was actually using DELETE instead of TRUNCATE. It's just one of those queries that were built before I came to the company.
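For reference, the difference in logging terms (a sketch; the table name is a placeholder, and TRUNCATE only works on tables without foreign key references):
-- Fully logged: every deleted row is written to the transaction log
DELETE FROM dbo.SomeLogTable
-- Minimally logged: only the page deallocations hit the log
TRUNCATE TABLE dbo.SomeLogTable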
As for the backup... does it make sense to do one in the case of the simple model?
June 6, 2008 at 6:32 am
Execute
DBCC OPENTRAN
to find out whether any old open transaction has been left running.
After that execute
DBCC INPUTBUFFER(@SPID)
or
fn_get_sql(sql_handle)
to see which query it is.
Find out the status and cmd of that spid from the sysprocesses table.
Also check whether a DB reindex or defrag job is running every night.
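Something like this, for example (a sketch; spid 52 is just a placeholder):
-- List sessions in the problem database with their status, command and open trans
SELECT spid, status, cmd, blocked, open_tran, last_batch
FROM master..sysprocesses
WHERE dbid = DB_ID('ME_Prod')
-- Then pull the last statement sent by a suspect spid, e.g. 52
DBCC INPUTBUFFER(52)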
Hope this helps.
Regards,
Raj
June 6, 2008 at 8:08 am
If your database is set to simple, you cannot back up the log. Are you running any maintenance jobs like rebuilding indexes?
As far as DELETE versus TRUNCATE: is it purging all data from the table? If so, can you modify the query to use TRUNCATE? If not, what volume of data is being deleted, and how large are the affected tables?
And as George stated, if you've reached your maximum file size, all processing stops until you clean up the log or manually increase the size.
Why not simply put the database into full recovery and set up a job to back up your logs on a regular basis? Simple mode disallows any point-in-time recovery options (if that is of any concern).
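A rough sketch of that approach (the backup paths are placeholders):
-- Switch to full recovery
ALTER DATABASE ME_Prod SET RECOVERY FULL
-- A full backup must follow the switch before log backups can start
BACKUP DATABASE ME_Prod TO DISK = 'D:\Backups\ME_Prod_full.bak'
-- Then schedule log backups, e.g. every 15 minutes, to keep the log in check
BACKUP LOG ME_Prod TO DISK = 'D:\Backups\ME_Prod_log.trn'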
-- You can't be late until you show up.