June 10, 2004 at 12:25 pm
Hi,
I have a DB on a web server with a hosting company. The DB had been working fine, and all of a sudden I get the "Transaction Log is full" error. I checked the size of the transaction log and it's 17 MB, which is the maximum size allocated by the hosting company.
I checked the size using the following methods:
sp_helpdb says the size is 17408 KB, while DBCC SQLPERF(LOGSPACE) reports Log Size (MB): 16.992 and Log Space Used (%): 12.40. Why are the two statements showing different figures? i.e. the DBCC output says only around 12% of the log has actually been used.
I have a stored procedure that creates a temp table, writes some data to it and then drops the temp table again. I don't have any COMMIT TRAN or ROLLBACK TRAN statements in this SP. Is the problem occurring because of this SP? Do I need the COMMIT / ROLLBACK TRAN statements?
The hosting company has not given me rights to run the DBCC SHRINKFILE and DBCC SHRINKDATABASE statements, so I am unable to truncate the log. Is there any other way to truncate the log?
The DB is in simple recovery mode...
Thanks a lot for the help...
Regards
bTM
June 10, 2004 at 12:52 pm
COMMIT or anything else will not stop the transaction log from growing. The log keeps track of the transactions that have happened since the last backup of any kind. If you are backing up the transaction log, those log backups can be applied to a full backup to allow restoring to a "point in time".
If your db is in SIMPLE mode, I don't believe your log should be growing. But regardless, you want to shrink it. Run this command:
BACKUP LOG dbname
WITH TRUNCATE_ONLY
-SQLBill
June 10, 2004 at 12:59 pm
Hi Bill,
Thanks for your help. I tried that, but the log size is still the same after running that command.
Regards
bTM
June 10, 2004 at 1:56 pm
Hi Bharat,
It seems that your database log is not really full. Since it is in the SIMPLE recovery model, the log will be truncated frequently and SQL Server won't let it get "too full", i.e. it won't grow beyond 70% of its allocated space. That's what the SIMPLE recovery model does! This is consistent with the DBCC output, which says that the log is only 12% used.
You said in your post that you are doing some operations on a temp table. I am guessing that your tempdb is getting full. See if the drive which holds the .mdf and .ldf files for tempdb is out of free space, and whether tempdb's log is full as well (a quick way to check is sketched below). Try to free up some space on that drive, or try to move tempdb somewhere where you have enough space. Hope this helps you.
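Just a sketch of what the check could look like from Query Analyzer; nothing here is specific to your setup:
USE tempdb
GO
EXEC sp_helpfile                 -- lists tempdb's .mdf/.ldf paths and current sizes
GO
DBCC SQLPERF(LOGSPACE)           -- look at the tempdb row for "Log Space Used (%)"
GO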
Regards.
June 11, 2004 at 7:47 am
Since the DB is on the hosting company's server, I cannot work on tempdb.
I checked the size of tempdb using DBCC and it's almost 59% full.
June 11, 2004 at 8:25 am
You might want to check your transaction log size settings in the database properties. The hosting company only allows you 17 MB of space, but is the database file actually set up with that limit? Is it set to autogrow? If the limit isn't in place and the file is allowed to autogrow, that could be why the log is continuing to grow even though you are in SIMPLE recovery mode.
...at least I think so...maybe.
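For what it's worth, something like this would show each file's size limit and growth setting (just a sketch against the SQL Server 2000 system tables; the database name is a placeholder):
USE your_database_name
GO
SELECT name,
       size * 8 / 1024 AS current_size_mb,               -- sysfiles reports sizes in 8 KB pages
       CASE maxsize WHEN -1 THEN NULL                    -- -1 means unrestricted growth
            ELSE maxsize * 8 / 1024 END AS max_size_mb,
       growth                                            -- growth increment (pages or a percentage, depending on the file settings)
FROM sysfiles
GO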
June 11, 2004 at 12:04 pm
Even if the "Database Recovery Model" is set to "SIMPLE", the transaction log file can still grow until there is no more disk. If a lot of data is being written to the transaction log file within one transaction, SQL Server cannot truncate the log until it sees a COMMIT.
BACKUP LOG <database> WITH TRUNCATE_ONLY will only truncate the data that has been committed or checkpointed to the database. If you need to free up space, you will want to do the SHRINKFILE. Before you do that, you should find out what TRANSACTION grew the file so large; it won't do any good to shrink the file if another long-running transaction causes the file to grow again.
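As a rough sketch (the database and logical file names are placeholders), the sequence would look something like this: check whether an open transaction is pinning the log, then shrink the physical file:
USE your_database_name
GO
DBCC OPENTRAN                                  -- reports the oldest active transaction, if any
GO
EXEC sp_helpfile                               -- shows the logical name of the log file
GO
DBCC SHRINKFILE (your_log_logical_name, 10)    -- shrink the log file to roughly 10 MB
GO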
Also, you cannot have TRANSACTION LOG BACKUPS when the database is in the "SIMPLE" recovery model. If you need to do a point-in-time RESTORE, you won't have any log backups. Most production systems should be in the "FULL" or "BULK-LOGGED" recovery model. If the transaction log file is highly active, then you may need to schedule the log backups more frequently so that the used space within the file stays low. Remember, if the file grew because of a long-running transaction, it doesn't matter which recovery model you use: you will run out of disk if you do not have much disk to work with. To fix this, the long-running transaction should be broken out into multiple smaller transactions, which will allow SQL Server to checkpoint and reuse the log. An example is sketched below.
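For example, a big delete could be broken into smaller transactions along these lines (just a sketch; the table and condition are made up, using SQL 2000-style batching with SET ROWCOUNT):
SET ROWCOUNT 5000                    -- cap each DELETE at 5000 rows
WHILE 1 = 1
BEGIN
    DELETE FROM big_table WHERE archived = 1
    IF @@ROWCOUNT = 0 BREAK          -- nothing left to delete
END
SET ROWCOUNT 0                       -- back to unlimited
-- Each batch commits on its own, so the log space it used can be reused
-- after a checkpoint instead of piling up in one open transaction.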
June 13, 2004 at 3:19 am
This is a reply from Jeremy (a member of the SQL Magazine forum):
SQL Server sometimes has a problem with shrinking the TLog. If you are not backing up the transaction log as a part of your data recovery plan, I recommend having "truncate log on checkpoint" enabled. This will help ease further issues.
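On SQL Server 2000 that option looks something like this (just a sketch; the database name is a placeholder, and setting the recovery model to SIMPLE has the same effect):
EXEC sp_dboption 'your_database_name', 'trunc. log on chkpt.', 'true'
-- or, equivalently:
ALTER DATABASE your_database_name SET RECOVERY SIMPLE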
For unruly database transaction logs, I use the following methodology:
1. First try to shrink the TLog using DBCC SHRINKFILE (tlog_filename). After this has completed, run DBCC LOGINFO on the database of interest. If the last record returned has status = 2 (active) and there are several entries before it with status = 0 (inactive), you will need to run some transactions to push the active entry to the beginning of the log (step 2).
2. Run the following script:
USE your_database_name
GO
CREATE TABLE log_shrinker (char1 char(4000))
GO
DECLARE @i INT
WHILE (1 = 1)
BEGIN
    SELECT @i = 0                           -- reset the counter each pass; otherwise the inner loop only runs once
    WHILE (@i < 100)
    BEGIN
        INSERT log_shrinker VALUES ('a')    -- generate some log activity
        SELECT @i = @i + 1
    END
    TRUNCATE TABLE log_shrinker
    BACKUP LOG database_name WITH TRUNCATE_ONLY
END
This script contains an infinite loop, which will need to be stopped manually. It will push the active entry to the beginning of the log so that the rest of the log can be truncated. I recommend running it for a minimum of 40 minutes for an 8 GB TLog and then stopping it manually.
3. Try running BACKUP LOG database_name WITH TRUNCATE_ONLY
4. Now run DBCC Shrinkfile (tlog_filename) again, if the tlog hasn't already shrunk
5. If the Tlog is still huge, run the script in step (2) again. This should do the trick.
Note: This is a manual process which requires you to stop the infinite loop query. I have used it many times to shrink both live (production) databases as well as development databases which have grown too large.
Sometimes the script in step (2) needs to be run for a longer time. You can periodically check the tlog using DBCC loginfo to ensure that the active entry in the log is moving to the beginning of the log file.
Hope this helps!
Alamir Mohamed
Alamir_mohamed@yahoo.com