June 7, 2004 at 3:34 am
Hello
We have a development system which is backed up every night, but for some reason the transaction log grew from 5 GB to 17 GB over a weekend when no one was accessing it. We have now run out of space on the drive, so getting into SQL has become impossible. Can I delete the .ldf file and then recreate it using the same name? Our last full backup was five days ago, and hardly any transactions have taken place since then.
Also, how do I trace exactly what has made these logs so big?
June 7, 2004 at 5:57 am
Martin,
A detach/attach will create a shiny new transaction log file. I'm assuming here that you have just one data (mdf) and one transaction log (ldf) file for this database.
To be sure, first take a full backup before dealing with the transaction log.
Then, detach the database.
Preferably, if you still have several MB left on the drive, rename the old .ldf; otherwise you may have to bite the bullet and delete it.
Attach the database, specifying just the .mdf file. You'll get a warning message and a new .ldf will be created.
Check access to it, then delete the old .ldf.
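Something like this, roughly (the database name and paths are just placeholders for your own):

    USE master

    -- Safety net first: full backup (to a drive that still has room).
    BACKUP DATABASE DevDB TO DISK = 'D:\Backups\DevDB_full.bak'

    -- Detach the database (make sure nothing is connected to it).
    EXEC sp_detach_db 'DevDB'

    -- Rename or delete the old .ldf in Explorer, then re-attach with
    -- just the .mdf; SQL Server warns you and builds a new log file.
    EXEC sp_attach_single_file_db 'DevDB', 'D:\Data\DevDB.mdf'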
You wouldn't have any rather big reorgs scheduled for the weekend through maintenance plans would you?
Cheers,
- Mark
June 7, 2004 at 5:58 am
If it is a single log file database, one option is sp_detach_db, delete the log file, then sp_attach_single_file_db, which will create a new log file.
Check BOL for details.
June 7, 2004 at 7:40 am
And once you get this issue solved and are back up and running... you need to decide if you need the transaction logs for recovery. If so, you need to back them up more regularly. If not, change the recovery model to SIMPLE.
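For example (MyDB and the path are placeholders):

    -- Either back the log up regularly so it gets truncated...
    BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.bak'

    -- ...or, if you don't need the log for recovery, switch to SIMPLE.
    ALTER DATABASE MyDB SET RECOVERY SIMPLE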
-SQLBill
June 7, 2004 at 8:17 am
All fixed now. I am now taking more regular transaction log backups. However, how can I trace exactly what's going on to create such a huge, fast-growing log file?
June 7, 2004 at 8:40 am
You can use SQL Server's Profiler to run a trace.
You could create a trace that tracks the commands being run on the database, but be aware that this can fill up your hard drive very quickly. I tested it once and was getting 500 MB of data every second.
You could test it first to see how much data you will be getting. Then find a way to narrow it down. Or you could try getting the information on who/what is logged on at the time and start from there.
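For the who/what part, a quick sketch against sysprocesses (MyDB is a placeholder):

    -- Who is connected to the database right now?
    SELECT spid, loginame, hostname, program_name, cmd
    FROM master..sysprocesses
    WHERE dbid = DB_ID('MyDB')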
-SQLBill
June 7, 2004 at 4:22 pm
If using Profiler you may want to start with monitoring "Log File Auto Grow" (and use the filter to restrict monitoring to the database in question).
Once you've narrowed down WHEN it's happening, you can then run Profiler with a wider scope of events during that period to determine WHY it's happening.
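If you'd rather not leave the Profiler GUI running, the same thing can be scripted as a server-side trace, something along these lines (the file path, max size and database name are examples; check the sp_trace_* topics in BOL for the event and column IDs):

    DECLARE @TraceID int, @maxsize bigint
    SET @maxsize = 50  -- trace file cap in MB

    EXEC sp_trace_create @TraceID OUTPUT, 0, N'D:\Traces\LogGrow', @maxsize, NULL

    -- Event 93 = Log File Auto Grow; columns 14/35/11 = StartTime,
    -- DatabaseName, LoginName.
    EXEC sp_trace_setevent @TraceID, 93, 14, 1
    EXEC sp_trace_setevent @TraceID, 93, 35, 1
    EXEC sp_trace_setevent @TraceID, 93, 11, 1

    -- Restrict to the database in question (comparison 6 = LIKE).
    EXEC sp_trace_setfilter @TraceID, 35, 0, 6, N'MyDB'

    -- 1 = start the trace.
    EXEC sp_trace_setstatus @TraceID, 1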
Cheers,
- Mark
June 8, 2004 at 4:11 am
I can almost guarantee that you are running a maintenance job which contains a rebuild of indexes. This always increases the log excessively.
You need to set up a good indexing plan to minimise the log.
For example, set an appropriate fill factor for the indexes.
Perhaps reindex one seventh of the tables each day; that way you rebuild all the indexes once a week. Also switch to the BULK_LOGGED recovery model before you reindex and back to FULL afterwards. This will cut the log growth by a third.
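A rough example of what each night's step might look like (the table name, fill factor and paths are only illustrative):

    ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED

    -- Rebuild all indexes on one night's batch of tables,
    -- with a 90% fill factor.
    DBCC DBREINDEX ('MyDB.dbo.Orders', '', 90)

    ALTER DATABASE MyDB SET RECOVERY FULL

    -- Back the log up straight away; point-in-time restore is limited
    -- across a log backup that contains bulk-logged operations.
    BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.bak'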