June 12, 2006 at 1:42 pm
Forgive my ignorance, but I've already been to BOL and I still don't really understand what to do.
We have an application that has gotten extremely sluggish as of late.
The software vendor has suggested that we truncate the log to improve performance.
This is what I have:
From EM, the database shows as 122 MB in size with 0.00 space available. Both the log and data files are set to grow automatically by 10%.
The Database is in SIMPLE Recovery Mode.
I have plenty of hard drive space on both the drive where the data file lives and the drive where the log file lives.
Any suggestions on truncating the log?
June 12, 2006 at 1:50 pm
support.microsoft.com/?id=307487
June 12, 2006 at 1:53 pm
Thanks! That article discusses shrinking the TEMPDB database. Is that the same as shrinking a user DB that is set to simple recovery mode?
June 12, 2006 at 1:58 pm
I am sorry, that link was not the right one. You can use the following instead; it shows you how to truncate and shrink:
http://support.microsoft.com/default.aspx?scid=kb;EN-US;272318
June 12, 2006 at 2:19 pm
What sort of performance improvement were they expecting to get from a smaller log? Do they know where the performance bottleneck is, or is it just a guess?
June 12, 2006 at 2:33 pm
Just to be brief on truncate: it only frees up space within the log; the file size will stay the same unless you shrink the log.
You can run a backup and then shrink the file/log. Performance might have nothing to do with the size of the log mentioned. There are various ways of looking at performance:
If a query is slow: check/update statistics and indexes. Check for ways to optimize the SQL by using index hints. More to go...
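The backup-and-shrink step mentioned above can be sketched as follows (SQL 2000 syntax; the database and logical log file names are assumptions — check sysfiles for yours):

```sql
-- Truncate the inactive portion of the log, then release the freed
-- space back to the OS. 'MyDB' / 'MyDB_Log' are illustrative names.
BACKUP LOG MyDB WITH TRUNCATE_ONLY

-- Shrink the log file to a 10 MB target (second argument is MB).
DBCC SHRINKFILE (MyDB_Log, 10)
```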
June 13, 2006 at 6:18 am
Are you shrinking very often? That has been known to cause issues.
Mike
June 13, 2006 at 6:24 am
I have never shrunk this DB. It just kind of plugs along. It's always been slow, and now it's getting more sluggish. My boss called the vendor and they said to truncate the log. The DB is set to simple recovery mode (by their design), so I guess they don't know what they're talking about!
We use this application to track helpdesk tickets and do computer inventory, so I guess you could say that it's more transactional than static.
I've been trying to do some maintenance on the DB. This is a shot of one table AFTER I ran a reindex and indexdefrag against it. It doesn't seem right to me; it seems the scan density should be higher after the maintenance:
DBCC SHOWCONTIG scanning 'smRoles' table...
Table: 'smRoles' (1650104919); index ID: 0, database ID: 23
TABLE level scan performed.
- Pages Scanned................................: 2
- Extents Scanned..............................: 2
- Extent Switches..............................: 1
- Avg. Pages per Extent........................: 1.0
- Scan Density [Best Count:Actual Count].......: 50.00% [1:2]
- Extent Scan Fragmentation ...................: 50.00%
- Avg. Bytes Free per Page.....................: 7312.5
- Avg. Page Density (full).....................: 9.66%
I keep thinking maybe I should put the DB in FULL recovery mode and start backing up the transaction log regularly, but I don't know if that will help performance!
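One thing worth noting about the SHOWCONTIG output above: index ID 0 means the table is a heap (no clustered index), and DBCC INDEXDEFRAG / DBREINDEX operate on indexes, so the heap's sparsely filled pages are left alone — which would explain the low page density even after maintenance. A common SQL 2000 workaround is to build (and optionally drop) a clustered index, which rewrites the table's pages compactly. A sketch, assuming a suitable key column (the column name here is purely illustrative):

```sql
-- Rewrites the heap's pages in clustered-index order, compacting them.
CREATE CLUSTERED INDEX CIX_smRoles_tmp ON smRoles (RoleID)  -- RoleID is an assumed column

-- Optional: drop it again if the vendor schema must stay a heap.
DROP INDEX smRoles.CIX_smRoles_tmp
```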
June 13, 2006 at 10:36 am
Sluggish? It sounds like your vendor may be giving you the good ol' runaround. What led your vendor to this conclusion?
What's the problem you are trying to solve (elaborate on sluggish)?
As for truncating the log as your vendor mentioned, here is the syntax to do that (SQL 2000). You can schedule it or run it once from Query Analyzer:
BACKUP LOG [Database Name] WITH NO_LOG
From BOL: After backing up the log using either NO_LOG or TRUNCATE_ONLY, the changes recorded in the log are not recoverable. For recovery purposes, immediately execute BACKUP DATABASE.
(In simple recovery mode the above is not as relevant, since in simple recovery you would restore your most recent full/differential backups but wouldn't restore logs.)
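Per the BOL note above, a full backup should follow the truncation immediately. A minimal sketch, assuming SQL 2000 and an illustrative database name and backup path:

```sql
-- Truncate the log without backing it up (log contents are then unrecoverable)...
BACKUP LOG MyDB WITH NO_LOG

-- ...so take a full database backup right away to re-establish a recovery point.
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB.bak' WITH INIT
```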
Jason
P.S. Two other things: your database shows 0.00 space available — is there a limit on growth set (probably not, or you'd get errors when inserting)? And your sluggishness sounds (without all the details) more related to the fact that each update/insert is going to expand the database, because it is at capacity.
My first try would be to increase the size of the database so that it is not at its limit, with every transaction requiring growth.
Remember, if you increase the size of your data file, to increase the size of your log file to at least 25% of the size of the data file.
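Pre-growing the files as suggested above can be done with ALTER DATABASE. A sketch — the database name, logical file names, and target sizes are all assumptions; check sysfiles for the real logical names:

```sql
-- Grow the data file well past its current 122 MB so routine
-- inserts stop triggering 10% autogrow events.
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, SIZE = 200MB)

-- Keep the log at roughly 25% of the data file, per the guideline above.
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Log, SIZE = 50MB)
```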
-
June 13, 2006 at 11:12 am
The transaction log size has virtually nothing to do with an application being sluggish. You are better off starting with my 'short list' for tuning:
Round I
DBCC UPDATEUSAGE
UPDATE STATISTICS (with FULL scan) for all tables
exec sp_recompile for all tables
exec sp_refreshview for all views
Round II
DBCC DBREINDEX
UPDATE STATISTICS (with FULL scan) for all tables
exec sp_recompile for all tables
exec sp_refreshview for all views
Round III
Profiler
Query Hints
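Rounds I and II above can be scripted per database. A sketch using the undocumented but widely used sp_MSforeachtable helper (SQL 2000); the view name in the last step is illustrative, since sp_refreshview takes one view at a time:

```sql
-- Round I: fix up usage counters, statistics, and cached plans.
DBCC UPDATEUSAGE (0)                                        -- current database
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN'  -- ? = each table
EXEC sp_MSforeachtable 'EXEC sp_recompile ''?'''
EXEC sp_refreshview 'dbo.SomeView'   -- repeat per view; name is hypothetical

-- Round II adds a full index rebuild before repeating the steps above.
EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'')'
```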
Regards,
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."