December 24, 2009 at 2:42 am
Hi, this is Raj.
In our organization, on one of our MS SQL 2000 servers, the log space used increases dramatically to 70%, which in turn affects T-SQL performance.
We have a workaround in place for this issue: we wrote a procedure to back up the log every 5 minutes. But sometimes the log reaches 70% within those five minutes and affects client performance.
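For reference, the scheduled backup step is essentially along these lines (a minimal sketch; the database name CallCenterDB and the backup path are placeholders, not our real names):

-- Back up the transaction log so inactive log space can be reused.
-- Runs every 5 minutes as a SQL Server Agent job step.
BACKUP LOG CallCenterDB
TO DISK = 'D:\Backups\CallCenterDB_log.trn'
WITH NOINIT  -- append to the existing backup set instead of overwriting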
Please suggest how to solve this.
Regards,
Rajkumar
December 24, 2009 at 3:25 am
What is the size of the Database?
What is the size of Data and Log files?
The size of the t-log depends on the nature of the application and what activities are running on the database: open transactions, maintenance, etc.
How can you tell it is affecting T-SQL performance?
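To answer the size questions, you can check directly from Query Analyzer (a quick sketch; run the first command while connected to the database in question):

-- Current data and log file sizes for the current database
EXEC sp_helpfile

-- Log size and percentage used for every database on the server
DBCC SQLPERF(LOGSPACE)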
---------------------------------------------------
"Thare are only 10 types of people in the world:
Those who understand binary, and those who don't."
December 24, 2009 at 4:21 am
Database size is 133 GB
The data file size is 140 GB and the log file size is 4 MB (2 log files).
Our organization is a BPO, so you can imagine the volume of transactions we have in the client application. We have a VB6 client application that updates the details of each and every transaction in a table.
The update is based on the user login time and user ID. For each transaction we update a minimum of 10 columns (mostly integer and bit columns).
When we trigger the update procedure through the VB6 app, it sometimes takes 5 seconds to complete.
We have been using this same procedure for years without issues.
The issues started after we moved the databases to a different server
[From: DL 385G1 - To: ML530 - 4 GB RAM] and encrypted the server volumes for a client audit.
Please let me know if you need any other info to help me out.
Many Thanks
Raj
December 24, 2009 at 4:35 am
A log file size of 4 MB is very small compared to a total DB size of 133 GB. Even if the log file size is actually 4 GB (a typo), that would be fine.
---------------------------------------------------
"Thare are only 10 types of people in the world:
Those who understand binary, and those who don't."
December 24, 2009 at 4:58 am
As mascot said, a log file size of 4 GB is acceptable for a 133 GB database. I believe the issue you are facing is that updates/inserts are taking longer to execute than before. This is expected as your DB size grows, the number of users grows, and the number of inserts/updates increases. I don't think there is any point in taking log backups as frequently as every 5 minutes; that is only going to increase the load. If this is the case, why don't you try archiving the old data?
Any particular reason why you think the growth in log usage is causing the T-SQL performance issue?
December 24, 2009 at 5:39 am
I do agree 4 MB is too small for a log file, but it has been working fine without issues for years. Our DBAs changed the size of one of the database log files to 200 MB, but during production hours it was automatically reduced to 4 MB again, and the DBAs were unable to identify the reason.
The reason I mentioned T-SQL performance is that once we back up the log, we see utilization (CPU and log space) go down and performance improve.
To make it clear, our app's functionality is described below...
The VB6 client app is integrated with a dialer to route calls to the user. The user submits the outcome of each call as a transaction, so we capture 15 fields such as total calls taken, total sales, total talk time, etc., for the respective users.
We run updates when a call is routed to the user and when the user disposes of the call.
Whenever log space usage increases, the dispose event takes longer, and by that time the next call arrives. Since the dispose event is delayed, our VB6 app won't display the customer info for the next call, leading the customer to hang up.
When we monitored these updates through SQL Profiler, we saw that update queries sometimes took 5 seconds (duration). We have also seen I/O reads exceed 30,000 for a normal SELECT COUNT(*) on properly indexed tables.
Please do the needful.
December 24, 2009 at 7:19 am
Is your transaction log running on its own disk?
I am not sure, however, that your performance issues are related to the t-log.
You should analyse the execution plans of the problem queries; this will give you a clue as to where the performance issues may be.
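For example, in Query Analyzer you can capture both the plan and the I/O cost (a sketch; YourTable is a hypothetical table name):

-- Option 1: show the estimated plan as text without executing the query
SET SHOWPLAN_TEXT ON
GO
SELECT COUNT(*) FROM YourTable
GO
SET SHOWPLAN_TEXT OFF
GO

-- Option 2: execute the query and report logical/physical reads per table
SET STATISTICS IO ON
SELECT COUNT(*) FROM YourTable
SET STATISTICS IO OFF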
December 24, 2009 at 7:49 am
Thanks for the reply, Steve.
Our transaction logs run on the same disk where our data files reside, as we don't have a separate partition for them.
We have analysed the problem queries through Profiler; a normal SELECT @@IDENTITY itself consumes more I/O time, but not all the time, only during peak production hours.
December 24, 2009 at 8:19 am
Okay, well, I would look at putting the log on a separate disk if possible.
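On SQL 2000 the usual way to relocate a log file is detach, move, reattach (a rough sketch; the database name and file paths are placeholders, and the database is unavailable to users while you do this):

-- 1. Detach the database
EXEC sp_detach_db 'CallCenterDB'

-- 2. Move the .ldf file to the new disk at the OS level, then reattach
EXEC sp_attach_db @dbname = 'CallCenterDB',
    @filename1 = 'D:\Data\CallCenterDB.mdf',
    @filename2 = 'E:\Logs\CallCenterDB_log.ldf'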
December 24, 2009 at 10:43 am
rrajkumar17 (12/24/2009)
Our DBAs changed the size of one of the database log files to 200 MB, but during production hours it was automatically reduced to 4 MB again, and the DBAs were unable to identify the reason. [...]
Did I misunderstand, or did you state here that you increased the log file to 200 MB and it *automatically* reduced in size back to 4 MB? If that is the case, I can see why you are having I/O issues, as the file is constantly being grown and shrunk (very I/O-intensive operations, not to mention the fragmentation and multiple-VLF issues).
If that is the case, make sure you disable auto shrink and auto close on the databases. You should *never* schedule a shrink operation, nor should you ever enable auto shrink on a production database.
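On SQL 2000 both options can be turned off with ALTER DATABASE (a sketch; substitute your own database name):

-- Prevent the engine from shrinking or closing the database on its own
ALTER DATABASE CallCenterDB SET AUTO_SHRINK OFF
ALTER DATABASE CallCenterDB SET AUTO_CLOSE OFF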
Either way, a 4 MB log file is just way too small to handle any significant volume of transactions. How large it needs to be depends entirely on how many transactions you perform and how often you back up the transaction log.
How often you back up is determined by the business requirements for disaster recovery, not by how large you want the file to be. If you only have a requirement to recover to within the last hour, then you only need to run log backups every hour. You can run them more frequently, but you cannot run them less frequently.
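Once the shrinking is stopped, presize the log in one step so it never has to grow during production (a sketch; the logical file name and the 4096 MB figure are illustrative only, size it to your own transaction volume and backup interval):

-- Grow the existing log file to a fixed working size
ALTER DATABASE CallCenterDB
MODIFY FILE (NAME = CallCenterDB_log, SIZE = 4096MB)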
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
December 28, 2009 at 2:17 am
Thanks for your reply, Jeff.
I have checked the database properties; both auto shrink and auto close are unchecked.
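I also verified it from T-SQL (a quick check; the database name is a placeholder, and a value of 1 would mean the option is on):

SELECT DATABASEPROPERTYEX('CallCenterDB', 'IsAutoShrink') AS IsAutoShrink,
       DATABASEPROPERTYEX('CallCenterDB', 'IsAutoClose')  AS IsAutoClose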
Daily differential backups and a bi-monthly full backup are the backup plan we follow now.
We have encrypted the server volumes using the TrueCrypt tool [as part of a client audit]. Do you think volume encryption could be the cause of this? These issues started after the encryption.
December 28, 2009 at 7:02 am
As far as I know, SQL Server does not support databases on volumes encrypted with TrueCrypt. I would bet that is the cause of your problems.
I still don't understand how your log file is being automatically shrunk. If this is not a setting on the database, then someone has automated the process and is running DBCC SHRINKFILE or DBCC SHRINKDATABASE. Neither of those should ever be automated, especially on data files, where shrinking causes index fragmentation.
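One way to hunt for an automated shrink (a sketch, assuming it was set up as a SQL Server Agent job; it could just as easily live in an external script):

-- Search all Agent job steps for shrink commands
SELECT j.name AS job_name, s.step_name, s.command
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps s ON s.job_id = j.job_id
WHERE s.command LIKE '%SHRINK%'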
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
December 28, 2009 at 7:09 am
Thanks Jeff!!
I will check that and come back to you with updates.
December 30, 2009 at 5:58 am
1) The t-log should NOT have 2 files in it.
2) The t-log should NEVER grow under normal operations. Make it as big as it needs to be and leave it at that size.
3) I can assure you that it is not the fullness of the t-log, or the addition of entries to it, that is making your system slow (unless the t-log file is on a horribly slow I/O subsystem, which you can check with a file I/O stall analysis; see the sketch after this list). It is whatever actions are CAUSING the t-log entries that make things slow: all those updates you mention, for example. Tune your app, starting with your slowest queries or the queries that take the most I/O, and see if you can refactor them or improve them with indexing. Check data file I/O stalls as well. Do a wait-stats analysis to see where the system bottleneck is.
4) For point 3 above, I highly recommend getting a performance tuning professional on board to review your system and apps. You can spend days or weeks going back and forth on a forum, or let a pro work for a few hours or a day or two. You would get the benefit of learning from him/her as well.
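A starting point for the stall and wait-stats checks in point 3 (a sketch for SQL 2000; the database name is a placeholder, and file ID 2 is typically the first log file):

-- Cumulative I/O stalls (ms) per file; compare against reads/writes
DECLARE @dbid int
SET @dbid = DB_ID('CallCenterDB')
SELECT FileId, NumberReads, NumberWrites, IoStallMS
FROM ::fn_virtualfilestats(@dbid, 2)

-- Aggregate wait statistics since the counters were last cleared
DBCC SQLPERF(WAITSTATS)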
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
January 4, 2010 at 6:44 am
Thanks for your suggestions, Guru!!
I have already put forward to senior management my recommendation to bring in a SQL professional, and I am waiting for a reply.