February 22, 2008 at 4:10 am
Hello,
We have an OLTP system mainly used for tracking telecalling data. It is accessed through a web page developed in .Net. Currently users access the server (web application) only from outside the office.
Sometimes the users complain about slow response from the system. It could be due to fluctuations in connectivity or some other network issue.
From the database point of view, we have scheduled procedures to optimise the indices on the tables, truncate the growing log, and take backups at regular intervals.
I got a suggestion from one of my colleagues, who is not a database professional, that it is common practice to restart the database server at least once a week for OLTP systems. I have never heard of any such resolution.
Is it really recommended to physically restart the server where SQL Server 2000 is installed?
Please suggest.
Thanks in advance.
Regards
Rohit
March 3, 2008 at 6:47 pm
Hi Rohit,
Generally, no, you shouldn't need to reboot your database server weekly to regain performance. If you find that you are seeing a performance increase when you restart, then most likely the server requires more memory as SQL has slowly grown to fill it. If you have free RAM before you restart, it may be the case that SQL is not configured to use all the available memory.
Also, you may find that the tempdb is growing and filling the drive it is on (the tempdb is destroyed and recreated when the server is restarted). If tempdb is filling C: then this will really hamper performance as the system in general will suffer as C: space is consumed.
Recommendations would be to make sure tempdb is not stored on C:, that SQL is not running low on RAM and paging to disk, and that your log and data files are kept separate with enough room to grow. Those are the basic requirements for keeping your server running well for longer than a week.
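If you want a quick way to check the memory setting and where the tempdb files live, something like this works on SQL Server 2000 (just a sketch; run it in Query Analyzer):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)'
-- and to see which drive the tempdb files are on:
USE tempdb
EXEC sp_helpfile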
Hope that helps!
--------
I love deadlines. I like the whooshing sound they make as they fly by. -Douglas Adams
March 3, 2008 at 7:16 pm
Fluctuations in connectivity or other network issue? Have you got proof that's what it is? My bet is on some code that runs in the background somewhere...
--Jeff Moden
Change is inevitable... Change for the better is not.
March 3, 2008 at 7:21 pm
And, because of the way it sounds like you're doing things, it could also be when Auto-grow runs... especially on your TempDB. Preset TempDB big enough so it NEVER has to grow... the whole world waits when it does.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 4, 2008 at 1:01 am
You say you truncate the log. What exactly do you mean by that?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 4, 2008 at 1:23 am
Thanks everyone for the valuable information.
I am running the following statements to truncate the log and shrink the database:
DUMP TRAN WITH NO_LOG
DBCC SHRINKDATABASE (' ')
March 4, 2008 at 2:55 am
DUMP TRAN WITH NO_LOG
DBCC SHRINKDATABASE (' ')
These are 2 statements that you should avoid if you want good performance and recoverability.
If you are running in Full or Bulk Logged recovery mode, you should never do a DUMP TRAN WITH NO_LOG, as this destroys your recovery chain.
You should look at your recovery SLA to decide whether you need Full or Bulk Logged mode. If your recovery SLA says you must recover to the time of failure, you need Full recovery. If you are OK with recovering to the last full backup, with any data after that totally lost, then you can use Simple mode.
If you need Full recovery, then you must tackle your log growth problem by taking transaction log backups. One way to decide how often to take a log backup is to decide how big you want your log file to be, then take backups frequently enough to stop it growing above that size. Alternatively, just run them every hour.
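For example, an hourly log backup job could run something like this (the database name and backup path are just placeholders for your own):
BACKUP LOG MyOLTPDB
TO DISK = 'E:\SQLBackups\MyOLTPDB_log.bak'
WITH NOINIT
-- NOINIT appends each hourly backup to the same file;
-- a restore is then the last full backup plus each log backup in sequence.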
Any database shrink activity will disorganise your database internally, which will harm your performance. It will also result in Windows disk fragmentation, which also harms performance. Just about the only time a shrink is justified is when you remove large amounts of data from your DB, and do not expect it to grow again for at least 3 months.
I suggest you look in BOL about these issues, and also Google them, as there is a lot of good advice to be found.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
March 4, 2008 at 6:21 am
Ed is correct... and I'll go one further... what are the growth settings for your databases? I'm interested in all of the databases, but I'm particularly interested in the growth settings for the MDF and LDF of TempDB.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 4, 2008 at 6:33 am
Do you get a similar performance boost when you simply restart the SQL Server service rather than restarting the server?
Definitely treat your logs with respect 🙂
You could also try running profiler to see what's happening when your server has the fluctuations in performance. Perhaps you have some extra queries hitting the machine?
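Alongside Profiler, a quick look at what's running right now can help (SQL Server 2000):
EXEC sp_who2 'active'
-- shows active sessions, blocking (BlkBy), and CPU / disk I/O per connection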
March 4, 2008 at 6:38 am
10% growth is allocated for the mdf and ldf files of the tempdb database. Maxsize is unlimited.
When performance is said to be down, the number of users is definitely high. But it is not always the case when an equal number of users are connected.
Also, sometimes internet connectivity problems / fluctuations cause the system on the client side to be slow.
March 4, 2008 at 6:42 am
For the OLTP transaction database, the growth percentage is 25%.
March 4, 2008 at 6:55 am
Rohit Chitre (3/4/2008)
10% growth is allocated for the mdf and ldf files of the tempdb database. Maxsize is unlimited. When performance is said to be down, the number of users is definitely high. But it is not always the case when an equal number of users are connected.
Also, sometimes internet connectivity problems / fluctuations cause the system on the client side to be slow.
Ok, I can tell you that's a major problem right there. Chances are that you've also left the default initial size setting of 1 MB as well. So, the server gets restarted somewhere along the way... tempdb is rebuilt starting out at 1 MB. If you do the math, it will create 73 fragments on tempdb just to grow to 1 GB, which is not all that big for a TempDB. Guess what happens when TempDB grows? Everything in the system waits. And the likely time for TempDB to grow is when you have a lot of users online.
So, step 1 is to figure out a decent size for TempDB and set it to about 1 gig larger than that. Just as a guideline... our system is a terabyte database... we preset TempDB to be 12 gig on bootup. It's disk space very well spent.
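Presizing is a one-time ALTER DATABASE (tempdev and templog are the default logical file names; the sizes here are just examples, not a recommendation for your system):
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB)
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 512MB)
-- tempdb is recreated at these sizes on every restart, so it never starts at 1 MB again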
Same thing goes for your OLTP database... 25% growth of what? You keep shrinking the database and it keeps trying to grow. It's like trying to put a wet Alka-Seltzer back in its foil wrapper. Every time you shrink it and it grows, you've just added another physical fragment to the hard disk. That's just not good for performance.
Manually size your databases to a reasonable and expected size. Set a growth of 250 or 500 MB (NEVER use % growth). Defrag your drives and your database. Performance will improve, especially when you have lots of users.
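Changing from % growth to a fixed growth is the same MODIFY FILE syntax (the database and logical file names below are placeholders; check yours with sp_helpfile):
ALTER DATABASE MyOLTPDB MODIFY FILE (NAME = MyOLTPDB_Data, FILEGROWTH = 250MB)
ALTER DATABASE MyOLTPDB MODIFY FILE (NAME = MyOLTPDB_Log, FILEGROWTH = 250MB)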
Then, you can start working on the code that's also causing performance problems during peak periods. 😉 Start by looking for anything that uses a cursor, while loop, or correlated subquery. Then start looking for other forms of RBAR, especially UDFs in the SELECT list.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 4, 2008 at 11:31 pm
Rohit Chitre (3/4/2008)
Thanks everyone for the valuable information. I am running the following statements to truncate the log and shrink the database:
DUMP TRAN WITH NO_LOG
DBCC SHRINKDATABASE (' ')
Ouch.
The first of those ensures that you will not be able to restore to the time of failure if your database crashes. Any log backups taken after that point are useless until a database backup (full or diff) is taken.
Do you have log backups? If not, and there's no need to restore the database to anything other than the time of a full/diff backup, then set the recovery mode to simple.
The second is playing shuffle with your indexes. After a shrink, all your indexes will have fragmentation of around 99%. That could very likely be the cause of some of your problems.
Size your databases (including tempDB) properly, and don't shrink them.
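If you want to see the fragmentation a shrink leaves behind, and fix it, something like this works on SQL 2000 (the table name is just an example):
DBCC SHOWCONTIG ('dbo.MyBigTable') WITH FAST
-- look at logical scan fragmentation; if it's high, rebuild:
DBCC DBREINDEX ('dbo.MyBigTable')
-- or, to stay online, DBCC INDEXDEFRAG (MyOLTPDB, 'dbo.MyBigTable', 1)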
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 4, 2008 at 11:52 pm
Right now I have not scheduled the log backup. A full backup is scheduled every couple of hours, with the recovery model set to Full.
If I start taking log backups every hour, then will it be advisable to truncate & shrink only the log file?
Currently I am also running a stored procedure after the shrinking is over, to defrag the indexes of all the tables.
Also, could that be the reason we see some performance improvement even when just the SQL Server service is restarted?
March 5, 2008 at 12:22 am
Rohit Chitre (3/4/2008)
If I start taking log back up every hour. then will it be advisable to truncate & shrink only the log file.
No. It is never advisable to truncate and shrink the log.
If you truncate the log, you're invalidating the log backup chain. Shrinking the log just forces it to grow again, causing occasional slowdowns and possibly causing external (file-system level) fragmentation.
With regular log backups, the log will get to a stable size where it doesn't need to grow any more.
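If you want to watch the log settle down once the hourly backups are running, a quick check is:
DBCC SQLPERF (LOGSPACE)
-- shows log file size and percentage used for every database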
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability