A properly configured SQL Server can mean the difference
between a sluggish server and one that runs well, but there are a few pitfalls
to watch for along the way. Keep in mind that if your application's data access
is slow, I've found that about 75% of the time optimizing the queries helps more than optimizing the configuration options.
The first thing we must do is turn on the show advanced options setting. If this option is already set to 1 (true) on your server, then you are fine for this step.
If not, run the following query:
sp_configure 'show advanced options', 1
reconfigure
Make sure that you set this back to its original state after we are done. This is a dynamic option, meaning that it takes effect immediately.
When you change a dynamic option, all you need to do is exit the configuration window and come back into it to see the advanced options.
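When you are finished with the changes in this article, you can restore the setting with the same command (this sketch assumes the option was 0 before you began):

```sql
-- Turn advanced options back off once the maintenance is done
sp_configure 'show advanced options', 0
reconfigure
```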
Allow Updates
The Allow Updates option allows a DBA to modify the system tables in the master database, either through TSQL or directly.
This option is turned off for your benefit, and I keep it turned off until I actually need to make modifications to master.
I've received countless emails from people who pulled up an ISQL window and accidentally performed an action on a system table because their default database was master.
This is a dynamic option.
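Following the advice above, a sketch of enabling the option only for the duration of the maintenance work and then turning it straight back off (note that allow updates needs reconfigure with override):

```sql
-- Enable direct updates to the system tables
sp_configure 'allow updates', 1
reconfigure with override

-- ... perform your changes to master here ...

-- Turn it back off immediately afterward
sp_configure 'allow updates', 0
reconfigure
```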
Default Sort Order ID
I honestly can't figure out why Microsoft put this configuration option in SQL Server. NEVER CHANGE THIS OPTION!
The sort order for your server controls how the data is physically stored inside your databases.
The only way to change this option is to rebuild your master database. The default is 52 (case insensitive). The company I work for has all of its servers set to 54 (accent- and case-insensitive).
The problem with this is that if a vendor sends us a database in sort order 52 format, it can only be restored on a server that is set to 52. I keep a staging server set to sort order 52, then
transfer the objects through DTS or Object Transfer to my server running the 54 sort order. Needless to say, if you can get away with
using the default setting, leave this option alone during installation of SQL Server.
This option requires a rebuild of master.
Fill Factor
This option configures the default fill factor. I leave this setting alone, because you can control
the fill factor at the individual index level instead.
Fill factor, in the simplest of terms, is how much data SQL Server puts on each data page when it creates an index.
The value is a percentage, and it also applies when you rebuild an index. An index set to 100% will experience some problems, due to the overhead of having to create
a new data page whenever you need to insert a new value. A fill factor set to a low number, say 50%, causes problems with memory instead: even if a data page is only
50% full, it is still considered a full page.
This option requires a SQL Server restart.
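Since the fill factor can be set per index, a minimal sketch of the per-index approach I prefer (the table and index names here are hypothetical):

```sql
-- Fill each leaf page to roughly 80%, leaving room for new rows
-- before a new page has to be created
CREATE INDEX ix_orders_custid
ON orders (customer_id)
WITH FILLFACTOR = 80
```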
Memory
Memory is the most critical of your settings, and also the most difficult to set. This has been greatly simplified in SQL Server 7.0 with the introduction of self-scaling memory configuration.
SQL Server 6.5, much like other DBMSs, allocates the memory it needs on startup.
First, figure out how much memory your NT Server and any other applications on the machine are using.
You can do this by opening Task Manager during a peak time: go to the Processes tab and click on Mem Usage to sort. Add up everything other than SQL Server and allow some buffer for unknowns.
This is also an option you will want to discuss with your NT administrator; he may know of other things that will be installed on the server.
The memory is allocated in 2K blocks, so a setting of 8192 actually means 16 megs (8192 / 1024 * 2).
This option requires a SQL Server restart.
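Putting the arithmetic above into practice, a sketch of dedicating 16 megs to SQL Server 6.5 (the target amount is an example only; use the figure you worked out from Task Manager):

```sql
-- Value is in 2K pages: 16 MB = 16 * 1024 / 2 = 8192 pages
sp_configure 'memory', 8192
reconfigure
-- Takes effect after SQL Server is restarted
```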
Priority Boost
This option will determine whether SQL Server will have priority over other applications on the server.
I have never had any luck setting priority boost to true, because NT and SQL Server already share many threads.
In my experience, the priority boost has led to many blue screens and server lockups.
This option requires a SQL Server restart.
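If you want to confirm the option is still at its safe default, running sp_configure with just the option name displays the current and running values:

```sql
-- run_value should be 0 (off)
sp_configure 'priority boost'
```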
Procedure Cache
This option is a percentage, and it is the main place where you designate how you would like memory to be spent.
Procedure cache is the amount of memory assigned to caching stored procedures. Running DBCC PROCCACHE
will tell you how much of it you're currently using, and running DBCC MEMUSAGE will report which stored procedures are using the memory.
This is really a trial-and-error setting that depends heavily on how much your application relies on stored procedures.
The key is that whatever percentage you assign here, the remainder is used for data caching, which is also very important. I've found that a procedure cache
setting of 15-20% is usually sufficient.
This option requires a SQL Server restart.
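A sketch of the trial-and-error loop described above, using the 20% figure as a starting point (adjust after observing your own workload):

```sql
-- Give 20% of available memory to the procedure cache,
-- leaving the remaining 80% for the data cache
sp_configure 'procedure cache', 20
reconfigure

-- After the restart, check how much of the cache is actually in use
DBCC PROCCACHE
```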
Tempdb in RAM
This option designates how much of your Tempdb database is held in RAM; it has been removed in SQL 7.0.
I recommend that you never use this option.
This option requires a SQL Server restart.
User Connections
The user connections option is also critical. Running sp_who on your server during peak times will tell you how many user connections you are currently holding.
Setting this option too low will result in application errors and many upset users at your office door. This option is self-scaling in 7.0.
This option requires a SQL Server restart.
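The measurement-then-set approach above might look like this (255 is only an example ceiling; pick a number comfortably above your observed peak):

```sql
-- During peak load, count the active connections
sp_who

-- Then set the limit with some headroom
sp_configure 'user connections', 255
reconfigure
```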