Introduction
You cannot
do a proper tuning job on SQL Server in two minutes, but you can get a good
idea of how well a server is set up. This article will show you how to quickly
evaluate the efficiency of memory usage on your server.
The script
I use works both on SQL 6.5 and 7, though the results you get and the actions
you can take differ for each version. Some of the commands are in SQL 7 for
backward compatibility only, and Microsoft recommends you use Performance Monitor
instead, but I still like to use these commands to get a quick idea of what is
happening.
Here is the
script:
dbcc traceon(3604)
go
dbcc sqlperf(lrustats)
go
dbcc bufcount(1)
go
-- dbcc memusage -- uncomment this line on SQL 6.5; leave it commented out on SQL Server 7
go
dbcc proccache
go
dbcc traceoff(3604)
dbcc
traceon(3604) simply sends the output of some of the subsequent commands to your client session – without this you will not see all the results you need. The dbcc traceoff command at the end turns this feature off again.
dbcc
sqlperf(lrustats) outputs
details of the cache performance. The most interesting figures are the “cache
hit ratio” – which should be as near to 100% as possible, and “cache flushes” –
which tells you the number of times that data in cache has been paged out to
disk to make room for other data. In an ideal world this would be zero, but if
your database sizes add up to more memory than you have physical RAM allocated
to SQL Server then this will not usually be the case.
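If you are on SQL 7 and would rather look at the Performance Monitor counters from a query window, here is a rough alternative – a sketch that assumes the sysperfinfo table in master, which exposes the same counters. The hit ratio is reported as a pair of counters, and the percentage is the ratio value divided by its base:
-- SQL 7: the buffer cache hit ratio as exposed to Performance Monitor
-- the percentage is (Buffer cache hit ratio / Buffer cache hit ratio base) * 100
select counter_name, cntr_value
from master..sysperfinfo
where object_name like '%Buffer Manager%'
and counter_name like 'Buffer cache hit ratio%'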
If these
figures do not look so good, the first place to look is the “sp_configure memory” setting to see if SQL Server has enough memory allocated to it. This is particularly important with 6.5, which has a very low default setting, but less likely to be the cause of problems with SQL 7, which manages its memory automatically. See the “Causes of cache confusion” section at the bottom of this
article for other possible causes.
How much
memory you allocate to SQL Server 6.5 depends on what else your server is doing
as well as the total available memory. This Microsoft
article contains guidelines and example memory allocation settings.
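If you want to check or change the allocation on 6.5, here is a minimal sketch – the value is in 2 KB units, so the 51200 below is an example meaning roughly 100 MB, not a recommendation, and the new value only takes effect after a restart:
-- show the current configured and run values for the memory setting
exec sp_configure 'memory'
go
-- example only: allocate roughly 100 MB (the value is in 2 KB units)
-- leave enough for NT and anything else running on the server
exec sp_configure 'memory', 51200
go
reconfigure
go
-- restart SQL Server for the new memory setting to take effect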
dbcc
bufcount(1) SQL 7
users can skip this one as it is configured automatically. SQL 6.5 users should look for a line like “The Average Chain Size is: 2.922601” in the output. This indicates the efficiency of the cache indexing structure – values between 2 and 4 are OK, with 3 being the optimum in SQL 6.5. The adjuster for this is “sp_configure hash buckets”, and because this is poorly documented in the SQL 6.5 Books Online, many sites still have the default setting. Read more about this setting at the Microsoft support site.
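If the average chain size on 6.5 is well outside that range, here is a rough sketch of adjusting it – the 16001 is an example value only, you may need to enable advanced options first on your build, and the change only takes effect after a restart:
-- show the current hash buckets setting (SQL 6.5 only)
exec sp_configure 'hash buckets'
go
-- example only: raise the number of hash buckets to bring the
-- average chain size reported by dbcc bufcount back towards 3
exec sp_configure 'hash buckets', 16001
go
reconfigure
go
-- restart SQL Server for the new value to take effect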
dbcc memusage details the largest tables
currently in your cache. Ideally these should be pretty stable, so make a note
of what objects are in cache, and the space they take up, over a few readings.
SQL 6.5 will also detail the largest Stored Procedures in your Procedure Cache
- more on this below.
SQL 7 users be warned – this article says not to run the dbcc memusage
command on SQL 7. I have never had
problems with it myself, but I would advise you not to use it on production
servers.
dbcc
proccache gives
more succinct details about procedure cache usage. Again, SQL 7 handles this pretty well on its own, but SQL 6.5 usually needs some help. This is because SQL 6.5 by default allocates 30% of its total cache memory to handling (mostly) stored procedures, while the rest is given over to data cache. With large memory configurations this can result in a lot of wasted space in procedure cache that could be used more effectively as data cache. The adjuster for this is “sp_configure procedure cache”.
Take
care when reducing this setting – setting procedure cache too low can kill your server. Reduce it
slowly and carefully, and never ever set it to zero. Here is a Microsoft
article on the subject.
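As a minimal sketch of checking and then gently lowering the setting on 6.5 – the 25 below is an example percentage only, and the change takes effect after a restart:
-- show the current procedure cache setting (a percentage of total cache)
exec sp_configure 'procedure cache'
go
-- example only: trim the procedure cache from the default 30% to 25%
-- reduce in small steps, watch dbcc proccache, and never set this to zero
exec sp_configure 'procedure cache', 25
go
reconfigure
go
-- restart SQL Server for the new value to take effect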
Causes of cache confusion
The most
obvious reasons for inefficient cache usage are lack of available RAM and poor
memory configuration, but before you rush out and buy some new DIMMs, think
about some other possible causes.
- Missing indexes can cause unnecessary table scans, which clear other data out of cache (see the sketch after this list). SQL 7 users can consider using SQL Profiler and the Index Wizard to spot table scans and improve index efficiency. Unfortunately SQL 6.5 users have to do this the hard way.
- Poor database design and poor
query design can lead to table scanning, excessive disk IO and inefficient
memory usage.
- Mixing databases with OLTP
(short transaction) and OLAP (long report) characteristics on the same
server can cause inefficient cache use and other problems.
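As a trivial sketch of the first point (the table and column names are hypothetical, purely for illustration): a query that filters on an unindexed column forces a scan of the whole table through the data cache, while a supporting index lets SQL Server read only the pages it needs.
-- hypothetical example: Orders has no index on CustomerID, so this
-- query scans the whole table and pushes other pages out of cache
select OrderID, OrderDate
from Orders
where CustomerID = 1234

-- a supporting index turns the scan into a cheap index lookup
create index IX_Orders_CustomerID on Orders (CustomerID)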
Here is a
good article
from Microsoft on the importance of good database and query design.
Finally,
remember that caching statistics are meaningless if your server has just
started. Cache efficiency increases over time, so let your server “settle down”
into its normal usage pattern before you try to measure efficiency.
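If you are not sure how long the server has been up, here is a quick check – a sketch that relies on tempdb being rebuilt at startup, so its creation date is effectively the last time SQL Server started:
-- tempdb is recreated every time SQL Server starts, so its
-- creation date tells you when the server last came up
select crdate as server_started
from master..sysdatabases
where name = 'tempdb'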
Other memory considerations
Cache is
one of the most important uses of RAM, but it is by no means the only important
one, and SQL Server allocates memory to cache only after other requirements are
met.
It follows
that you can increase or decrease the amount of RAM available for cache by
adjusting the memory allocation for other aspects of SQL Server, particularly
with version 6.5, which allocates memory in a less dynamic way than later
versions.
Some example settings you can change are Sort Pages (6.5 only), Index Create Memory (7 only), Locks, Open Objects, and Tempdb In RAM (6.5 only, and usually not recommended). There are others too, which you can read more about in Books Online or at http://support.microsoft.com. Be warned though: an incorrect setting for any one of these can kill your server, so read up first, and adjust things slowly and carefully.
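Before touching any of these, capture the current configuration so you can put things back if necessary. Here is a minimal sketch – note that the exact name of the advanced options switch varies slightly between versions, so check Books Online for yours:
-- list every configuration option with its configured and run values
-- (turn on the advanced options first so nothing is hidden)
exec sp_configure 'show advanced option', 1
go
reconfigure
go
exec sp_configure
go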
About the author
Neil Boyle is an independent SQL Server consultant working out of London,
England. Neil's free SQL Server guide is available on-line at http://www.impetus-sql.co.uk