March 30, 2007 at 1:23 pm
Hello,
We're having performance problems that I believe are attributable to several things, but I need assistance as to how/where to start in optimizing the server.
Here's some background on the server setup:
It's SQL Server 2005 on Windows Server 2003 Standard Edition. The server has three drives: C, D, and F. There's only one database, which is 95GB. The database has a data file on the D drive, which is only a little over 100GB, so we created another data file and made it the default on the F drive (which has 116GB, so plenty of room). The log is also on the F drive; at this time there are no other drives, so we had to go this route. The recovery model is set to simple since we don't need point-in-time recovery and the log was affecting performance even more. The server has 2 processors, but the system is set up to see 4. C and D are on a RAID 5 and F is RAID 1. I'm thinking this could be contributing to the performance issues?
Some of this is kind of a mess, but we're limited at this time in the hardware and drive space available.
Any recommendations as to what kind of infrastructure changes to make would be appreciated.
As for the performance, it seems there are several processes/queries that are just eating up the system when they're run. We're going to look into them. Any suggestions as to how to better index/optimize these would be appreciated.
Also, I ran Performance Monitor and found that the Physical Disk: Avg. Disk Queue Length counter is always high when these processes are run, and Memory: Pages/sec skyrockets. The server has 4GB of RAM, so I'm not sure why memory is an issue. I set the minimum memory for SQL to 2GB, but that didn't seem to help. It seems SQL isn't using very much RAM at all, or that it can't "see" the RAM that it needs. Memory: Available MBytes consistently shows over 2GB.
Any suggestions would be awesome! Thanks!
March 30, 2007 at 2:16 pm
Put the query in the Database Engine Tuning Advisor (DTA) and run it. Look at the recommendations and test to see if any of them will work.
-WM
March 30, 2007 at 3:18 pm
If you're doing lots of IO, then the best thing you can do (other than optimize your queries and indexing) is to add memory. It looks like the SQL Server has to go to disk to find the data. You don't mention whether your applications are insert-intensive. If they are, you might want to look at the possibility that you have too many indexes rather than not enough. In particular, look at whether your clustered indexes are defined so that each insert lands in the middle of the table and forces data to move (page splits). If so, look into lowering the fill factor on your indexes so pages have free space to absorb new rows.
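For example, something like this rebuilds an index with room left on each page (the table and index names here are purely illustrative):

```sql
-- Rebuild an index leaving 10% free space per page so that
-- new rows have room to land without immediate page splits.
-- Object names below are hypothetical examples.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 90);
```

Note that more free space per page also means more pages to read, so don't go lower than your insert pattern actually needs.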
Carlos
April 1, 2007 at 6:13 pm
1) RAID 5 is HORRIBLE for the boot (C) drive. I assume your OS page file is there, as well as tempdb. RAID 5 is suboptimal for both of those.
2) Putting a new file and filegroup on the F drive doesn't do any good if you don't move anything there. Rebuild tables' clustered indexes, specifying the new filegroup, to effect this.
3) NTFS does NOT like to have < 20% free space on a disk. Performance starts to degrade significantly below this.
4) Unless you have specific settings (AWE) enabled, 32-bit SQL Server Standard Edition will not use more than 2GB of RAM. You need PAE/AWE to get it to take its limit of 3GB.
5) Of course 2GB will limit performance when you have a 95GB database to chew through. Physical I/O is usually 80+% of a database's performance problems in one way or another.
6) Are your tables properly indexed to address your worst queries? Note that you may not have ANY recourse with that much data and so little RAM and disk I/O capability.
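To expand on point 2, recreating a clustered index onto the new filegroup moves the table's data along with it. A sketch (the table, index, and filegroup names are made up, and DROP_EXISTING requires that the index already exist):

```sql
-- Recreate an existing clustered index on the new filegroup;
-- the table's data pages move along with it.
-- Object and filegroup names are illustrative only.
CREATE CLUSTERED INDEX IX_BigTable_ID
    ON dbo.BigTable (ID)
    WITH (DROP_EXISTING = ON)
    ON [SecondaryFG];
```

Be warned this rewrites the whole table, so schedule it for a maintenance window.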
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
April 2, 2007 at 2:21 am
As mentioned, check AWE/PAE to ensure SQL can use over 2GB memory.
With 4GB of memory, set max server memory to around 3GB; leaving the OS with less than 1GB, especially if you're using AWE, is not a good idea.
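Both settings are made through sp_configure; something like the following (3072 is just an example value, and 'awe enabled' needs a service restart before it takes effect):

```sql
-- Enable AWE and cap SQL Server's memory (value in MB).
-- 'awe enabled' is an advanced option; the service must be
-- restarted before the AWE change takes effect.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
EXEC sp_configure 'max server memory', 3072;
RECONFIGURE;
```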
Make sure you're using locked pages for the buffer pool. Check the beginning of your error log. There will be a line stating whether or not SQL is using locked pages.
I'd recommend you get another drive set if possible. Putting the data file and log file on the same drive is not recommended, especially for active databases.
Using perfmon, check Avg. Disk sec/Read and Avg. Disk sec/Write. They should be below 10 ms; higher than that indicates problems with the IO subsystem.
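If you'd rather check from inside SQL Server, the 2005 file-stats DMV gives the same picture per database file (divide the stall time by the I/O count to get average latency):

```sql
-- Average read/write latency in ms per database file,
-- from cumulative stall and I/O counts since startup.
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;
```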
You can use the Database Engine Tuning Advisor, but take its recommendations with a pinch of salt. It often overdoes things.
Otherwise, run each of the queries in Management Studio with the actual execution plan switched on. Look for table or clustered index scans; they can indicate 'missing' indexes. You can also check the missing-index DMVs (sys.dm_db_missing_index_groups, and others), but again, take those with a pinch of salt.
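Something along these lines lists the suggestions, roughly ordered by estimated impact (again, treat the output as hints, not gospel):

```sql
-- Missing-index suggestions from the DMVs, biggest estimated
-- impact first. Verify each one against the actual workload
-- before creating anything.
SELECT TOP 10
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_groups AS mig
JOIN sys.dm_db_missing_index_group_stats AS migs
     ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
     ON mid.index_handle = mig.index_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```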
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 11, 2007 at 10:51 am
Jubilee,
You've got your tlog sitting on the same drive as your second (default) data file. This isn't a good idea, as I'd guess there will be a lot of disk contention going on. Even in simple recovery, SQL Server makes heavy use of the tlog. You may want to see if there is some way of getting your main DB's tlog onto its own mirrored pair (as a minimum; you may need more disks depending on the tlog's throughput).
Be careful about taking all the memory for SQL Server with the /3GB switch. I tend to leave a minimum of 1.3 - 1.5 GB for the OS. If you search under 'colin leversuch-roberts', he has a good explanation of why.
HTH
iwg