September 7, 2003 at 1:27 pm
I have a brand new install of SQL 2000 Enterprise with SP3 on Windows 2000 Advanced Server SP4, on a Dell PowerEdge 2650 with dual Xeon 2.6GHz CPUs and 2GB RAM. This is for a single application database with 12 end users. The problem is that we are getting extreme latency pulling up data from tables that are not that large. We have checked the network connections, the network card, and the workstations, and narrowed it down to a server issue. SQL will keep eating RAM up to about 1.7GB. I understand that SQL will use as much RAM as you allow it, and I would rather not throttle the usage if it isn't necessary. I was told that there was a memory leak problem in SP3, but the only reference I could find was related to ODBC drivers and tables with many columns of data, which doesn't fit my situation. Does anyone have any insight on this?
Your help is appreciated.
Joyce
September 7, 2003 at 2:48 pm
Even if you had a memory leak, the only reason it should affect performance would be if you were doing a ton of disk swapping. Have you rebuilt indexes/statistics? What kind of server, and what sort of performance, were you seeing before? SQL using 1.7GB isn't a bug; it just means it has grabbed all that is available.
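If you haven't rebuilt them yet, here is a minimal sketch of what that looks like on SQL 2000 (the table name is just a placeholder):
DBCC DBREINDEX ('dbo.YourTable')  -- rebuild all indexes on one table; also refreshes its statistics
UPDATE STATISTICS dbo.YourTable   -- or just refresh statistics, which is much lighter
EXEC sp_updatestats               -- or refresh out-of-date statistics on every table in the database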
Andy
September 7, 2003 at 3:35 pm
Thank you for taking the time to help me. I've done a ton of reading, but I'm fairly new to SQL. Unfortunately, this is a new version of the application software, and the previous versions used Pervasive on NetWare servers, so I don't have any prior statistics on the software or the server. However, I've been told that they did not have any performance issues with the old version. One of my concerns is that the application may be poorly written, and I am not knowledgeable enough to tell when it's the application and when it's the SQL Server. If it's SQL, then it's my responsibility to correct the problem, but if the application is at fault then the vendor will have to correct it. I do know that the vendor did a large data migration when the new application was installed, but other than that I don't have much info.
I do understand that SQL uses as much RAM as is available and will not release it until the system requests it or the service is stopped. I should have asked: would throttling the amount of RAM SQL uses help my situation or make it worse? My instincts tell me that if I throttle the RAM, the app will have to go to the disk more often, and that will make the problem worse...is my thinking correct? As far as rebuilding indexes/statistics, I have been hesitant because of my lack of knowledge. Would you kindly point me to best practices or some similar reading on how to go about these tasks safely? Would compacting the DB help, or does that just return space? Your thoughts and assistance are greatly appreciated.
A most humble newbie -
Joyce
September 7, 2003 at 9:20 pm
Have a look at the auto update statistics and auto create statistics topics in SQL Server Books Online.
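To check or change those options directly, something like this works (substitute your own database name):
EXEC sp_dboption 'YourDatabase', 'auto create statistics'   -- report the current setting
EXEC sp_dboption 'YourDatabase', 'auto update statistics'
ALTER DATABASE YourDatabase SET AUTO_CREATE_STATISTICS ON   -- turn them on
ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON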
September 7, 2003 at 10:58 pm
Joyce,
We found on one of our servers (a Dell PowerEdge very similar to yours, dual Xeon and 2GB of RAM) that everything performed at its best when we restricted SQL to 1GB of memory. This allowed the OS and the other applications that run on the server to have their own chunk of memory rather than fighting with SQL or swapping to get enough to run. We played and tweaked to find the optimum memory for SQL, and 1GB works fine. Just a suggestion.
Mark
September 8, 2003 at 1:38 am
Read this:
Try putting the data and log files of the SQL database on a partition formatted with a 32 KB or 64 KB cluster size (not the default 4 KB). SQL Server's page size is 8 KB, so if you use the default cluster size when formatting the partition, it takes two hard disk hits (4 KB each) to read one page. But if you use a 32 KB cluster size, one hard disk hit brings four pages into memory at once, which is much faster, because the slowest part of the whole system is reading from and writing to the hard disk.
I use 32 KB rather than 64 KB, because with 64 KB you may end up with a lot of empty pages in memory (wasted memory, and your database may grow larger).
I hope this helps you solve your problem.
Alamir Mohamed
Alamir_mohamed@yahoo.com
September 8, 2003 at 2:28 am
Have you tried running the stored procedures in Query Analyzer with Show Execution Plan turned on? That should tell you if one part of the query is eating up the resources. What size is the database?
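You can also get the plan in plain T-SQL if that's easier (the procedure name is just a placeholder):
SET SHOWPLAN_TEXT ON    -- show the estimated plan as text instead of executing
GO
EXEC dbo.YourProcedure
GO
SET SHOWPLAN_TEXT OFF
GO
SET STATISTICS IO ON    -- or actually run it and report reads per table
EXEC dbo.YourProcedure
SET STATISTICS IO OFF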
September 8, 2003 at 2:38 am
Joyce,
Two quick points, the first for any users: if you think you have a memory leak, you should log a call with Microsoft, although your situation doesn't sound like a memory leak.
The second is stored procedures. If your application has been migrated from another DBMS, then it is probably not using stored procedures (SPs). You should get your vendor to look at creating and using some SPs, as this can improve your application's performance by huge percentages. I would do this before trying to get too fancy with other SQL options.
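Even wrapping the most frequently run queries goes a long way. As a sketch (all the names here are invented), the pattern is simply:
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID int
AS
    -- SQL Server can cache and reuse the compiled plan for this query
    SELECT OrderID, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
GO
-- and the application then calls:
EXEC dbo.usp_GetOrdersByCustomer @CustomerID = 42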
Declan
September 8, 2003 at 3:42 am
Just a thought, but if you are migrating from another DBMS, are you sure you have all the indexes set correctly? (Or, better said: do you have indexes at all?)
Otherwise, if your server is dedicated to SQL Server (i.e. no other application runs on it), you could limit the amount of memory used by SQL Server to the total server memory minus 256 MB, like this:
EXEC sp_configure 'show advanced options', 1   -- 'max server memory' is an advanced option
RECONFIGURE WITH OVERRIDE
EXEC sp_configure 'max server memory (MB)', 1700
RECONFIGURE WITH OVERRIDE
Bye
Gabor
September 8, 2003 at 4:12 am
The easiest way to rebuild the indexes and update stats at the same time is to run a maintenance job via the wizard. I agree with the posting above: if you migrated from a different platform you probably don't have stored procedures (which is OK, just not optimal, but they aren't going to rewrite it for you), and you may not have much in the way of indexes. I'd start by making sure you have a primary key on every table and, if possible, that every foreign key is indexed.
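A quick way to find the offenders (a sketch against the SQL 2000 system tables; the index in the second statement uses made-up names):
-- user tables with no primary key defined
SELECT name
FROM sysobjects
WHERE xtype = 'U'
  AND OBJECTPROPERTY(id, 'TableHasPrimaryKey') = 0
-- indexing a foreign key column looks like this
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)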
Andy
September 8, 2003 at 5:43 am
Based on your descriptions, it certainly sounds like the indexes are not set efficiently. Here's a suggestion to help define those indexes easily: run the Profiler utility to capture all SQL statements to a trace file for a period of time during normal loads, then run the Index Tuning Wizard and feed it the trace file. It will check your indexes and recommend additions or drops as needed. See the Index Tuning Wizard under "Creating and Maintaining Databases" in SQL Server Books Online for more info.
Mark
September 8, 2003 at 11:07 am
One other option that made my life MUCH easier: I had the opportunity to follow the concept of spreading files around the server. In other words, my data partition is in its own JBOD with its own controller card and a lot of drives running RAID 5E, my log partition is in its own JBOD with its own controller card running RAID 1E, TEMPDB is on its own drives with its own controller running RAID 0, and the system partition is on its own drives and controller running RAID 1. ALL of my table indexes are built (by hand) with the SORT_IN_TEMPDB option turned on.
Once I expanded the system like that, my response time dropped like a rock and my users sang songs of worship to me around the campfire nightly. Once they got complacent again, I maxed out the RAM and processors in the machine, set the appropriate switches in the OS and in SQL2K, and the accolades resumed (for a couple of weeks). Now, when they complain about speed, it's still dozens of times faster than their previous best.
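For reference, the index option I mean looks like this (the table and column names are placeholders):
-- build the index's intermediate sort runs in tempdb (which sits on its own
-- RAID 0 spindles here) instead of in the user database
CREATE INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH SORT_IN_TEMPDB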
September 9, 2003 at 5:21 am
Thanks to everyone for all the help. It seems to have been a stored procedure causing all the problems, and we are running smoothly now.
Thanks again,
Joyce