Adding RAM degrades performance?

  • Hello.

    We're working with NT4 SP6 and SQL Server 7.0 SP2 on computers with 64MB of RAM.

    After a 64MB upgrade (128MB total RAM), SQL Server performance seems to be worse when we have many stored procedure or query calls per second.

    If we stop and restart the SQL Server service, SQL Server seems to recover and starts working with better performance again.

    We've configured "dynamically configure memory" with a range of 0MB min to 127MB max.
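
    For reference, this is the range as it appears via sp_configure from Query Analyzer (a minimal sketch, assuming sysadmin rights; these are the standard SQL Server 7.0 option names behind the Enterprise Manager memory setting):

      -- Expose the advanced options, then inspect the dynamic memory range.
      -- The run_value column shows the setting SQL Server is actually using.
      EXEC sp_configure 'show advanced options', 1
      RECONFIGURE WITH OVERRIDE
      GO
      EXEC sp_configure 'min server memory (MB)'
      EXEC sp_configure 'max server memory (MB)'
      GO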

    What are we doing wrong?

    Thanks in advance.

    Josep.

  • Apologies if this sounds flippant, but you're trying to run a memory-hungry database server on a machine that is totally inadequate. I know from past experience of running SQL 6.5 on a 386DX processor with 32MB of RAM that it will work, but it won't work very well. With the price of RAM as it is, I assume this is because you are unable to use something like a 512MB upgrade due to a hardware limitation.

    On a more constructive note, how have you got the priorities set on the NT Server process?

    Regards

    Simon

  • I'll have to agree with that point; 128MB is way short of enough. It's cheap enough these days. Not to kick you around, but our standard end-user machine at work has 256MB! I use 512MB on my workstations. Get them to buy the RAM!

    I'm not familiar with any problem where performance degrades with more RAM. Initially I suppose you might see something of a delay as SQL caches more data and procedure plans, but this normally settles down pretty quickly. I think a more likely explanation is that with 64MB it had less to swap to disk; with 128MB of stuff it's got more to swap, and the drive access time is contributing to the delay. Just my guess.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

  • OK, we've set maximum priority for user applications, because we are running our applications on the same server computer.

    We know it's better for SQL Server to have a dedicated computer, but we have only 2 clients per machine for each of our customers. And increasing the RAM of every customer installation is too expensive given the number of installed customers.

    But my question is why SQL Server performs worse when we've doubled the memory. Don't you think it should have, at least, the same performance?

    We're analyzing Performance Monitor, and we're seeing that the processor's "% Privileged Time" is usually much greater than its "% User Time".

    What's the reason?

  • It should actually show better performance in most cases, but there are factors that can offset the increase. One is the type of RAM that was in the machine before versus what you put in. If the new stuff is slower, then all the sticks will run at the slower clock rate. It's also rare, but the BIOS could have mis-detected the new RAM and slowed it all down. Another thing is to make sure your page file is at least 2.5 times as large as the physical RAM (with 128MB of RAM, that's roughly 320MB), since there may be a higher number of swaps: SQL will try to use the memory, and when it is full, paging occurs (this sounds far-fetched, but memory works oddly sometimes). Finally, look at what is going on with the memory under PerfMon (it would be nice if you could compare 64MB and 128MB to see if anything specific happens).

    "Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)

  • Processor time will increase if you have increased activity due to page swapping, which goes back to more memory, more stuff to swap to disk. Shouldn't be hard to try to prove/disprove it.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

  • Have you altered any of the parameters in the SQL setup regarding dynamic allocation of memory? Can you tell us the CPU in the machine? Have you actually tried changing the priority so background processes benefit? Are they SCSI or IDE disks? Are they fragmented? Do you have a caching controller? Is on-drive caching enabled on the drives? Can you try one machine with more memory to see what happens?

    If you've got a lot of customers, obviously you're not going to pay retail prices for RAM - about how many are there?

    Regards

    Simon

  • We haven't modified any parameters of the default memory configuration. We've been changing the dynamic memory parameters for testing purposes on our test machines, but there are only slight differences in the results.

    The machines are Pentium III 450s. We can't change the priority of the applications because we've got end-user applications running on the server.

    The only disk in each machine is IDE, but we are going to install a disk defragmenter to see if there is a fragmentation problem (it could be on pagefile.sys).

    We've tried a machine with the same hardware and OS configuration that was originally installed with 128MB of memory. The performance shown in Performance Monitor and the query times from our application are much better.

    This is why we think SQL Server has some problem with the dynamic memory configuration when you increase memory without reinstalling or changing some parameters.

    The memory cost isn't the only factor to consider for the memory upgrade. We would have to send a person to every customer (they're not in the same city) to add the memory.

    Thank you for your support. We haven't got the answer yet, but we are getting a couple of ideas to test.

  • Found this out today, and it could be a possibility for your issue. Some boards do not fully support some RAM types, and mixing them in can be a factor. Also, mixing RAM of two different types or revisions can cause the system not to utilize it properly. For example, you have 128MB that is PC133 and insert an additional 128MB stick that is PC66, which the board does not support. The board happens to recognize it but cannot use it properly, so it is not able to use more than 128MB and is now being slowed down by the additional memory. The key is to make sure the RAM is all consistent and supported, not mixed and matched, even if you have to re-buy all the RAM.

    "Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)

  • The first thing I can advise is to buy more RAM =)

    The second piece of advice: don't give SQL Server all the memory on your box. Leave some for the OS and other applications. For NT4 I'd say leave 32 to 48MB to the OS. Although SQL Server manages its memory dynamically, if you know the memory will be used elsewhere anyway, don't let SQL Server grab it in the first place (see the sketch at the end of this post).

    How large is your data? If it's really large (greater than 128MB, probably), try making the OS file cache smaller. I've already forgotten how to do this on NT4, so you'll have to do some further research if you don't already know it. Set its role as a network server and not a file server (something like that).
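
    One way to cap SQL Server's share (a sketch only, assuming the standard sp_configure option names; the 96MB figure is just an illustrative value for a 128MB box, leaving roughly 32MB for NT and the applications):

      -- 'max server memory (MB)' is an advanced option, so expose it first,
      -- then cap the top of SQL Server's dynamic memory range at ~96MB.
      EXEC sp_configure 'show advanced options', 1
      RECONFIGURE WITH OVERRIDE
      GO
      EXEC sp_configure 'max server memory (MB)', 96
      RECONFIGURE WITH OVERRIDE
      GO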

  • 32 to 48MB is probably the minimum; it's not a bad idea to allocate more if you can. I've had no problems with letting SQL have it all and releasing it as needed. On the box that uses AWE I set aside 512MB for the OS, since the memory size is fixed there.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

  • I experienced similar problems, but the symptoms were a little different. I had a bunch of NT4 servers a few years back running SQL 7 SP1 that would occasionally pop up a message on the console saying that the server was running low on virtual memory.

    The only resolution we could find was to limit the amount of memory SQL used. I can't remember the limit we put on it, but from memory (no pun intended) we left 48MB for NT and any other processes on the server.

  • In perfmon, have a look at the Granted Workspace Memory counter in the SQL Server: Memory Manager object.

    If you are running occasional queries which return a lot of rows, making room in RAM to process the query can take longer when you have more to page out. The workspace memory counter will spike to many MB when this happens.

    If you run some transactions while watching perfmon and Profiler traces, you should be able to get a feel for which transactions are driving which system resources. Capturing a Profiler trace from a production machine and replaying it while watching perfmon is good for a more detailed analysis.

    As a general rule, on machines with limited RAM I agree that it is useful to leave 32 to 64MB RAM for Windows.
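
    If it's easier to spot-check from Query Analyzer than from perfmon, the same counter is exposed through master..sysperfinfo (a rough sketch; the exact object and counter names can vary slightly between builds, so the LIKE patterns are deliberately loose):

      -- Read the workspace memory counter straight from the server's
      -- internal performance counter table instead of the perfmon GUI.
      SELECT object_name, counter_name, cntr_value
      FROM master..sysperfinfo
      WHERE object_name LIKE '%Memory Manager%'
        AND counter_name LIKE '%Workspace%'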

  • Standard set of suggestions:

    -Run Profiler and Perfmon; baseline and log performance before and after changes to the system (see the sketch after this list)

    -More RAM, all identical sticks and the maximum speed your motherboard can handle (check for BIOS updates also)

    -Change SQL maximum memory allocation, give the OS some room to breathe

    -Fix swap file sizing to accommodate the increased RAM; put the page file on a second HD (to eliminate disk contention between the data drive and the swap file)
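
    On the first point, one quick way to keep a before/after record of the SQL Server side is to snapshot master..sysperfinfo into a table and repeat the snapshot after each change (a sketch only; the perf_baseline table name is made up here):

      -- Create a holding table once, then append a timestamped snapshot of
      -- SQL Server's internal performance counters before and after changes.
      CREATE TABLE perf_baseline (
          sample_time   datetime    NOT NULL,
          object_name   nchar(128)  NOT NULL,
          counter_name  nchar(128)  NOT NULL,
          instance_name nchar(128)  NULL,
          cntr_value    int         NOT NULL
      )
      GO
      INSERT perf_baseline
      SELECT GETDATE(), object_name, counter_name, instance_name, cntr_value
      FROM master..sysperfinfo
      GO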

  • I think two items Josep mentioned in an update to the original post may have been overlooked.

    1. <snip> OK, we've set maximum priority for user applications, because we are running our applications on the same server computer.

    How much processor time is being used by the processes associated with the application? Is the application getting more processor time after the memory upgrade?

    2. <snip> We're analyzing Performance Monitor, and we're seeing that the processor's "% Privileged Time" is usually much greater than its "% User Time". What's the reason?

    This is generally the time spent performing system services. These could be explicit requests from the applications, or implicit requests (e.g., paging).

    I would suggest looking at the application that is running alongside SQL Server.
