SS2005 Not Utilizing Machine Fully

  • I'm running SS2005 Enterprise SP2. The databases back up to about 28GB: one 27.5GB production DB and a smaller one I use for nightly data validations.

    We ran this on a machine with too little RAM and too few disks and had some performance issues, so we got a new machine.

    Dell with four quad-core Intel Xeon X7350 2.93GHz CPUs and 128GB RAM. 3 logical drives (6 physical). OS is Server 2003 Enterprise SP2.

    No doubt, it is faster. But by my thinking, every table, index, and view could be stored in memory, yet I'm still seeing logical reads on pretty heavily used tables.

    The service has been running for a week of pretty heavy usage, yet the process is under a GB.

    This box is devoted to SQL Server. The only other usage it gets is me running perfmon or moving backups to/from another server. Even as far as the databases themselves go, this server has one production database and a small DBA database full of checks I make overnight on data.

    With that background out of the way, I'm looking to see how to make SS2005 use the box fully: grab 100GB of the RAM as needed and not release it unless something else really needs the RAM.

    Are there settings I need to change in SS2005? Or in Server 2003? I recall seeing something about needing Datacenter because the other editions had a RAM limit, but I read that limit was removed in SP2 for 2003.
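    The settings most often involved here are SQL Server's own memory floors and ceilings. A minimal sketch (the values below are illustrative, not a recommendation):

    ```sql
    -- Sketch: pin the buffer pool between a floor and a ceiling.
    -- 'min server memory' stops SQL Server from shrinking below the floor
    -- once it has grown past it; 'max server memory' leaves headroom for
    -- the OS. Both values are in MB.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory', 102400;  -- 100GB floor (example)
    EXEC sp_configure 'max server memory', 112640;  -- 110GB ceiling (example)
    RECONFIGURE;
    ```

    Note that even with a min set, SQL Server only grows its cache as the workload actually touches data; it will not pre-load 100GB on startup.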

  • I'm assuming you're running the x64 version of the OS and SQL?

    How heavy is the workload? How many queries per day does it handle?

  • Okay, I left out that tiny but vital fact. Yes, both SS2005 and Server 2003 are 64-bit.

    As far as queries per day, I'm not sure where to grab a precise number offhand, but I'm seeing around 15k/day for three weeks, then easily 50k/day in the fourth week (users have a four-week cycle to enter data and always wait until the last second).

    Speed could be improved in those first three weeks, but it's the fourth week, with all the reports and data entry, when we really need to use this memory.

    Since I first posted, the memory footprint has increased from 700MB to 1.2GB. I would wager that more than 500MB of data has been accessed, but maybe some of that was already cached.
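    Rather than wagering, the buffer pool contents can be measured directly. A sketch against a DMV available in SQL Server 2005:

    ```sql
    -- Sketch: how much of each database is actually held in the buffer pool.
    -- sys.dm_os_buffer_descriptors exposes one row per cached 8KB page.
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) * 8 / 1024   AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_mb DESC;
    ```

    Comparing this against the 27.5GB database size shows how much of the data has genuinely been touched and kept.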

  • It depends on the actual workload per query, but it sounds like you have too much machine.

  • Yes, no doubt.

    Currently we have only a portion of our users on this system, and the DB choked on a single 8-core, 16GB machine with both the MDF and LDF on the same drive. Running reports with large footprints (which are common towards the end of the month) would drive the DB to a halt with disk I/O waits because it couldn't hold all the information in memory. Just constant thrashing that would turn a 1-minute report into a 10-minute one because of I/O waits. The info was in memory, but got flushed due to memory pressure.

    If the DB is having memory issues on a machine with 128GB of RAM, there is an issue that must be addressed before we can increase the user load. Any performance issues will just be magnified.

  • I have to ask: are there any actual performance issues with queries and reports? Or is your only concern that the system is not using all the memory available?

  • No bottlenecks right now, but because SQL Server keeps releasing memory, reports are slower than they need to be.

    We have years of unchanging history that is still reported on in various ways (some canned, many not). There is no reason to keep loading GBs of data over the course of a day, release it, and then load it all again the next day.

    I feel a difference in the system when the memory is released, and this is only going to get worse when more users are on the system.

  • Make sure the account under which SQL Server runs has the Lock Pages in Memory right. In your SQL Server error log you might be seeing messages about memory being paged out.

    I would also add more disks to spread out the I/O.
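    The Lock Pages in Memory right itself is granted through Local Security Policy (User Rights Assignment), and SQL Server 2005 x64 notes at startup whether it is using locked pages. A sketch for checking the error log (sp_readerrorlog is undocumented but long-standing; its fourth argument form may vary by build):

    ```sql
    -- Sketch: search the current SQL Server error log (0) for the
    -- startup message confirming locked pages, and for warnings that
    -- the process working set has been paged out by the OS.
    EXEC sp_readerrorlog 0, 1, 'locked pages';
    EXEC sp_readerrorlog 0, 1, 'significant part';
    ```

    If the "significant part of sql server process memory has been paged out" warning appears, the OS has been trimming SQL Server's working set, which matches the "releasing memory" behavior described above.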

  • I've got to say that with that much memory and that many CPUs, and only a 28-30GB database, if you are still having performance issues then your problems are either I/O related and/or the queries are badly written.

    Logical reads are reads from memory, so if you are seeing very high numbers, your queries are probably not optimized, or you don't have the right indexes. Probably both.

    And finally, after spending all that money on this new hardware, providing only 6 drives is not going to allow you to expand. You will always be constrained by the limitations of the I/O subsystem.
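    Those high-logical-read queries can be found from the plan cache. A sketch using DMVs present in SQL Server 2005:

    ```sql
    -- Sketch: top cached statements by cumulative logical reads.
    SELECT TOP 10
           qs.total_logical_reads,
           qs.execution_count,
           qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
           SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                     (CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                      END - qs.statement_start_offset) / 2 + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;
    ```

    The statements at the top of this list are the ones worth checking for missing indexes or scans.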

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs
