Prioritising server resources for a stored procedure: how can it be done?

  • Hi All,

     

    I currently have a situation on my SQL box: the database is around 2 terabytes in size, the server has 4 GB of memory of which SQL Server only uses about 1.7 GB I believe (I still don't know why), and a quad Xeon CPU.

    I have my loaders, which need to be running at full potential all the time, and I also have users running queries on the database. What I have found is that the more queries/reports users run against the database, the more the loader slows down.

    What I want is to give more server resources to the loader; the reports/queries can wait.

    Will this be possible, and what is the way forward?

     


    Kindest Regards,

    John Burchel (Trainee Developer)

  • No, you can't do that. Memory is limited by the O/S and the SQL Server edition. If you have Standard Edition SQL then you're already at max memory; if you have Advanced Server/Enterprise Server O/S and Enterprise Edition SQL Server (assuming all 32-bit SQL 2000) then you can use extra memory (8 GB with Win2k, 32 GB with Win2k3). Check out the AWE settings.

    You need to isolate where your bottleneck is (probably memory or I/O, but possibly code) and then work on it; the sketch below may help narrow it down. Adding memory is usually an easy performance boost.
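
    As a rough starting point on SQL 2000 I'd look at something like the following (on 2005 you'd query sys.dm_os_wait_stats instead); this is only a sketch, not tuned to your box:

        -- Aggregate wait statistics since the counters were last cleared.
        DBCC SQLPERF(WAITSTATS)

        -- What's running now, and is anything blocked?
        -- blocked <> 0 means that spid is waiting on the spid shown in the blocked column.
        SELECT spid, blocked, lastwaittype, waittime, cpu, physical_io, cmd
        FROM master..sysprocesses
        WHERE spid > 50          -- skip the system spids
        ORDER BY waittime DESC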

    btw, welcome to OLTP vs DSS hell!

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • I thought I'd offer some comments, though they may be obvious.

    If you are using Enterprise Edition, or SQL 2005, make sure that either the /3GB switch is in boot.ini, or AWE is enabled, or both. Also, the account under which the SQL Server process runs needs the "Lock pages in memory" right. I've seen mention of a /PAE switch, but we don't seem to need it for PAE to work on Windows 2003.
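
    For example (a sketch only, and the memory number is a placeholder), enabling AWE on 32-bit Enterprise Edition boils down to sp_configure; the service account still needs "Lock pages in memory", and on SQL 2000 the AWE change takes effect after a restart:

        -- Run as sysadmin in Query Analyzer / Management Studio.
        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE

        EXEC sp_configure 'awe enabled', 1            -- let SQL Server address memory above 4 GB
        RECONFIGURE

        EXEC sp_configure 'max server memory', 3072   -- cap in MB; leave room for the O/S
        RECONFIGURE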

    If you have a clustered index, make sure it has few columns (preferably no more than one or two). Be aware that searches which use a non-clustered index do bookmark lookups back into the clustered index, and that a wide clustered key slows inserts. Also, every non-clustered index is much bigger (and slower to update) if your clustered index key is big, since every bookmark lookup entry carries the entire clustered key. (I made this mistake myself, until recently...)
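
    As a sketch of what I mean (table and column names are invented), keep the clustered key to a single narrow column and let every non-clustered index carry just that small key:

        -- Narrow, ever-increasing clustered key: cheap inserts and small non-clustered indexes.
        CREATE TABLE dbo.LoadedRows (
            RowID      int IDENTITY(1,1) NOT NULL,
            LoadDate   datetime          NOT NULL,
            CustomerID int               NOT NULL,
            Amount     money             NOT NULL,
            CONSTRAINT PK_LoadedRows PRIMARY KEY CLUSTERED (RowID)
        )

        -- Each entry here carries only (LoadDate, RowID), i.e. a 4-byte bookmark,
        -- instead of dragging along a wide multi-column clustered key.
        CREATE NONCLUSTERED INDEX IX_LoadedRows_LoadDate ON dbo.LoadedRows (LoadDate)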

    Although I don't currently use this technique, I think loading data into a staging table and then doing an insert all-at-once might be a good approach in some cases.
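
    Very roughly (table names and the file path are made up), that pattern looks like this:

        -- Load the raw file into an unindexed staging table first...
        BULK INSERT dbo.Staging_LoadedRows
        FROM 'D:\loads\todays_file.txt'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK)

        -- ...then move it into the real table in one set-based statement,
        -- so the indexed table is only touched once per batch.
        INSERT INTO dbo.LoadedRows (LoadDate, CustomerID, Amount)
        SELECT LoadDate, CustomerID, Amount
        FROM dbo.Staging_LoadedRows

        TRUNCATE TABLE dbo.Staging_LoadedRows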

    Setting LOCK_TIMEOUT to a low value in the reporting queries might help: it stops a query from waiting if the table or key range it needs is locked for long, and you can then trap the error and retry later.
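
    For example, in a reporting query (using @@ERROR so it also works on SQL 2000, where there is no TRY/CATCH; the table name is invented):

        -- Give up after 2 seconds instead of queuing behind the loader's locks.
        SET LOCK_TIMEOUT 2000

        SELECT CustomerID, SUM(Amount) AS Total
        FROM dbo.LoadedRows
        GROUP BY CustomerID

        -- Error 1222 = 'Lock request time out period exceeded'; only the statement is
        -- aborted, so the batch can trap it and decide to retry later.
        IF @@ERROR = 1222
            PRINT 'Table busy - please run the report again in a few minutes.'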

    It has occurred to me that if loading the data requires fewer indexes (or a different set) than querying it, ideally you would keep two copies of the data in two databases, on separate physical disks (or computers), and replicate from one to the other periodically. I'm not sure the overhead of replication is worth it for that, though. Indexes typically cause the most overhead for inserts, I've found.

    SQL 2005 has SNAPSHOT ISOLATION, which could be of value, but it has some overhead so I've decided not to use it for now. It increases the size of your rows, and adds the overhead of keeping track of versions in tempdb. In a large data warehouse I would think the additional overhead of 12 bytes per row might be significant, if your rows are very small already.
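
    For reference, it's a database-level setting in SQL 2005 (the database name here is made up), and sessions then opt in per transaction; the version store lives in tempdb:

        -- Allow readers to use row versioning instead of shared locks.
        ALTER DATABASE WarehouseDB SET ALLOW_SNAPSHOT_ISOLATION ON

        -- Optionally make versioned reads the default for plain READ COMMITTED queries
        -- (the database needs to be free of other connections while this switches on).
        ALTER DATABASE WarehouseDB SET READ_COMMITTED_SNAPSHOT ON

        -- A reporting session would then run:
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT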

    SQL 2005's partitioning might help, since the partitioned indexes would be smaller.
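
    A minimal SQL 2005 sketch (boundary dates and names invented), partitioning a load table by month:

        -- Map date ranges to partition numbers...
        CREATE PARTITION FUNCTION pfLoadDate (datetime)
        AS RANGE RIGHT FOR VALUES ('20060101', '20060201', '20060301')

        -- ...and map those partitions to filegroups (all on PRIMARY here for simplicity).
        CREATE PARTITION SCHEME psLoadDate
        AS PARTITION pfLoadDate ALL TO ([PRIMARY])

        -- Indexes created on this table inherit the same partitioning, so each one
        -- is effectively a set of smaller per-month indexes.
        CREATE TABLE dbo.FactLoads (
            LoadDate   datetime NOT NULL,
            CustomerID int      NOT NULL,
            Amount     money    NOT NULL
        ) ON psLoadDate (LoadDate)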

    It goes without saying that the log should be on a separate drive if possible (though ours isn't...just a single mirror), and ideally indexes would be too.

    Perhaps you could use exclusive table lock hints on INSERT (or UPDATE or DELETE) statements in the loading process, but I imagine that is a bad idea...
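
    That would look something like the statement below (target and source tables are made up). It saves the loader from taking thousands of row locks, but it blocks every reader of the table for the duration, so it really only makes sense inside a dedicated load window:

        -- TABLOCKX takes an exclusive table lock up front, for this statement only.
        INSERT INTO dbo.LoadedRows WITH (TABLOCKX) (LoadDate, CustomerID, Amount)
        SELECT LoadDate, CustomerID, Amount
        FROM dbo.Staging_LoadedRows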

     
