May 19, 2005 at 10:17 am
Is there a way to assign a SQL batch/connection/statement as a low-priority task within SQL Server? I understand that under normal circumstances a low-priority task may indirectly become high priority (business speak) if it is causing blocking, but if I have a long-running maintenance task against something that is not otherwise in use on the server, can I reduce its priority? I'd like to free the CPU for the other (important) tasks while it runs, to prevent performance problems. It seems like it should be technically possible, since I believe BOL mentions that the INDEXDEFRAG process runs at low priority, but I don't know if that ability has been exposed to us.
Thanks.
May 20, 2005 at 12:40 am
I don't know of any execution priority you can set, but you can set deadlock priority:
set deadlock_priority low -- I wish to be the victim in case of deadlock !
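For example, in a long-running maintenance batch it might look like this (the object names are made up):
set deadlock_priority low                    -- lose any deadlock rather than win it
DBCC DBREINDEX ('dbo.MyBigTable', '', 90)    -- hypothetical long-running work
set deadlock_priority normal                 -- restore the default before the batch ends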
Johan
May 20, 2005 at 3:00 am
Like alzdba says, there is no way to give a 'process' lower priority in SQL Server. Where did you find the information that DBCC INDEXDEFRAG should be a low priority process? I did not find anything regarding that in BOL.
May 20, 2005 at 7:08 am
I think that you are referring to Online vs. Offline operations (From Books Online):
DBCC INDEXDEFRAG is an online operation. While this operation is running, the underlying table is available to users of the database.
DBCC DBREINDEX is an offline operation. While this operation is running, the underlying table is unavailable to users of the database.
AFAIK, there is no way to specify that a particular user/spid has a lower priority than another. This is possible with Windows processes, but not with SQL Server users.
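For reference, the syntax of the two commands quoted above looks roughly like this (the database, table, and index names are made up):
DBCC INDEXDEFRAG (MyDatabase, 'dbo.MyBigTable', 'IX_MyBigTable_Col1')   -- online, table stays available
DBCC DBREINDEX ('dbo.MyBigTable', '', 90)                               -- offline, rebuilds all indexes with fillfactor 90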
May 20, 2005 at 7:52 am
That was the answer I basically expected. I can't find what I was reading that I thought mentioned it being low priority, so it may not have been BOL, and it may have been a misinterpretation on my part or the author's. Whatever it was, it was about maintenance plans and DBREINDEX vs. INDEXDEFRAG, so it could have been a generalization that I misread. It was a year ago, so it's hard to remember exactly.
Thanks everyone for your input.
May 22, 2005 at 12:32 pm
Ken Henderson covers this in "The Guru's Guide to SQL Server Stored Procedures, XML, and HTML". I've considered it and can see the benefit of it. Hopefully, SQL Server 2005 gives us some of this functionality out-of-the-box.
See Chapter 20, page 578, xp_setpriority (calls Windows API SetThreadPriority())
May 23, 2005 at 8:08 am
THANKS!!!!!!!!!!!
This was exactly what I was hoping to find! I got very lucky in that one of our new hires happened to have that book. For everyone else: xp_setpriority is an extended stored procedure that the author (Ken Henderson) wrote which calls the Windows API function SetThreadPriority to change the thread priority of the current connection. The book comes with a CD-ROM containing the compiled DLL (and source code) that can be copied to your SQL Server BINN directory and registered on your instance.

Since threads already run at their maximum allowed priority (NORMAL), raising the priority has no real-world effect, but it does let you lower the priority of a connection's thread. The author warns heavily, with good cause, to use extreme care: because of worker thread pooling and re-use, failing to return the thread to normal priority when your batch finishes will cause other connections to run at low priority. Note that this must be cleaned up per batch, not just before the connection is torn down. In my testing, EXECing dynamic SQL from within a Query Analyzer batch does not appear to switch worker threads during that script's execution (I had about ten dynamic SQL statements checking thread IDs within a single script), but different runs of the same script from the same QA connection report different worker thread IDs, hence the need to reset the thread priority immediately when you are done.
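To make the pattern concrete, a batch would look something like the sketch below. I'm assuming xp_setpriority takes a single priority name; check the book's CD for the exact parameters. The reset has to run in the same batch, whatever happens in between:
EXEC master..xp_setpriority 'LOW'      -- assumed syntax: lower this connection's worker thread priority
DBCC INDEXDEFRAG (MyDatabase, MyBigTable, MyIndex)   -- hypothetical long-running maintenance step
EXEC master..xp_setpriority 'NORMAL'   -- always reset before the batch ends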
Another concern (besides careful error handling to ensure the priority gets reset) is the nature of the work being done: if it runs at low priority but takes locks that block other, normal-priority tasks, you have a problem on your hands. The author didn't mention that issue, but I figured I would while I was at it.
Something else the author points out: if you have changed the server from the default setting to fiber mode ("lightweight pooling" = 1), this technique cannot be used.
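You can check that setting with sp_configure (it's an advanced option, so advanced options have to be visible first):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'lightweight pooling'    -- run_value = 1 means fiber mode is on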
Failure to return the priority to normal could be next to impossible to fix without shutting down SQL Server, since you generally don't know which thread ID you were running under, and even if you did, you can't specifically request it. I'm planning to create a log table and a single proc to set and reset the priority, so that whenever the priority is changed, the event and the current thread ID get logged to the table. Then, when a programmer/DBA error leaves a thread at low priority, you at least have a way to find the thread ID and keep running batches until you are lucky enough to be assigned that thread and can reset it. Assuming, of course, that whatever went wrong in the first place didn't also roll back the log table.
For those curious, kpid in the following is the thread ID of the current connection:
SELECT kpid FROM master..sysprocesses WHERE spid=@@spid
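And here is a rough sketch of the logging idea, again assuming xp_setpriority takes a single priority name (the real signature is on the book's CD, so treat this as pseudo-code):
CREATE TABLE dbo.ThreadPriorityLog (
    LogTime  datetime    NOT NULL DEFAULT (GETDATE()),
    SpId     smallint    NOT NULL,
    KpId     smallint    NOT NULL,   -- worker thread id from master..sysprocesses
    Setting  varchar(10) NOT NULL    -- e.g. 'LOW' or 'NORMAL'
)
GO
CREATE PROC dbo.usp_SetConnectionPriority @Priority varchar(10) AS
    -- record the change along with the current worker thread id
    INSERT dbo.ThreadPriorityLog (SpId, KpId, Setting)
    SELECT @@SPID, kpid, @Priority FROM master..sysprocesses WHERE spid = @@SPID

    EXEC master..xp_setpriority @Priority   -- assumed call; adjust to the proc's real parameters
GO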