December 12, 2007 at 10:24 am
Hi
I have a SQL 2005 SP2 box on Server 2003 SP2. It is an HP DL580 with 4 physical dual-core chips with hyper-threading enabled, so 16 logical CPUs are presented to the OS. Over the last two months this has been fine. In the last two days SQL will only use 1, maybe 2, logical CPUs when doing a select, for example. I have checked the SQL, HP and Windows logs, all of which are clean of errors. The SQL log shows 16 available CPUs; it just does not use them. The processor settings for the instance are correct as well, so I am drawing a blank on this.
Does anyone have any ideas?
Thanks.
December 12, 2007 at 12:56 pm
By processor settings being correct, I assume that you mean the affinity settings are correct. Is max degree of parallelism (server properties, advanced) set to 0? Also bear in mind that just because a query is large, it doesn't mean that it can be split up to use all available processors.
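If it helps, here is a minimal sketch of checking that setting from T-SQL rather than the Server Properties dialog (nothing in it is specific to your box; sp_configure is the standard route):
[code="sql"]
-- Show the current server-wide 'max degree of parallelism' value.
-- 0 means a parallel plan may use all available schedulers.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';

-- If it turns out to be capped, this would put it back to the default of 0:
-- EXEC sp_configure 'max degree of parallelism', 0;
-- RECONFIGURE;
[/code]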
December 13, 2007 at 2:21 am
Yes, affinity is set correctly.
I arrived at work today and the DBA informed me it is now all working. Neither he nor I have done anything to resolve it.
I have raised a support ticket with Microsoft to see what they find as well.
Your comment about queries not being able to use all CPUs interests me. The main problem was a delete statement, deleting 17 million rows.
Thanks
December 13, 2007 at 2:44 am
I think there's a misunderstanding of how processors are used. By the way, I'd seriously advise you to disable hyper-threading. Regardless of the maxdop setting, SQL will use all the procs, but most individual queries tend to use only one thread on one proc. So if you're using a 16-way box on your own, your query will likely only use one thread (one proc) unless it generates a parallel plan. It's only in a multi-user situation that all the procs get used. Use umsstats to see workers and threads.
I will give a warning that really large parallel plans will likely block themselves with HT enabled; this may appear to give uneven processor use.
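(umsstats is the old SQL 2000 DBCC command; on a 2005 box the scheduler DMVs should give roughly the same picture of workers per logical CPU. A rough sketch, nothing box-specific:)
[code="sql"]
-- One row per visible scheduler (logical CPU) with current worker/task counts.
SELECT scheduler_id,
       cpu_id,
       status,
       current_workers_count,
       active_workers_count,
       runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;   -- exclude the DAC and hidden schedulers
[/code]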
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
December 13, 2007 at 2:56 am
OK, I see your point.
I am no fan of hyper-threading in general; it is only on because HP enables it by default, and the performance of the server was great when we first had it. I am aware of the problems with logical processors sharing the L2 and L3 cache.
I will disable it after the weekend (it's working, so I don't want to upset it :)).
I will post back next week and let you know how the hyper-threading pans out and what, if anything, Microsoft find.
Thanks
January 2, 2008 at 3:16 am
Hi
It turned out to be the SQL optimizer deciding not to use more than one CPU. Our DBA made a change by adding a hash join hint, and this appears to force it to use all CPUs. At least this allows us to look more closely for the problem. The DB is 1.6 TB with about 17 million rows being deleted, so Microsoft think that fragmentation is the root problem. The DB has been fine for months, though, so maybe we have just hit a limit on data or suchlike.
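For anyone finding this thread later, a join hint looks something like the sketch below. The table and column names are made up, and a hint like this is really a diagnostic tool rather than a fix, since it overrides the optimizer for every execution:
[code="sql"]
-- Force a hash join between two (invented) tables; on a large table this
-- often also produces a parallel plan.
SELECT f.OrderID, d.ProductName
FROM dbo.FactOrders AS f
INNER HASH JOIN dbo.DimProducts AS d
    ON d.ProductID = f.ProductID;

-- The same effect can be had with a query-level hint:
-- ... OPTION (HASH JOIN);
[/code]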
Thanks
January 2, 2008 at 10:11 am
It can often be both faster and more 'concurrent' to do large-scale deletes in smaller batches on some indexed key. You need a small enough batch size to ensure an index-seek type query plan. This should prevent a number of things including a) blowing up the tlog, b) overwhelming I/O (be sure to put in a WAITFOR to allow any blocked processes to run and also to let the I/O system catch up), c) severe (probably table) lock escalation.
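A rough sketch of that pattern, with made-up table, column and batch-size values:
[code="sql"]
-- Delete old rows in small batches so each pass stays an index seek,
-- the tlog doesn't balloon, and locks stay at the row/page level.
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable
    WHERE LoadDate < '20070101';

    SET @rows = @@ROWCOUNT;

    -- Pause so blocked processes can run and the I/O system can catch up.
    WAITFOR DELAY '00:00:02';
END;
[/code]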
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
January 3, 2008 at 8:48 am
Hi
Thanks for everyone's comments.
We have it working now and have taken on board all the tips.
Again thanks.