February 9, 2009 at 5:37 pm
We have a dual Xeon box with a 500 GB and a 1 TB array attached (16 GB RAM). During a data warehouse process (some parts import large files to the local SQL Server; some parts bulk import the file into SQL 2005; some parts are CPU bound) we find that the CPU can easily be pegged at 100% for hours. For part of that time the average disk queue lengths are near zero. The box runs SQL 2005 (x64), which shows it has about 8.7 GB free RAM. Task Manager (as does Perfmon) shows the CPU is pegged at about 100% for the process sqlserver.exe.
What tools can I use to gain greater insight into what is happening? And if it is strictly a CPU problem how can one even begin to guess the appropriate CPU power needed?
TIA,
Barkingdog
February 9, 2009 at 10:34 pm
What is the maximum memory setting? You might increase that, and then look at the ETL queries and see if they can be indexed better. Check the missing index DMVs, which can help you determine whether useful indexes are absent.
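For example, something along these lines should do both: raise the memory cap and surface the top missing-index suggestions (a rough sketch; the 12288 MB value is just an illustration, and weigh the suggestions before creating anything):

```sql
-- Raise max server memory (value is in MB; 12 GB used here as an example)
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory (MB)', 12288;
reconfigure;

-- Top missing-index suggestions recorded since the last restart
SELECT TOP 10
    mid.statement AS [table],
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON mig.index_group_handle = migs.group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```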
February 10, 2009 at 6:09 am
This might seem a little strange, but try lowering the MAXDOP value to half the number of cores on the system (I'm not sure how many cores you have on the Xeons).
I've experienced this in the past, and lowering the MAXDOP value dropped a 25 hour long process to under 4 hours.
Is CXPACKET your largest wait type?
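You can check with something like this (a quick sketch; bear in mind the wait stats are cumulative since the last restart, so run it after a representative workload):

```sql
-- Top waits by accumulated wait time, filtering out common idle waits
SELECT TOP 10
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'WAITFOR')
ORDER BY wait_time_ms DESC;
```

If CXPACKET dominates, that supports trying a lower MAXDOP.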
February 10, 2009 at 7:00 pm
The box is a ProLiant DL380 G5 with 16 GB of RAM. The OS is Windows 2003 Std x64 R2. SQL is SQL 2005 x64 with SP2. It has a single quad-core Xeon EM64T 3 GHz CPU. When I checked this server under Task Manager it typically shows about 8.9 GB of RAM free. That looks good. At the prompting of this thread I checked the RAM available to SQL Server (under Management Studio: right-clicked the server name, Properties, Memory). To my surprise, the max server memory is set to 6 (that's six) GB! What! I will up that to at least 12 GB.
I also noticed that the "Use AWE to Allocate Memory" box was selected. Should I de-select it or is this setting irrelevant with this hardware config?
BTW, I have heard about CXPACKET but I don't know exactly what it is. Still, the longest-running query appeared to be generated by our data warehouse developers. Also, from what I have read, MAXDOP is set on a per-query basis, so even knowing where to start would be a challenge.
TIA,
Barkingdog
February 11, 2009 at 5:46 am
I am seeing the same behaviour with my 64bit version of 2005 as regards the AWE piece, so I don't think that's a problem.
You can configure the MAXDOP value on the instance itself:
exec sp_configure 'show advanced options', 1
reconfigure
--view all of the options; you are looking for "max degree of parallelism"
exec sp_configure
--example of the values:
--name, min, max, config_value, run_value
--max degree of parallelism, 0, 64, 4, 4
--change by updating the value; this is instance wide
--use 2 in your case
exec sp_configure 'max degree of parallelism', 2
reconfigure
If you can, throw another CPU in there...it's a pretty good box from the standpoint of memory, ratcheting up the CPUs wouldn't hurt for sure. Try the MAXDOP first though.
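And to your point about the per-query setting: if you ever do get access to the query text, MAXDOP can also be capped on a single statement with a hint (illustrative only; the table and columns here are made up):

```sql
-- Hypothetical query: limit parallelism to 2 for this statement alone,
-- leaving the instance-wide setting untouched
SELECT CustomerID, SUM(Amount) AS Total
FROM dbo.FactSales
GROUP BY CustomerID
OPTION (MAXDOP 2);
```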
February 11, 2009 at 9:45 am
I think RAM usage may be a red herring here. Disk IO is not problematic per the OP. I would check page life expectancy in perf mon just to make sure, and avg disk sec/read and avg disk sec/write too. Of course it shouldn't hurt to increase RAM to 12GB either with 16 on the box, assuming you don't have lots of other stuff running on the box.
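Page life expectancy is also exposed inside SQL Server itself, so you can eyeball it without setting up a perfmon trace (a quick sketch; values are in seconds, and very low numbers suggest buffer-pool pressure):

```sql
-- Page life expectancy straight from the DMVs
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy';
```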
Some questions:
1) are you certain it is SQL Server using the CPU?
2) Is SSIS running on this box and if so how much mem/cpu is it using?
3) have you actually monitored queries and sql server activity during high-cpu times to see what is going on that could be pegging the cpu? Could be you have lots of bad CLR stuff, bad UDFs, cursors, missing indexes, poorly written queries, etc.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 11, 2009 at 10:01 pm
After increasing the RAM available to SQL Server (to 12 GB) today, I have not seen the CPU percentage rise over 50% for SQL Server. Much better.
The problem I have is that the actual SQL running was generated by a third-party product used by our data warehouse team. This tool does not use stored procs. After 3 hours of copying a file from a legacy system to SQL Server, most of the remaining 9 hours is spent importing the file into SQL tables.
Profiler shows long stretches like "cursor fetch_API" but that sure isn't code we wrote. With such unrecognizable sql it's difficult to get meaningful results from profiler that we can do something about.
Thanks to all who replied.
Barkingdog
February 16, 2009 at 2:18 am
You can extract the running queries from the DMVs regardless of who wrote them or where they live. I'd suggest you get the SQL Performance Dashboard from Connect; it's some procs for msdb and a set of reports which will save you querying the DMVs yourself!
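If you'd rather query the DMVs directly, something like this shows what's executing right now, tool-generated or not (a minimal sketch; session_id > 50 is a rough way to skip system sessions on 2005):

```sql
-- Currently executing requests with their statement text
SELECT
    r.session_id,
    r.status,
    r.cpu_time,
    r.total_elapsed_time,
    t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50
ORDER BY r.cpu_time DESC;
```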
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
February 16, 2009 at 11:57 am
"GrumpyOLDDBA" wrote
>>> I'd suggest you get the sql performance dashboard from connect,
What is connect? Can you provide a URL?
Barkingdog
February 16, 2009 at 12:38 pm
Barkingdog (2/16/2009)
"GrumpyOLDDBA" wrote>>> I'd suggest you get the sql performance dashboard from connect,
What is connect? Can you provide a URL?
Barkingdog
I think he meant http://www.codeplex.com, although the sql server performance dashboard isn't there. (you SHOULD check out that site tho - amazing amount of resources there for all types of things)
search download.microsoft.com to find what you seek
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 24, 2009 at 11:16 am
>>> The problem I have is that the actual sql running was generated by a third party product used by our datawarehouse team. This tool does not use stored procs. >>>
Powerhouse?
I've seen similar on other systems. DW tools are often CPU hungry.
>> After 3 hours of copying a file from a legacy system to sql server most of the remaining 9 hours is spent importing the file into sql tables. >>>
Having a "legacy" DBA background (Rdb on VMS), I would encourage you to see if you can get the data directly out of the source database, then you can save the intermediate file creation and population part. Otherwise you could see if you can use parallelism to load the tables.
Good Luck
Howard
February 24, 2009 at 4:11 pm
Do you have the 'Lock Pages in Memory' permission granted to the SQL service account ?
That can be a known performance problem on 64 bit SQL2005, if the OS is paging out part of SQL's working set. If this is happening, it would show up in the SQL log, something like "A significant part of sql server process memory has been paged out. This may result in a performance degradation. "
February 24, 2009 at 8:23 pm
I have read that "lock pages in memory" only applies to 64-bit Enterprise editions of sql. Ours is Sql Standard so it does not apply to us.
Barkingdog