August 14, 2008 at 9:31 am
Problem: A new server is not using all processor cores when performing a batch import of reports data. Instead it peaks on a single core at any given time, showing 13% (1/8) CPU usage. The impact is that the import is not as fast as I expected, given the extra processors and memory.
Server info: The server has 8 cores (i.e. 2 x quad-core processors), is running SQL 2005 Standard, and has 4 GB of memory.
August 14, 2008 at 9:45 am
I would venture to say that this means the import process is not CPU bound but IO or MEMORY bound.
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
August 14, 2008 at 9:49 am
Usually that's because some other resource is a bottleneck. On modern servers it's fairly hard to max out a lot of CPUs for long periods, because disk and memory are so much slower (not impossible, just not common). Also, not all operations can be parallelised by SQL, so it may be that only one processor is active for a particular operation. Without knowing more about what's actually running and what it's doing, it's hard to say whether that is the case.
8 CPUs and 4GB memory seems an odd combo.
OK, do a test run of the most intensive of your imports and monitor the following perfmon counters during the import. Please post min, max, and avg for each:
Physical disk: avg sec/read
Physical disk: avg sec/write
Physical disk: % idle time
Physical disk: avg disk queue length
Memory: Pages/sec
System: Context switches/sec
SQL Buffer manager: Buffer cache hit ratio
SQL Buffer manager: Page life expectancy
SQL Buffer manager: Lazy writes/sec
SQL Buffer manager: checkpoint pages/sec
SQL Locks: Avg lock wait time
SQL Locks: Lock waits/sec
In addition, please check the sys.dm_exec_requests view while the import is running and see if there's a wait time and, if so, what the wait type is.
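For example, something along these lines while the import is running (just a rough sketch; the session_id > 50 filter to skip system sessions is my own convention, not a requirement):

SELECT session_id,
       status,
       command,
       wait_type,
       wait_time,            -- ms the request has currently been waiting
       last_wait_type,
       blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id > 50;       -- skip system sessions

A non-null wait_type with a climbing wait_time will tell us which resource the import is stalled on (PAGEIOLATCH_* waits would point at disk, for example).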
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 1, 2008 at 9:25 am
Hi,
I have encountered the same problem (SQL 2005 64-bit, 4 CPUs, 16 GB of memory): only 1 of the processors is pushed to 100% and the other 3 do nothing. When I add another connection a second CPU is used, etc. (when I stress the server, that is). Now I have read somewhere that this is because IO is limited per CPU, and that having more files per database (equal to the number of CPUs) will solve the problem.
My questions are:
1. Is this true?
2. How can I add files and make sure the data is spread among them?
Thanks,
Frank.
October 1, 2008 at 10:47 am
See this blog post, specifically the first section "SQL Server Uses One Thread Per Data File"
http://blogs.msdn.com/psssql/archive/2007/02/21/sql-server-urban-legends-discussed.aspx
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 1, 2008 at 1:26 pm
I was on the client site again today and the problem persists. The client DBAs have not been able to assist, and I got involved because I am responsible for the application. I will obtain the perfmon values you requested.
I read the post, but since I am not a SQL expert, I could not figure out the solution right away.
October 1, 2008 at 1:28 pm
Just curious - can you elaborate on the statement "8 CPUs and 4GB memory seems an odd combo"?
October 1, 2008 at 1:51 pm
dithebe (10/1/2008)
Just curious - can you elaborate on the statement "8 CPUs and 4GB memory seems an odd combo"?
Normally SQL systems that I see have 2 or 4 GB memory for each core.
I'm not saying there's anything wrong, it's just not the common ratio that I see.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 2, 2008 at 4:05 am
Hi Gail,
Thanks for your response, very interesting article.
Which raises a question right away: does it have any influence on the CPU usage problem?
Thanks,
Frank.
October 2, 2008 at 7:50 am
frank.brouwer (10/2/2008)
Hi Gail, thanks for your response, very interesting article.
Which raises a question right away: does it have any influence on the CPU usage problem?
Thanks,
Frank.
Is what of any influence? The number of files?
Probably not.
Possibly what you've got is one long-running, CPU-intensive process that isn't running in parallel, and that's what is pushing the one CPU high; the other queries are faster, so while they do run on the other CPUs, they don't have as much impact.
Also check that the processor affinity is set to all CPUs (the default).
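If you'd rather check from T-SQL than SSMS, something like this should do it (a sketch; both options are advanced options, and a run_value of 0 for each means use all CPUs / no parallelism limit):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask';             -- 0 = SQL may use all CPUs
EXEC sp_configure 'max degree of parallelism'; -- 0 = no limit on CPUs per parallel query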
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 2, 2009 at 12:57 pm
We've noticed that only 1 of the 8 cores on our server is active, and only 300 MB of 32 GB was being used, during a particularly long-running stored procedure on SQL Server 2005 Standard Edition (affinity set as default: all processors), which called many other stored procedures. The first run took 12:12 (min:sec) and the second, while capturing system performance counters, took 12:24.
I've seen the statement 'The number of data files within a single filegroup should equal the number of CPU cores.' in an MS TechNet article, but it isn't backed up by anything. Has anyone seen any practical information on this?
Any other solutions?
September 2, 2009 at 1:16 pm
Terry Grignon (9/2/2009)
I've seen mention of 'The number of data files within a single filegroup should equal to the number of CPU cores.'
That advice is for TempDB, and only for TempDB; the reason for multiple files there is to alleviate contention on the allocation pages. It is not a recommendation for user databases.
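If you do decide to add TempDB files for allocation contention, the general shape is something like this (a sketch only; the logical name, path and sizes are placeholders you'd adjust for your server):

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,                       -- placeholder logical name
          FILENAME = 'T:\TempDB\tempdev2.ndf',   -- placeholder path
          SIZE = 1024MB,
          FILEGROWTH = 256MB);

Keep the files the same size; SQL allocates proportionally to free space, so equally sized files are what spread the allocations evenly.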
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 13, 2009 at 9:21 am
Hi Guys,
I'm having a similar issue. My server is a dev server and I'm the only user. It's a 2 x quad-core, 64-bit server with 16 GB RAM. I'm running an ETL for 3 months' worth of data and the CPUs are not being used. Last month I ran the same ETL for a year's worth of data and it was much quicker (about 5-10 times faster) than the three months is now. The server has been fine otherwise, nothing too obvious: disk queues nil, processor queues nil. The only thing I see is about 10K context switches/sec. If I run the SPs directly in SSMS, even then it sticks at only 13% CPU utilisation. Bizarre. Has the Windows update killed my server? Any ideas?
November 13, 2009 at 9:46 am
It is very hard to max out a modern multi-proc server. Generally, if the queries are well written, disk or memory bottlenecks are hit well before the CPU and so the CPU shows a low overall usage. Take a look at the waits that you're seeing on queries. They'll tell you where the current bottleneck is. I bet it's disk.
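A quick way to see the cumulative picture is sys.dm_os_wait_stats (a rough sketch; the excluded wait types are just some of the usual background/benign ones, not a definitive list):

SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH', 'SLEEP_TASK',
                        'WAITFOR', 'LOGMGR_QUEUE', 'CHECKPOINT_QUEUE',
                        'REQUEST_FOR_DEADLOCK_SEARCH', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;

If PAGEIOLATCH_* or WRITELOG are at the top, it's the disk; CXPACKET or SOS_SCHEDULER_YIELD would point more towards CPU and parallelism.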
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 6, 2010 at 8:15 pm
This is the closest thread I've found for posting some empirical test results that I thought might be worth sharing.
A client wanted to evaluate the possibility of virtualising SQL on VMware vSphere 4. The ESX cluster is SAN-connected, with 4 machines each having 4 x 2 cores and 32 GB RAM. His evaluation test script is basically CPU-bound and uses real-world databases from the physical machine he wanted to retire.
The initial VM used 2 vCPUs and 8 GB RAM, running Windows 2003 64-bit Enterprise and SQL 2005 Standard 64-bit.
Results were promising, script maxed out both vCPUs.
Next I doubled the vCPUs to 4, retaining the 8 GB RAM, only to find the results were shockingly bad - 3 CPUs sat idle while one did all the work.
This made no sense, so I dropped it to 3 vCPUs and gave it 10 GB RAM (gotta love VMware) - much better. All 3 cores shared the load, the best result yet.
Next, I maxed out the memory I could allocate and ran a trial of 5 vCPUs with 18 GB RAM. Only 4 of the 5 cores shared the load; one - well, it didn't quite sit idle (see below), but the SQL script didn't use it. An excellent result, however.
Dropping again to 4 vCPUs, retaining 18 GB, all 4 cores still shared the load - yet the script ran slower than above.
Two more iterations reducing RAM ended up with 4 vCPUs and 14 GB RAM, allowing all 4 vCPUs to fire on the test script. I could have gone further, but as the RAM reduction also slowed things down, this was considered optimum for our purposes.
So one could assume that even though SQL is not actually using all the extra RAM, not having enough of it prevents SQL from making use of all the CPUs, despite SQL supposedly setting processor affinity across them all automatically.