April 29, 2011 at 10:24 am
The same stored procedure with the same execution plan sometimes runs much slower. On production, I took a backup of the DB while the procedure was running fast. On development, the same stored procedure with the same execution plan always runs much slower.
I restored the production backup onto development and got the same execution times as before. The stored procedure is one of the most commonly used ones in the application.
How can I make sure the stored procedure always runs fast?
I monitored performance counters on both servers.
Prod cache hit ratio: 99.74
Dev cache hit ratio: 99.84
Prod Page Life Expectancy: 5,915
Dev Page Life Expectancy: 234,071
Dev Available Physical Memory: 2 GB
Both Servers:
WIN2K3 Std SP2, SQL 2005 SP2 32-bit, 8 GB RAM
Auto Update Statistics off
No DDL changes during the monitoring period
DML changes within table threshold limits
No major fragmentation (fragmentation between 0% and 0.05%)
Production Avg Execution Time: 13ms
Development Avg Execution Time: 2161ms
April 29, 2011 at 10:35 am
Please post the actual execution plan for both servers (not estimated).
April 29, 2011 at 11:34 am
Do you do regular DB reindexing and statistics updates on DEV, the same as on PROD? I also see a huge difference in memory between your DEV (2 GB) and PROD (8 GB). Is that correct?
April 29, 2011 at 12:53 pm
Hi SSC-Enthusiastic,
I do reindexing and update statistics on both servers. [avg_fragmentation_in_percent] for all indexes on DEV is less than 5%.
They both have the same 8 GB of RAM. Because of the 32-bit edition, total usable memory is 4 GB.
2 GB is the available physical memory on DEV. I posted this value to show that there is no shortage of available physical memory.
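For reference, a minimal sketch of how the fragmentation check can be run on SQL 2005 (the 5% filter matches the threshold mentioned above; run it on both PROD and DEV for comparison):

```sql
-- List any index in the current database above 5% average fragmentation
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
ORDER BY ips.avg_fragmentation_in_percent DESC;
```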
April 29, 2011 at 12:58 pm
Hi Ninja's_RGR'us,
I checked the actual plans; they are identical line by line. Some of the indexes have 1% cost differences, and that's the only difference. Sorry, I cannot post the plans since they are very long: each plan is 2,307 lines of XML.
April 29, 2011 at 1:16 pm
To post the plans you can save it as a .sqlplan file and then upload it.
If you're seeing differences, then these are different plans. Without seeing the plans, it's hard to say what might be going on. It could be a case of parameter sniffing. It might just be simple contention... not sure.
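If parameter sniffing does turn out to be the cause, one common SQL 2005-era workaround (a sketch only; the procedure and column names here are made up for illustration) is to copy the parameter into a local variable, so the optimizer builds the plan from the density statistics rather than the sniffed value:

```sql
-- Hypothetical example of blunting parameter sniffing on SQL 2005
CREATE PROCEDURE dbo.usp_ExampleProc   -- name assumed for illustration
    @CustomerId int
AS
BEGIN
    DECLARE @LocalCustomerId int;
    SET @LocalCustomerId = @CustomerId;  -- optimizer can no longer sniff the value

    SELECT OrderId, OrderDate            -- table/columns assumed for illustration
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END
```

The trade-off is an "average" plan for everyone instead of a plan tailored to one atypical parameter value.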
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 29, 2011 at 4:33 pm
Between Grant and me, we must have studied over 10,000 plans. We can't possibly tell you everything to check in this thread alone. We need to be able to dig in...
As Grant said, just upload the file and we'll take care of the rest.
May 3, 2011 at 12:52 pm
Here is the plan.
May 3, 2011 at 4:16 pm
That's an estimated plan unfortunately, so there will be less data there.
You have tons of functions. I can't tell what's happening in here without seeing that code as well. It's a very complex query and I suspect you have multi-statement table valued functions at work in there, but it's hard to tell. Are the functions nested? How much do those functions manipulate data prior to returning it?
I'd focus on breaking this down into multiple statements and probably eliminate the functions, but I'm shooting in the dark without a good set of code.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 3, 2011 at 5:22 pm
I agree it is not easy without looking at the actual code.
All scalar functions return just a single value, and all table-valued functions return a single row with calculated values.
There is no table-level data manipulation inside this stored procedure or in the functions; they are all SELECT statements.
Only local variables are manipulated.
The logic gathers values from 4 views. Three of the views have 3 unions plus aggregate functions, and their tables have approximately 100,000 rows. The last view returns a result set with aggregate values from one table that has 2,000,000 rows (no UPDATE or DELETE statement touches this table).
Bulk inserts or updates related to this stored procedure are done twice a day. I regularly checked the indexes after those peak write periods; usually no fragmentation occurs during the week. All manipulations happen within the thresholds.
I monitored memory and IO. There are no memory issues during peak hours.
These are the average perfmon stats during the day:
Paging File % Usage: 7.64
PhysicalDisk Avg Disk sec/Read: 0.00
PhysicalDisk Avg Disk sec/Write: 0.01
PhysicalDisk Disk Reads/sec: 6.88
PhysicalDisk Disk Writes/sec: 3.98
Processor % Processor Time: 2.54
SQLServer General Statistics User Connections: 14.45
SQLServer Memory Manager Memory Grants Pending: 0.00
SQLServer SQL Statistics Batch Requests/sec: 1.91
SQLServer SQL Statistics SQL Compilations/sec: 0.41
SQLServer SQL Statistics SQL Re-Compilations/sec: 0.02
System Processor Queue Length: 0.05
I have not yet monitored whether the cached plan changes during the day. I ran the Database Engine Tuning Advisor; it gave no recommendations for indexes or statistics.
Is there anything else I should look at?
I would like to find out why the slowness occurs at random times during the day.
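One way to watch for plan churn on a single procedure is the plan_generation_num column in sys.dm_exec_query_stats (a sketch; the procedure name here is a placeholder):

```sql
-- Spot recompiles/plan changes for one proc during the day (SQL 2005)
SELECT qs.plan_generation_num,
       qs.creation_time,
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.objectid = OBJECT_ID('dbo.YourProcName')  -- placeholder name
ORDER BY qs.creation_time DESC;
```

A rising plan_generation_num or a fresh creation_time during the slow window would point at recompilation rather than a stable-but-bad plan.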
May 4, 2011 at 10:11 am
There are so many bad things in that sproc that I don't know where to start. But given what I see, I imagine there are all KINDS of opportunities for improvement. For example:
SET @sPayNorm = (SELECT cashNorm FROM @tblsCashRefAll)
SET @sPayCorr = (SELECT cashCorr FROM @tblsCashRefAll)
SET @sall = (SELECT psAll FROM @tblsCashRefAll)
SET @sAllCorr = (SELECT psAllCorr FROM @tblsCashRefAll)
SET @sRef = (SELECT psRef FROM @tblsCashRefAll)
You just hit a table FIVE times when you only needed to hit it ONCE. All 5 variables can be set in a single SELECT statement.
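Using the variable and column names from the snippet above, all five assignments collapse into a single statement:

```sql
-- One scan of the table variable instead of five
SELECT @sPayNorm = cashNorm,
       @sPayCorr = cashCorr,
       @sall     = psAll,
       @sAllCorr = psAllCorr,
       @sRef     = psRef
FROM @tblsCashRefAll;
```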
Get a professional to help you refactor this mess, and mentor you on how to do things more efficiently. I bet there are lots of examples of low-hanging fruit like this throughout the application.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
May 4, 2011 at 12:04 pm
I noticed that one of the applications is causing many Adhoc compiled plans during working hours.
I calculated the total SUM(size_in_bytes) for Adhoc compiled plans (most of them with usecounts = 1) from sys.dm_exec_cached_plans, and it is about 350 MB. The main reason is that the application sends non-parameterized queries, so SQL Server creates a new plan every single time.
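For reference, a sketch of how that single-use ad hoc plan bloat can be measured:

```sql
-- Count and total size of single-use ad hoc plans in the plan cache
SELECT COUNT(*)                                     AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS total_mb
FROM sys.dm_exec_cached_plans
WHERE objtype = 'Adhoc'
  AND usecounts = 1;
```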
What is the performance impact of this behavior?
Could this be the answer to why SQL Server slows down at some random time frame and then speeds up again?
Thanks,
May 4, 2011 at 1:56 pm
Yes, but not because of missing RAM. Most likely you have a few bad plans in the bunch slowing everything else down.
Things I'd check immediately: index fragmentation, and whether stats are up to date.
Track the bad plans and see if they are correctly optimized. Make sure there's no early termination of the optimizer (query plan, properties of the final select).
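One way to hunt for early optimizer termination across the cache is to look for the StatementOptmEarlyAbortReason attribute in the plan XML (a sketch; the string search is crude but works on SQL 2005):

```sql
-- Find cached plans where the optimizer gave up early (e.g. reason "TimeOut")
SELECT cp.usecounts,
       cp.objtype,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE CAST(qp.query_plan AS nvarchar(max)) LIKE '%StatementOptmEarlyAbortReason%';
```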
May 4, 2011 at 2:58 pm
Thank you very much for all your comments.
I agree the application queries need to be optimized. Soon we are going to replace this old application with a newer version built with WPF. That's the reason we currently don't want to allocate resources to fixing the old code.
I will keep you posted if I can shed light on something.
May 4, 2011 at 4:39 pm
day-476284 (5/4/2011)
Thank you very much for all your comments. I agree the application queries need to be optimized. Soon we are going to replace this old application with a newer version built with WPF. That's the reason we currently don't want to allocate resources to fixing the old code.
I will keep you posted if I can shed light on something.
A word of concern: if the same people that wrote this old application also did the new one, you are still in for trouble.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service