Execution plan problem for you SQL performance gurus

  • Hi everyone,

I'm hoping some of the more experienced DBAs here can chime in with ideas or suggestions, because this one has me stumped. Here is a summary of the problem (we're running SQL Server 2008 SP2 Enterprise, 64-bit). We run Siebel CRM; all customer account, activity, and order information is stored in our Siebel database.

    Recently our company (call it company A) acquired another company (company B, a much smaller company with far less data), and we are importing their data into our Siebel database. We have a query that attempts to match an incoming appointment record to an existing customer. The query matches on the customer name, phone number, etc., AND the company ID (in our example, A or B).

    If the query is executed for company B FIRST (very little data exists for company B) and then for company A (lots of data), the second execution, for company A, takes around 5-10 minutes. If I clear the procedure cache and run the query for company A FIRST, it completes in 1 second, and after that both queries run fine. So when the query for company B runs first, it apparently creates an inefficient execution plan that is then reused by the query for company A.

    I have looked into setting up plan freezing, but I would rather avoid having to maintain execution plans if there is another option. In a case like this, where there is a drastic difference in data volume between two companies or organizations stored in the same database, how do you ensure that the plan created by the first execution will be efficient for subsequent executions returning much larger data sets? I should also note that these are Prepared plan types; SQL Server caches the parameterized statement.
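
    For the record, I confirmed the plan type and reuse counts with the plan-cache DMVs. A sketch of what I ran (the LIKE filter below is a placeholder, not the real statement text):

    ```sql
    -- Find the cached Prepared plan for the matching query and watch
    -- usecounts climb: the same plan is reused for both companies.
    SELECT cp.objtype,        -- shows 'Prepared' for these statements
           cp.usecounts,
           st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE cp.objtype = N'Prepared'
      AND st.text LIKE N'%appointment%';  -- placeholder filter
    ```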

    Any help greatly appreciated. Thanks.

    Michael Connolly

  • sqlharmony (9/12/2012)

    This may be a good start!

  • You have parameter sniffing problems

    http://www.simple-talk.com/sql/t-sql-programming/parameter-sniffing/

    http://www.benjaminnevarez.com/2010/06/the-parameter-sniffing-problem/

    If you have ad-hoc or prepared SQL embedded in the app, it'll be a pain to fix. In stored procedures it's much simpler.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
Thanks for the response, SSCrazy, but this article doesn't really address our issue. We are not suffering from plan cache bloat caused by many single-use ad-hoc plans. We are actually seeing very consistent plan reuse counts, which in general is great, but in our case the plan being reused is inefficient for the second execution because it was built from the data of the first. Our cached plan type is Prepared, so this solution addresses an entirely different problem.

    Michael Connolly

Thanks Gail, it's Siebel, so everything is wrapped up in a nice cursor executed via sp_cursorprepare. If it were stored procedures I could at least force a recompile every time the proc was called, which would be a decent temporary solution while we determine a long-term fix. So how much of a pain is it to fix a parameter sniffing problem when the SQL is executed this way? I don't even know where to begin intercepting the SQL to influence the plan it uses, or to force a recompile for just that query every time it runs (other than removing the cached plans for just that query every few seconds with a SQL job).
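
    For what it's worth, the job I had in mind would target just that one plan rather than flushing the whole cache; SQL 2008 accepts a plan_handle argument to DBCC FREEPROCCACHE. A sketch (the text filter is a placeholder):

    ```sql
    -- Find the plan_handle of the offending Prepared plan...
    DECLARE @handle varbinary(64);

    SELECT TOP (1) @handle = cp.plan_handle
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE cp.objtype = N'Prepared'
      AND st.text LIKE N'%appointment%';  -- placeholder filter

    -- ...and evict only that plan; the next execution recompiles.
    IF @handle IS NOT NULL
        DBCC FREEPROCCACHE (@handle);
    ```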

    Michael Connolly

  • Honestly, nearly impossible. You'd have to change the application, where the SQL is embedded. Somehow I doubt you have permission for that.

May I suggest plan guides? The OPTIMIZE FOR UNKNOWN hint should make everything run evenly: maybe not great, but not the excellent/terrible split you have now. Or, if you can identify the parameter values for the large company, optimise for those values.
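
    For reference, this is what those hints look like when they can be attached to a statement directly; the table and column names below are stand-ins, not the actual Siebel SQL:

    ```sql
    -- Compile for the average data distribution, ignoring the sniffed value:
    SELECT c.ROW_ID
    FROM S_CONTACT AS c
    WHERE c.LAST_NAME   = @LastName
      AND c.WORK_PH_NUM = @Phone
      AND c.BU_ID       = @CompanyId
    OPTION (OPTIMIZE FOR UNKNOWN);

    -- Or compile as if the parameter were always the large company's ID
    -- ('1-ABC' is a placeholder value):
    -- OPTION (OPTIMIZE FOR (@CompanyId = '1-ABC'));
    ```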

    Gail Shaw
• This link gives some info about plan guides.

• Gail, I was reading today about the different OPTION hints, including OPTION (OPTIMIZE FOR (@Var = xxx)). In this case I do know the variable I need to optimize for, and the value is always constant, so that option would work, but I have no way of embedding the OPTION into the SQL statement generated by Siebel. The same holds true for the OPTION (RECOMPILE) fix. Thanks for those two links, by the way; they were informative.

    So if plan guides are an option for us (where we have absolutely no control over, or access to, the SQL being executed), how would we go about implementing a solution like this? Would we need to generate the XML version of the known-good plan that was generated with the larger data set and create a plan guide from that XML? I did some reading on this a week or two ago and had generated the good XML plan, but I was having difficulty understanding how to import that plan into the cache.

    Thanks SSCrazy for the link on plan guides. I have read other documents on this, but not this one, so I will give it a read.

    Michael Connolly

  • sqlharmony (9/12/2012)


    Gail, I was reading today about the different OPTION hints, including OPTION (OPTIMIZE FOR (@Var = xxx)). In this case I do know the variable I need to optimize for, and the value is always constant, so that option would work, but I have no way of embedding the OPTION into the SQL statement generated by Siebel. The same holds true for the OPTION (RECOMPILE) fix.

    That's why I suggested plan guides: that way you don't have to change the code, just add something in the database to catch the query and add hints to it as it executes.

    Would we need to generate the XML version of the known-good plan that was generated with the larger data set and create a plan guide from that XML? I did some reading on this a week or two ago and had generated the good XML plan, but I was having difficulty understanding how to import that plan into the cache.

    No, no, no, no. Not plan freezing. Plan guides.

    http://msdn.microsoft.com/en-us/library/ms190417.aspx

    http://msdn.microsoft.com/en-us/library/ms190417%28v=sql.90%29.aspx

    The only hint you need to add in the plan guide is the OPTIMIZE FOR.
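
    A sketch of what that plan guide could look like. The @stmt text must match the application's statement exactly, character for character; the statement, parameter list, and '1-ABC' value below are placeholders you'd replace with what you capture from a trace:

    ```sql
    EXEC sp_create_plan_guide
        @name   = N'PG_Siebel_ApptMatch',
        @stmt   = N'SELECT c.ROW_ID FROM S_CONTACT c WHERE c.LAST_NAME = @P1 AND c.WORK_PH_NUM = @P2 AND c.BU_ID = @P3',
        @type   = N'SQL',
        @module_or_batch = NULL,
        @params = N'@P1 varchar(50), @P2 varchar(40), @P3 varchar(15)',
        @hints  = N'OPTION (OPTIMIZE FOR (@P3 = ''1-ABC''))';
    ```

    Once created, sys.plan_guides will show it, and the query's actual execution plan will report the guide name in its properties so you can confirm it is being applied.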

    Gail Shaw
  • Ah, thanks Gail. That may be an option. I will see if I can make some progress with this approach and if I run into any issues I will jump back on here. Thanks.

    Michael Connolly
