Are deadlocks more frequent in 2005 due to page level locks?

  • Won't that only happen if multiple CPUs (multiple running worker threads) are after the exact same data at the exact same time?

    Dave: Talking about the processor cache impact on performance is, quite frankly, getting silly. The order I've seen (and the order I give when talking about this) for fixing performance problems is:

    1) Bad application (multiple, repeated calls to SQL for the same data, front-end filtering, front end joins)

    2) Bad code

    3) Bad indexing

    4) Bad database design

    5) Poor hardware.

    The hardware is the last thing that you look at after you've fixed everything else and the performance is still not up to spec.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • One good thing that came out of our phone call was an executable called CPU-Z that reports CPU information, including L2 cache. It's found at http://www.cpuid.com/. He said 8KB is normal for today's processors, and I asked if he was sure he didn't mean 8MB. He corrected himself. Our production server used by their application has only 1024KB, but I'm still not sure if only 1MB is a bad thing. He had me run the following code against our server during overnight processing. I ran it every 30 minutes.

    with results (singleuse, reuse, total, total_size_in_bytes, single_use_bytes) as
    (
        -- Count single-use vs. reused plans, plus total and single-use cache sizes.
        -- The CASE expressions return NULL for non-matching rows, which is what
        -- triggers the "Null value is eliminated" warnings in the job output below.
        select count(case when usecounts = 1 then 1 end),
               count(case when usecounts <> 1 then 1 end),
               count(usecounts),
               sum(cast(size_in_bytes as bigint)),
               sum(case usecounts when 1 then convert(bigint, size_in_bytes) else 0 end)
        from sys.dm_exec_cached_plans
    )
    select 'statistic' = 'total compiled plans', 'value' = convert(varchar(99), total) from results
    union
    select 'single use plans', convert(varchar(99), singleuse) from results
    union
    select 're-used plans', convert(varchar(99), reuse) from results
    union
    select 're-use(%)', convert(varchar(99), convert(dec(8,2), 100.0 * reuse / total)) from results
    union
    select 'total plan memory usage(MB)', convert(varchar(99), convert(dec(10,2), total_size_in_bytes / 1024.0 / 1024.0)) from results
    union
    select 'memory usage by single use plans(MB)', convert(varchar(99), convert(dec(10,2), single_use_bytes / 1024.0 / 1024.0)) from results
    union
    select 'plan memory waste(%)', convert(varchar(99), convert(dec(10,3), 100.0 * single_use_bytes / total_size_in_bytes)) from results

    He said re-use(%) is low and asked to see our L2 cache. How does this pertain to L2 cache?

    statistic                             value
    ------------------------------------  -------
    total compiled plans                  3186
    single use plans                      2211
    re-used plans                         976
    re-use(%)                             30.63
    total plan memory usage(MB)           291.37
    memory usage by single use plans(MB)  167.52
    plan memory waste(%)                  57.513

    Warning: Null value is eliminated by an aggregate or other SET operation. [SQLSTATE 01003]

    Job 'Monitor Memory for Eagle' : Step 1, 'Check sys.dm_exec_cached_plans' : Began Executing 2009-01-21 22:30:00

    statistic                             value
    ------------------------------------  -------
    total compiled plans                  22208
    single use plans                      17878
    re-used plans                         4330
    re-use(%)                             19.50
    total plan memory usage(MB)           3990.61
    memory usage by single use plans(MB)  3151.73
    plan memory waste(%)                  78.979

    Warning: Null value is eliminated by an aggregate or other SET operation. [SQLSTATE 01003]

    Job 'Monitor Memory for Eagle' : Step 1, 'Check sys.dm_exec_cached_plans' : Began Executing 2009-01-21 23:00:00

    statistic                             value
    ------------------------------------  -------
    total compiled plans                  38822
    single use plans                      37810
    re-used plans                         1018
    re-use(%)                             2.62
    total plan memory usage(MB)           4236.62
    memory usage by single use plans(MB)  3997.95
    plan memory waste(%)                  94.367

    Warning: Null value is eliminated by an aggregate or other SET operation. [SQLSTATE 01003]

    Job 'Monitor Memor

  • DBADave (1/22/2009)


    He said re-use(%) is low and asked to see our L2 cache. How does this pertain to L2 cache?

    It doesn't. That's looking at details of the procedure cache, in memory. There is no SQL function that will show you the contents of a processor cache. There might (and I do say might) be a Windows API call that will show that data.

    Reuse is low. There are a lot of plans that aren't being reused. The most probable cause for that is ad hoc SQL, where even a whitespace difference will prevent a plan from being reused, as will a constant with a different data type.

    Poor plan reuse is a sign of poor queries or an application that uses lots of ad hoc SQL. It is not a sign of memory problems of any form.
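
    A minimal sketch of the whitespace effect (illustrative queries only; run them, then inspect the cache):

    -- Logically identical, but the extra space makes them hash differently,
    -- so each gets its own cached plan.
    select name from sys.objects where object_id = 1;
    select name from sys.objects where object_id =  1;

    -- Two separate 'Adhoc' entries, each with usecounts = 1:
    select cp.usecounts, cp.objtype, st.[text]
    from sys.dm_exec_cached_plans cp
    cross apply sys.dm_exec_sql_text(cp.plan_handle) st
    where st.[text] like 'select name from sys.objects%';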

    How much memory total on that server?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • 32GB of memory.

    They have a lot of dynamic code executed from their application servers. If I understand you correctly, the dynamic code will cause low cache hits. Correct?

    I did a Google search and came across this article, which talks about L2 cache and SQL 2005. One quote says:

    "Exploit L2 Cache – SQL Server 2005 loves L2/L3 cache "

    Perhaps this is why the vendor believes SQL Server can be impacted by low L2 cache.

    http://download.microsoft.com/download/b/8/f/b8f13247-b992-4f0e-846e-8f89fcaac0bd/SQL_OLTP_BestPractices.ppt

  • DBADave (1/22/2009)


    32GB of memory.

    From those stats, SQL's using maybe 200MB of that for proc cache. Not much.

    They have a lot of dynamic code executed from their application servers. If I understand you correctly, the dynamic code will cause low cache hits.

    Depends how it's written but typically, yes.

    I did a Google search and came across this article, which talks about L2 cache and SQL 2005. One quote says:

    "Exploit L2 Cache – SQL Server 2005 loves L2/L3 cache "

    SQL loves memory in general, the more the better. Though, at this point, there's absolutely nothing indicating that memory is a problem, let alone processor cache.

    Let me be blunt. I do performance tuning for a living. I would never, ever, even consider the L1/L2/L3 caches as a possible cause of poor performance.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I agree about the L2 cache. However, he did send an MSDN blog link that does back up what he is saying, although who is to say the author is correct? It wouldn't be the first time someone blogged about something incorrectly. But at least I know where he is coming from.

    http://blogs.msdn.com/sqlqueryprocessing/archive/2006/09/26/compilation-time-issues-in-oltp-applications-when-upgrading-to-sql-2005.aspx

  • DBADave (1/22/2009)


    although who is to say the author is correct.

    Connor's currently the program manager of the Query Optimisation team and previously (iirc) was one of the lead developers of the Query Optimiser. If anyone knows how compilation works, it's him.

    Note that on the L2 issue he's talking about older machines from around the year 2000 or so, i.e. ones that are around 8 years old. Also, this is just for the compilation process. As the article says... "The good news is that there's a way to make many of these applications perform a lot better than even the SQL 2000 level - use parameterized queries."
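
    To illustrate what parameterised means in practice (a sketch with made-up table and column names, not the vendor's actual code): instead of concatenating the literal into the string, pass it as a parameter so one plan serves every value.

    -- Ad hoc: every distinct literal compiles a new plan
    -- select * from dbo.Orders where OrderID = 42;

    -- Parameterised: compiled once, reused for each value of @id
    exec sp_executesql
        N'select * from dbo.Orders where OrderID = @id',
        N'@id int',
        @id = 42;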

    The other question is, of course, are you seeing high compilation times? If not, then all this discussion is moot.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Good to know who he is. Thanks. I'll keep that in mind when I read other blogs/white papers he has authored.

    I'll check the compilations. The vendor just sent an email indicating he would like to try Forced Parameterization in our QA environment based upon the results of the following query.

    select count(*)
    from sys.dm_exec_cached_plans
    where cacheobjtype = 'Compiled Plan'
      and objtype = 'Adhoc'

    -----------
    74225

    select count(*)
    from sys.dm_exec_cached_plans
    where cacheobjtype = 'Compiled Plan'
      and objtype = 'Prepared'

    -----------
    4899
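
    The same breakdown could be had in one pass (a sketch against the same DMV, just grouped):

    select objtype, count(*) as plans
    from sys.dm_exec_cached_plans
    where cacheobjtype = 'Compiled Plan'
    group by objtype
    order by plans desc;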

  • Ummmm... forced parameterisation's not a band-aid. It can hurt more than help. It's used when you don't have access to the source code and hence can't change the queries in the app to be parameterised. Since the vendor's the one proposing this, he does have access to the source.

    From BoL (emphasis mine):

    Forced parameterization may improve the performance of certain databases by reducing the frequency of query compilations and recompilations. Databases that may benefit from forced parameterization are generally those that experience high volumes of concurrent queries from sources such as point-of-sale applications.

    Forced parameterization, in effect, changes the literal constants in a query to parameters when compiling a query. Therefore, the query optimizer might choose suboptimal plans for queries. In particular, the query optimizer is less likely to match the query to an indexed view or an index on a computed column. It may also choose suboptimal plans for queries posed on partitioned tables and distributed partitioned views. Forced parameterization should not be used for environments that rely heavily on indexed views and indexes on computed columns. Generally, the PARAMETERIZATION FORCED option should only be used by experienced database administrators after determining that doing this does not adversely affect performance.
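
    For reference, it's a single database-level setting and just as easy to revert (hypothetical database name below):

    alter database QADatabase set parameterization forced;
    -- ... run the tests ...
    alter database QADatabase set parameterization simple;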

    What's the min/max/avg values for compiles/sec and recompiles/sec (perfmon) over the hours that this app is in use?

    Is this an OLTP system that experiences high volumes of concurrent queries?

    This is really looking like a long, drawn-out attempt to pin the problems on SQL and to avoid having to change the app.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks Gail. I do appreciate your help.

    I've been running SQL Compilations/sec, but not recompiles. I added it to the trace.

    Over a 16-hour period, the following values were reported:

    Min: 1.200
    Max: 254.252
    Avg: 20.742

    I believe the average is OK. Isn't the general rule that < 100 is OK? But I also have documentation stating the value should be < 10% of Batch Requests/sec, so I just added Batch Requests/sec to the perfmon trace.
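
    In case it's useful to anyone, the same counters can also be pulled from a DMV instead of perfmon (a sketch; these values are cumulative since the service started, so you need two samples and the delta over the interval to get a per-second rate):

    select rtrim(counter_name) as counter_name, cntr_value
    from sys.dm_os_performance_counters
    where object_name like '%SQL Statistics%'
      and counter_name in ('SQL Compilations/sec',
                           'SQL Re-Compilations/sec',
                           'Batch Requests/sec');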

    It certainly appears the vendor is pointing away from their code. We will be running tests tonight that, if successful, will point back to their code. I hope we find a fix soon. Seven days a week on 2-4 hours of sleep a night is getting tough to handle.

  • I would not want such a high recompilation rate in my DB. That would mean locking the cache to recompile a stored procedure; the less locking of the cache for recompiles, the better. That's just my opinion. :hehe:

    -Roy

  • I'd want counts on those numbers too. Is the 254 an extreme outlier or just slightly high? I haven't run averages on my systems recently, but an average of 20 compiles/sec seems a bit high.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Unfortunately I have not worked much with the compilation counters, so I don't know how to measure good or poor performance. I found three blogs indicating a value of > 100 is bad, but each blog had the identical quote, so it's possible the "> 100" rule of thumb is a bit too high. I ran more counters for our overnight batch and here are the results.

    Batch Requests/Sec        212.399
    SQL Compilations/Sec       26.297
    SQL Re-Compilations/Sec     1.055
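
    Doing the math on the 10% rule of thumb: 26.297 / 212.399 ≈ 12.4%, so compilations are running just above the guideline, while recompilations (1.055 / 26.297 ≈ 4% of compilations) look low.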

    The vendor calls their system a data warehouse, but it is not OLAP by any stretch of the imagination. I've not worked with Enterprise data warehouses (non-OLAP) before, but from what I have read this particular system has many characteristics of one; in my opinion, though, it is more of an OLTP design that someone decided to call a warehouse/data store. Can you educate me on the meaning of the above counter results?

    Thanks again for your assistance.

    Dave

  • This isn't totally related, but is this one of those applications designed to run on multiple database platforms?

  • You got it. It's also designed for UNIX/ORACLE. Not sure about SYBASE.
