Preallocate locks a good idea?

• We have engaged a consultant who is recommending we "preallocate the locks table" for better performance. I think he's referring to sp_configure 'locks'.

Starting with SQL Server 2012, BOL states this setting is deprecated and will be removed in a future version, so don't use it. We are mostly on 2008 R2.

This sounds to me like a very dangerous generalization to make across all our systems. It may help performance on a very well understood system, but my concern is that we don't have the DBA bandwidth to monitor and tune it.

My understanding is that explicitly setting a value imposes a hard limit on the number of locks the server can allocate.

I think the wisdom is partly drawn from this whitepaper on BizTalk, but that is a pretty narrow use case in my mind. https://msdn.microsoft.com/en-us/library/Bb743398(v=BTS.10).aspx

BizTalk seems to emphasize single-threaded performance, as it's a gigundous state machine serializing application state into and out of a SQL database.

    What do you guys think about messing with locks?

  • Preallocate the what????

If he's talking about the 'locks' server config setting, tell him to keep his hands off it unless he can explain exactly what it does (limit the maximum number of locks SQL Server can take) and why that's a good idea (it isn't in most cases).

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
• Snippet from the Microsoft BizTalk whitepaper on lock preallocation. https://msdn.microsoft.com/en-us/library/Bb743398(v=BTS.10).aspx#BKMK_SQLSvrTuning

    Pre-allocating SQL Server memory has many advantages. The overhead of dynamic management, particularly under high stress growth, is avoided. Pre-allocated memory can be significantly less fragmented than memory that grows in small increments. In situations where multiple SQL services are installed (e.g., the SQL Server database engine running alongside Analysis Services, Reporting Services and/or Notification Services) it is essential to limit the amount of memory taken by SQL Server. There are three types of memory allocations important to BizTalk Server with SQL Server systems, locks, cache, and the SQL Server process working set.

    Locks

    Setting the lock table (rather than allowing it to grow dynamically) often improves performance more than any other memory pre-allocation. BizTalk Server systems commonly generate 500,000 or more lock requests per second. The number of requests increases considerably as additional BizTalk servers are deployed in a system. The default number of locks for SQL Server is 5000, which is adequate and/or easily grown for many types of applications. But the path from 5000 to the allocation required to support BizTalk Server-generated loads is not a pretty one, particularly considering the intensity with which the lock requests are made. BizTalk Server systems often experience excessive lock waits and timeouts as the database engine struggles to keep up with the demand.

    To avoid these issues, pre-allocate the lock table. This is not a risk-free strategy. SQL Server treats the pre-allocation as a fixed size rather than a base value from which to grow. As applications and load are added to the BizTalk Server system, the number of lock requests typically grows and the pre-allocation needs to be resized accordingly; this implies monitoring of the SQL Server lock performance counters (see Appendix A).

    Lock pre-allocation must be done before determining the cache size configuration, as it reduces available memory. Also, there is a significant difference between allocation in SQL Server 2000 and SQL Server 2005. In SQL Server 2000, the number of locks specified (using either the Enterprise Manager GUI or the sp_configure stored procedure) is the total allocated. In SQL Server 2005, to increase parallelism in the lock table, locks are allocated by NUMA (Non-Uniform Memory Architecture) node. When upgrading from SQL Server 2000 to SQL Server 2005, there is no warning of this change in SSMS (SQL Server Management Studio), the SQL Server 2005 Upgrade Advisor or the SQL Server installer, so it’s very possible that parameters which worked perfectly well in SQL Server 2000 will cause SQL Server 2005 to fail to start. The specified lock count value is allocated for each NUMA node. For example, if 1,000,000 locks are specified on a system with 3 NUMA nodes, 3,000,000 locks are actually allocated. The number of NUMA nodes recognized by SQL Server is determined using a special query.

    Example - Determining number of NUMA (Non-Uniform Memory Architecture) nodes on server for SQL Server 2005 lock allocation

DECLARE @NUMACount int;

SELECT @NUMACount = COUNT(DISTINCT parent_node_id)
FROM sys.dm_os_schedulers
WHERE parent_node_id <> 32; -- 32 is the DAC scheduler

SELECT @NUMACount AS NUMANodeCount;

    Divide the number of locks desired by the number of NUMA nodes to determine the allocation value for sp_configure (or to be set using SSMS).

    To determine the total number of locks required, leave the lock allocation at zero (the default) and monitor the total number of requests for a large load representative of the expected load. Use the SQL Server Locks object counters in PerfMon to obtain this count. A good rule of thumb for BizTalk Server is to set the allocation at 15-20% above the peak number of lock requests; this leaves room for growth without allocating an excessive number of locks.

    Setting the lock allocation is an iterative process. The first preallocation often results in a significant performance improvement, which then increases the number of lock requests requiring recalibration and another, higher preallocation value. Also, the RECONFIGURE command, while normally applying changes to the SQL Server configuration immediately, does not work for changes to the number of locks (even though the command output indicates that reconfiguration was successfully completed). For the change to take effect, the SQL Server service must be restarted.
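
Putting the snippet's steps together, here's a rough sketch of the measure-then-set workflow in T-SQL (my own illustration, not from the whitepaper; the 10-second interval and the 1,000,000 value are made up). The per-second counters in sys.dm_os_performance_counters are cumulative, so measuring the rate needs two samples:

-- Sketch only: sample the cumulative Lock Requests/sec counter twice
-- and compute the per-second rate over the interval.
DECLARE @first bigint, @second bigint;

SELECT @first = cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Locks%'
  AND counter_name = 'Lock Requests/sec'
  AND instance_name = '_Total';

WAITFOR DELAY '00:00:10'; -- 10-second sample interval

SELECT @second = cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Locks%'
  AND counter_name = 'Lock Requests/sec'
  AND instance_name = '_Total';

SELECT (@second - @first) / 10 AS LockRequestsPerSec;

Repeat under representative load to find the peak, add the 15-20% headroom, divide by the NUMA node count, and then apply it ('locks' is an advanced option):

-- Sketch only: 1,000,000 is a hypothetical per-node value (3 NUMA nodes
-- on SQL 2005 would actually allocate 3,000,000 locks).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'locks', 1000000;
RECONFIGURE; -- reports success, but per the paper the new lock count
             -- only takes effect after the SQL Server service restarts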

This paper also has some misinformation, perpetuating the myth that the SQL engine creates an I/O thread for each filegroup. The rare exception is when creating a file and zeroing it out, and who isn't using Instant File Initialization these days?

    Creates multiple I/O queues to improve utilization of the SAN fabric; when combined with multiple filegroups (which create multiple I/O threads in the database engine…see below) can yield very large performance improvements

    and

    The second performance optimization is to spread the tables of the BizTalk Server databases across multiple filegroups. Each filegroup adds an I/O thread which, when coupled with a data file on a separate STORPort-driven SAN LUN (as described above) will greatly improve performance (gains from 100-1000% have been observed, depending on the load characteristics of the system). Not only is the table data moved into separate filegroups, but the LOB (TEXT/IMAGE) data and the non-clustered indexes as well.

This whitepaper is dated 2007 and seems geared toward the SQL 2000/2005 era. Needless to say, much has changed since then.

We're also being told that we should use 4 KB NTFS allocation units instead of 64 KB for TempDB and the log volume. The TempDB advice makes no sense to me, since 64 KB NTFS allocation units have been widely put forth as a best practice for data volumes. The log volume I sort of understand, since log writes are sector-aligned (512 bytes in most cases). But the LOG_MANAGER limits were changed dramatically in SQL 2008 and grew by a factor of 8, so I think most log flushes would be batched out and 64 KB makes more sense.
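
Just to sanity-check the flush-size argument, here's a rough sketch of my own (not from any of these papers): both log counters in sys.dm_os_performance_counters are cumulative since instance startup, so their ratio approximates the average log flush size per database:

-- Sketch only: cumulative bytes flushed divided by cumulative flush count
-- gives the average log flush size since startup.
SELECT f.instance_name AS database_name,
       b.cntr_value / NULLIF(f.cntr_value, 0) AS avg_flush_size_bytes
FROM sys.dm_os_performance_counters AS b
JOIN sys.dm_os_performance_counters AS f
  ON f.object_name = b.object_name
 AND f.instance_name = b.instance_name
WHERE b.counter_name = 'Log Bytes Flushed/sec'
  AND f.counter_name = 'Log Flushes/sec'
  AND b.object_name LIKE '%:Databases%';

If that average comes out well above 4 KB on our systems, it would support sticking with 64 KB on the log volume, at least as I read it.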

This link seems to be a nice summation, and it states 64 KB for log volumes is a best practice. http://blogs.technet.com/b/magi/archive/2010/04/17/sql-server-unplugged-follow-up.aspx

  • To be honest, that whole section on locks looks questionable.

The recommended practice for SQL Server is to leave that setting at default (https://msdn.microsoft.com/en-us/library/bb402936.aspx). It's checked by one of the built-in policies if policy-based management is used.
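
A quick way to confirm it's still at the default ('locks' is an advanced option, and a value of 0 means dynamic lock management, which is what you want):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'locks'; -- config_value and run_value of 0 = dynamic (the default)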

Unless you (or rather the consultant) have some data suggesting that lock allocations are a bottleneck (it would probably show up as a memory-related wait, though I'm not sure which one), I'd recommend leaving it alone.

As for the log flush size, that'll depend on the size of your transactions, since the log buffer has to be flushed to disk when any transaction commits. But 64 KB is probably fine.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
