... and it’s probably NOT your fault.
I was talking to a CIO recently about her SQL Server performance, which was horrible. This was very modern server hardware, with plenty of RAM, CPU and SSDs, running SQL Server 2014 Enterprise.
The issue boiled down to the installation having been done with all Microsoft out-of-the-box defaults and zero post-install configuration. I find this is a pretty standard configuration when non-DBAs do SQL Server installs. There’s nothing wrong with that, but you may find you are robbing yourself of a LOT of free performance.
Most of the defaults in the SQL Server installer are set up to make sure the performance is acceptable on a very modest server. Until version 2016, the installer had not changed much since version 2005.
There are several instance level settings in SQL Server that need to be set in a way specific to your environment and the workload you are going to run. At a minimum, memory settings, backup compression, and MAXDOP all need to be considered.
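Before changing anything, it helps to see where those settings currently stand. This is a minimal sketch using the standard `sys.configurations` catalog view (available in SQL Server 2005 and later); the setting names are the documented ones:

```sql
-- Inspect the instance-level settings discussed in this post.
-- value_in_use is the running value; value is the configured value
-- that takes effect after RECONFIGURE.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN (
    'max server memory (MB)',
    'backup compression default',
    'max degree of parallelism',
    'cost threshold for parallelism',
    'optimize for ad hoc workloads'
)
ORDER BY name;
```

If `value_in_use` for every row is still the out-of-the-box default, there is a good chance nobody has tuned the instance since it was installed.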
The problem is that in a large number of organizations, the person installing SQL Server is a Developer or SysAdmin running a script. Both are great at their specific jobs. However, most of them do not know how “Cost Threshold for Parallelism” or “Optimize for Ad Hoc Workloads” affects performance.
Likewise, if you are running SQL Server installed from a script that came with a 3rd party software product, you very likely have the “safest” possible settings in place. They will generally work on a server with low CPU and memory. The defaults may not get you the best performance from that new 40 CPU server with 1 TB of RAM.
The top six things I change on every install are:
- Enable Instant File Initialization – I enable this on every new instance unless the InfoSec team forbids it. This is in the Installer from 2016 on.
- Enable Backup Compression – faster backups and smaller storage requirements. A checkbox in the Instance Properties dialog.
- Set Max Server Memory – I leave some memory for the O/S that SQL can’t touch. This is also in the Instance Properties.
- Increase ERRORLOG retention – I set it to 25 and create a job to cycle the log weekly. Configurable from SSMS.
- Set Cost Threshold for Parallelism to a minimum of 25 – at the default of 5, many queries will go parallel that don’t need to. This is something to test, and software vendors may have specific guidance.
- Configure TempDB – add data files and trace flags for growth. More info in this Microsoft document. In 2016 and newer you can do this in the Installer.
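Several of the items above can be set with `sp_configure`. The sketch below shows the general shape; the specific values (25 for cost threshold, 122880 MB of max server memory on a hypothetical 128 GB box) are examples only, not recommendations for your hardware:

```sql
-- Post-install configuration sketch. Tune every value to your
-- own server and workload before running anything like this.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Max Server Memory: leave headroom for the O/S.
-- Example: 120 GB for SQL Server on a 128 GB machine.
EXEC sp_configure 'max server memory (MB)', 122880;

-- Backup Compression on by default for all backups.
EXEC sp_configure 'backup compression default', 1;

-- Cost Threshold for Parallelism: raise from the default of 5.
EXEC sp_configure 'cost threshold for parallelism', 25;

RECONFIGURE;

-- ERRORLOG retention is set through SSMS (or the registry);
-- the weekly cycle job mentioned above just needs one step:
EXEC sp_cycle_errorlog;
```

Instant File Initialization and TempDB data files are handled outside `sp_configure` (a Windows security policy and file-level settings respectively), which is why the 2016+ installer exposing them is such a welcome change.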
I have a post-install SQL script that I run with additional items. The script is customized for each client as a baseline, and individual instances are modified as necessary (e.g., for SharePoint, I set MAXDOP=1). This ensures consistency across their SQL Server estate and that they are getting the best performance out of their hardware investment.
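As an example of one of those per-instance modifications, the SharePoint case looks like this (MAXDOP=1 is Microsoft's documented requirement for SharePoint database servers; everything else about this snippet is just the standard `sp_configure` pattern):

```sql
-- SharePoint-specific override: SharePoint requires
-- Max Degree of Parallelism = 1 on its SQL Server instance.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```

Keeping overrides like this in a separate, clearly labeled section of the base script makes it obvious which instances deviate from the standard and why.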
Back to the CIO…we made some of the above changes to the defaults, restarted the SQL Instance and BAM! The server had massive performance gains without changing any hardware or upgrading the version of SQL Server. This was accomplished in just a few hours of my time, which included baselining and post-change testing.
If you don’t have a Database Admin and your SQL Server is struggling to keep up, a poor default configuration may be the issue. It may not be, but it is worth looking into. It’s even a good idea from time to time to have someone else come in and run a Health Check. Even the best DBA can miss things.