As a SQL Server DBA, the largest database I had to manage, both in size and in number of users, was usually an ERP database, such as Microsoft Dynamics AX. And because these databases typically have a lot of users, blocking was a major concern. It would show up as calls from users saying they hit the save button on a screen and it had been sitting there for ten minutes or more. Or they ran an ad-hoc query against the ERP database and it had been running for thirty minutes when it usually takes 2-3 minutes. That is a sure sign you have blocking going on.
When this happens, usually a quick look at the Activity Monitor in SQL Server will show if there is a bunch of blocking going on. If so, you can kill the long-running query that is causing most of the blocking, but that is just a temporary fix. How do you come up with a more permanent fix?
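If you prefer a query over Activity Monitor, the sketch below is one way to see who is blocking whom, using the blocking_session_id column of sys.dm_exec_requests. The KILL at the end is commented out and the session ID is just a placeholder; replace it with the actual head blocker before running it.

```sql
-- Sessions that are currently blocked, who is blocking them, and what they are running
SELECT
    r.session_id,
    r.blocking_session_id,            -- the session holding the lock
    r.wait_type,
    r.wait_time / 1000 AS wait_seconds,
    t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0
ORDER BY r.wait_time DESC;

-- Temporary fix only: kill the head blocker (placeholder session id)
-- KILL 53;
```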
Well, there are a number of solutions to prevent blocking:
- Use the NOLOCK query hint when using a SELECT (more info). Many times I have seen users running poorly written queries that take way too long to run. Obviously I want to clean up the code, but I might not have the time just yet, or the query is inside a stored procedure written by the vendor and I don’t want to tamper with the code. A quick fix is to add the NOLOCK query hint after each table in the SELECT (see the sketch after this list). Just be aware that you can get inconsistent data because of dirty reads.
- As mentioned above, cleaning up the query will result in locking fewer rows or shortening the length of the locks. Try to work with smaller sets, use better indexes, etc. Basically, reduce the time the query takes to run so it holds locks for a shorter amount of time. On days I dedicated to cleaning up queries, I would run SQL Server Profiler, sort by the queries that took the longest, and work on those (a DMV-based alternative is sketched after this list).
- You can change the default isolation level to read committed snapshot isolation (RCSI). RCSI causes SQL Server to keep a copy of any record being modified in the tempdb version store, and readers use that copy if the record is locked, so a read query does not have to wait for the lock to be released. Be aware that this adds tempdb overhead and, like regular read committed, it still allows non-repeatable reads and phantom reads (the one-line change to enable it is shown after this list).
- Make triggers asynchronous using SQL Server Service Broker to avoid a trigger making a transaction last too long and therefore holding locks and blocking others. For example, an INSERT on a table fires a trigger, which does updates and inserts on other tables. All of this runs in one transaction, holding locks until it completes; if one of those updates or inserts takes a long time, locks are still held on the other tables in the transaction that have nothing to do with the problem statement. By using Service Broker, you avoid one big transaction and instead have a bunch of smaller ones, so a problem table won’t cause blocking on the others (a simplified sketch follows this list).
- Use cache servers (i.e., database caching) for data that does not change much. The data sits in RAM and is returned quickly, and there is no locking since the tables are never touched. Some caching solutions will even generate static web pages from query results to reduce the hits on the database.
- Read from secondaries: use replication, log shipping, or a mirrored database with a snapshot (the snapshot approach requires SQL Server Enterprise edition). I have used replication as a secondary with great success. At one job, when I first got there, there were frequent locking issues on the ERP system causing long waits and timeouts, because all user reports and queries were going against it. The first thing I did was replicate all the ERP tables to another server and re-point all the reports and queries to this new “reporting” server. The long waits and timeouts all disappeared.
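Here is a minimal sketch of the NOLOCK hint mentioned in the first bullet; the table and column names are made up for illustration:

```sql
-- WITH (NOLOCK) on each table: reads do not wait on locks, but dirty reads are possible
SELECT soh.SalesOrderID, soh.OrderDate, c.CustomerName
FROM dbo.SalesOrderHeader AS soh WITH (NOLOCK)
JOIN dbo.Customer AS c WITH (NOLOCK)
    ON c.CustomerID = soh.CustomerID
WHERE soh.OrderDate >= '20240101';
```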
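Instead of Profiler, one way to find the statements that run the longest is to sort the sys.dm_exec_query_stats DMV by elapsed time. This is a sketch of that alternative approach, not the exact Profiler workflow described above:

```sql
-- Top 20 cached statements by total elapsed time (microseconds in the DMV, shown here as ms)
SELECT TOP (20)
    qs.total_elapsed_time / 1000                       AS total_elapsed_ms,
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count / 1000  AS avg_elapsed_ms,
    SUBSTRING(t.text, qs.statement_start_offset / 2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN DATALENGTH(t.text)
              ELSE qs.statement_end_offset END
         - qs.statement_start_offset) / 2 + 1)         AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
ORDER BY qs.total_elapsed_time DESC;
```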
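Enabling RCSI is a single ALTER DATABASE. The database name below is a placeholder, and WITH ROLLBACK IMMEDIATE is one way to deal with the fact that the change cannot take effect while other sessions are active in the database:

```sql
-- Placeholder database name; ROLLBACK IMMEDIATE disconnects other sessions so the change applies
ALTER DATABASE MyErpDb
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;

-- Verify the setting
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'MyErpDb';
```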
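The Service Broker idea is easier to picture with code. Below is a heavily simplified sketch of the sending side only: the trigger just queues a message and returns, and a separate activated procedure on the work queue (not shown) does the slow secondary updates and inserts in its own transactions. All object names are invented, the database is assumed to already have Service Broker enabled, and a real setup needs error handling and conversation reuse:

```sql
-- One-time plumbing (invented names): message type, contract, queues, and services
CREATE MESSAGE TYPE OrderInsertedMsg VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT OrderInsertedContract (OrderInsertedMsg SENT BY INITIATOR);
CREATE QUEUE dbo.OrderTriggerQueue;
CREATE QUEUE dbo.OrderWorkQueue;
CREATE SERVICE OrderTriggerService ON QUEUE dbo.OrderTriggerQueue;
CREATE SERVICE OrderWorkService ON QUEUE dbo.OrderWorkQueue (OrderInsertedContract);
GO

-- The trigger only sends a message, so the original INSERT's transaction stays short
CREATE TRIGGER dbo.trg_Orders_AfterInsert ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @handle UNIQUEIDENTIFIER;
    DECLARE @body XML =
        (SELECT OrderID FROM inserted FOR XML PATH('Order'), ROOT('Orders'), TYPE);

    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE OrderTriggerService
        TO SERVICE 'OrderWorkService'
        ON CONTRACT OrderInsertedContract
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @handle MESSAGE TYPE OrderInsertedMsg (@body);
END;
GO
-- An activation procedure attached to dbo.OrderWorkQueue would RECEIVE these messages
-- and perform the slow updates/inserts without holding up the original INSERT.
```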
More Info:
Kendra Little’s Five Ways to Fight Blocking Video