For the longest time, we rarely if ever got an error message about locking. However, another team that pushes data to our SQL Server changed their process, and since then we keep getting errors of "The instance of SQL Server Database Engine cannot obtain a LOCK resource at this time."
Part of the problem, we have noticed, is that their process creates about 160 connections to push data, and we believe this is causing the majority of the locking errors.
One thing we want to do is check the historical count of locking errors, so we can compare the average number of locking errors prior to their change with the average since their change.
As such, how would one go about checking historical locking errors? Unfortunately, I haven't found a way to do this and I'm hoping someone can help.
March 3, 2022 at 7:09 pm
If the absence of locking errors is evidence, then perhaps just going through the SQL Error logs from before the change and after the change would be what you need.
The following link will take you to a decent article on how to query the error logs using T-SQL.
https://www.sqlshack.com/read-sql-server-error-logs-using-the-xp_readerrorlog-command/
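For example, something along these lines will search the current error log for that specific message (the search string and dates are just placeholders for your own before/after windows):

-- xp_readerrorlog parameters: log file number (0 = current), log type (1 = SQL Server error log),
-- first search string, second search string, start time, end time, sort order.
EXEC master.dbo.xp_readerrorlog
     0, 1,
     N'cannot obtain a LOCK resource',
     NULL,
     '2022-02-01', '2022-03-04',
     N'asc';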
I don't believe you'll find that counts of locks are kept in perpetuity, or even for a "substantially long enough" period, unless you've put something in place to collect them, aggregate them, and store them prior to the need for such a thing.
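If you want before/after averages rather than raw log entries, a rough sketch like the following (the table and column names are just made up for the example) dumps the matching rows into a table and counts them per day. The error log rolls over, so you'd repeat the INSERT...EXEC for each archived log file number that still covers the period you care about:

-- Staging table for matching error-log rows.
IF OBJECT_ID('tempdb..#LockErrors') IS NOT NULL DROP TABLE #LockErrors;
CREATE TABLE #LockErrors
(
    LogDate     DATETIME      NOT NULL,
    ProcessInfo NVARCHAR(100) NULL,
    LogText     NVARCHAR(MAX) NULL
);

-- 0 = current log; repeat with 1, 2, ... for the archived logs.
INSERT INTO #LockErrors (LogDate, ProcessInfo, LogText)
EXEC master.dbo.xp_readerrorlog 0, 1, N'cannot obtain a LOCK resource';

-- Daily counts, which you can then average on either side of the change date.
SELECT  CAST(LogDate AS date) AS ErrorDay,
        COUNT(*)              AS LockErrorCount
FROM    #LockErrors
GROUP BY CAST(LogDate AS date)
ORDER BY ErrorDay;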
--Jeff Moden
Change is inevitable... Change for the better is not.
March 3, 2022 at 7:42 pm
Jeff:
Thank you for that. It provides some of the information, but I guess what is really needed is a way to tie those logs to a certain user / login. We know the login, but is there a way to find out that "login xyx is causing / experiencing the LOCK errors"? I see in the logs there is a "Source" column with spid40s, but I believe these spids get recycled, and I'm not sure I can tie this back to a certain user.
There's no place with long-term historical lock information that I know of (unless you've kept every transaction log backup). You can, however, start to collect current lock information. One friend of mine uses Extended Events, but I caution against this method because it collects EVERYTHING for both reads and writes. You could modify it so that it does an aggregation of, say, an hourly collection table, dump the aggregates into a reporting table, and clear the staging table. It also captures the "principal login name" and a few other things, as well as being customizable. Of course, you don't need XE for this. You could write something similar into a "per minute" job using only T-SQL.
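If you go the plain T-SQL route, a minimal sketch of that kind of "per minute" collector could look like the following, scheduled from an Agent job. The table name dbo.LockSnapshot is just something made up for the example:

-- One-time setup: a collection table for the snapshots (assumed name).
CREATE TABLE dbo.LockSnapshot
(
    CaptureTime   DATETIME2(3)  NOT NULL,
    LoginName     NVARCHAR(128) NULL,
    SessionId     INT           NOT NULL,
    DatabaseName  SYSNAME       NULL,
    ResourceType  NVARCHAR(60)  NULL,
    RequestMode   NVARCHAR(60)  NULL,
    RequestStatus NVARCHAR(60)  NULL
);

-- Run this every minute from a job: snapshot the current lock requests along with
-- the owning login, so the activity can be tied back to a specific user.
INSERT INTO dbo.LockSnapshot
        (CaptureTime, LoginName, SessionId, DatabaseName, ResourceType, RequestMode, RequestStatus)
SELECT  SYSDATETIME(),
        es.login_name,
        tl.request_session_id,
        DB_NAME(tl.resource_database_id),
        tl.resource_type,
        tl.request_mode,
        tl.request_status
FROM    sys.dm_tran_locks    AS tl
JOIN    sys.dm_exec_sessions AS es
        ON es.session_id = tl.request_session_id
WHERE   es.is_user_process = 1;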
Here's the link to the article from my friend. He uses the technique to discover which tables are being used and by whom... which sort-of fits what I believe you're requesting. It's not errors that are being collected... it's table usage for EVERY read and write.
https://spaghettidba.com/2022/03/01/recipe-7-finding-unused-tables/#comment-39326
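Just to illustrate the "principal login name" part, a bare-bones Extended Events session for lock_acquired filtered to one login might look like the sketch below. This is NOT the technique from the article above, and the session name, login, and file target are only assumptions. As cautioned, lock_acquired is extremely chatty, so keep the filter tight and don't leave it running long:

-- Minimal sketch of an XE session that captures lock acquisitions for one login.
CREATE EVENT SESSION LockByLogin ON SERVER
ADD EVENT sqlserver.lock_acquired
(
    ACTION (sqlserver.server_principal_name, sqlserver.session_id)
    WHERE (sqlserver.server_principal_name = N'DOMAIN\PushDataLogin')  -- assumed login
)
ADD TARGET package0.event_file
(
    SET filename = N'LockByLogin', max_file_size = (100)  -- MB per file
);
-- Start it only while you're actively investigating:
-- ALTER EVENT SESSION LockByLogin ON SERVER STATE = START;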
--Jeff Moden
Change is inevitable... Change for the better is not.