The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time

  • I have a developer at a remote location who got the following error:

    Msg 1204, Level 19, State 4, Line 1

    The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions.

    Can we get the locks significantly increased, say 2x or 3x? The error mentions memory too; that's probably not the cause here, but I'd like to increase it as much as possible as well. I would like to get to 20 concurrent processes running.

    I increased the memory, but I'm leery about changing the locks option.

    Any thoughts, ideas or suggestions would be greatly appreciated.


  • Well, have you identified what is using up so many locks? Start with:

    SELECT request_session_id, COUNT(*) AS num_locks
    FROM sys.dm_tran_locks
    GROUP BY request_session_id
    ORDER BY COUNT(*) DESC;
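
    If one session stands out, the same DMV can break its locks down by type and mode (a follow-on sketch, not in the original reply), which shows whether row, page, or key locks are piling up:

    SELECT request_session_id, resource_type, request_mode, COUNT(*) AS num_locks
    FROM sys.dm_tran_locks
    GROUP BY request_session_id, resource_type, request_mode
    ORDER BY COUNT(*) DESC;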

  • I haven't seen any posts, so I don't know if you have resolved this. But here is my recent experience.

    I was having the same error with a SQL statement similar to this one:

    delete top(500000) t1
    from table1 t1
    left outer join table2 t2
        on t1.col1 = t2.col1
    where t2.col1 is null

    I did it this way because there were over 18M rows to delete, the tables were huge, and the tables were in use by online users. My thought was that if I kept each batch of deletes small, the log file wouldn't grow much (the database uses the simple recovery model; it's a batch-loaded database) and I wouldn't have issues with index locks.
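
    As a quick side check (not part of the original run), log growth between batches can be watched with:

    -- Reports log file size and percent full for every database on the instance.
    DBCC SQLPERF(LOGSPACE);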

    After deleting several million rows, I got the message 'The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time'. I reduced the 'top' value to 100000 and reran. It ran for a little while and died again.

    The 'locks' option was set to 0 (dynamic), and every bit of memory on the server was already given over to the database, so there wasn't much left to configure at that point.

    A little thought crept into my head that the system was probably trying to build a consistent cursor over the join, so it might be locking all of the t1 rows that DID match as well. So I ran sp_lock in a while loop, counting the number of locks, and found that even though I was only deleting 100,000 rows, several million row locks were being taken.
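
    Something like the following loop does that count (a sketch of the idea; it counts through sys.dm_tran_locks rather than capturing sp_lock output, and session id 57 is just a placeholder for the spid running the delete):

    while 1 = 1
    begin
        -- 57 is a hypothetical session id: the spid running the big delete
        select count(*) as lock_count
        from sys.dm_tran_locks
        where request_session_id = 57

        waitfor delay '00:00:05'  -- re-sample every five seconds
    end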

    Fixing this required some extra work. First I selected the primary keys of the rows that needed to be deleted into a temp table; that solved the problem of locking a bunch of rows that weren't going to be deleted. Then, 50,000 at a time, I deleted keys out of that table into a second temp table using the OUTPUT clause on the delete statement, joined the second table to the original table for the real delete, and cleared it. I repeated this until the delete statement returned a zero row count. (Yes, I know the row-count check should probably come after the first delete statement, but it's a one-time program...)

    I could do this because there was no risk of table2 somehow acquiring the 'missing' rows during the run. If your remote developer is doing something similar, he may be able to use the same approach: identify all of the records to be processed first, then update or delete them in smaller batches by primary key.

    Here is the SQL:

    begin
        set nocount on

        declare @rowsDeleted int,
                @batchSize int,
                @displayDate varchar(50),
                @totalRowsDeleted int,
                @toBeDeleted int

        set @batchSize = 50000
        set @totalRowsDeleted = 0
        set @rowsDeleted = @batchSize   -- primes the loop condition below

        select @displayDate = CONVERT(varchar(50), getdate(), 120)

        -- Holds the primary keys of every row that should be deleted.
        declare @deleteTable table (
            col1 numeric(11,0) not null
        )

        -- Holds the keys for the current batch only.
        declare @batchTable table (
            col1 numeric(11,0) not null
        )

        -- Capture the keys up front so the big anti-join runs only once.
        insert into @deleteTable (col1)
        select t1.col1
        from table1 t1
        left outer join table2 t2
            on t1.col1 = t2.col1
        where t2.col1 is null

        select @toBeDeleted = COUNT(*)
        from @deleteTable

        raiserror('%s: Rows to be deleted: %d', 0, 10, @displayDate, @toBeDeleted) with nowait

        while @rowsDeleted = @batchSize
        begin
            delete from @batchTable

            -- Move one batch of keys from the work table into the batch table.
            delete top (@batchSize) @deleteTable
            output deleted.col1
            into @batchTable (col1)

            -- Delete the real rows by primary key; only these rows get locked.
            delete t2
            from @batchTable t1
            inner join table1 t2
                on t1.col1 = t2.col1

            select @rowsDeleted = @@ROWCOUNT
            select @displayDate = CONVERT(varchar(50), getdate(), 120)
            set @totalRowsDeleted += @rowsDeleted

            raiserror('%s: Deleted: %d  Total Deleted: %d', 0, 10,
                      @displayDate, @rowsDeleted, @totalRowsDeleted) with nowait
        end
    end

