Could not continue scan with NOLOCK due to data movement

  • Dean Cochrane (3/17/2011)


    The problem is that you ARE getting a dirty read, and that's what is causing the error. You are correct: READ COMMITTED is the default isolation level, and this error does not occur at that level.

    If you must read and you are having trouble with concurrency (i.e. you get excessive blocking), you might want to consider READPAST. READPAST will, as it says, read past locked rows; however, if other transactions are holding large numbers of locks this may not work for you.
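    Syntax-wise READPAST is just a table hint; something like this (untested, and the table and column names are only placeholders):

        -- Skip rows that other transactions currently hold locks on,
        -- instead of blocking on them or reading uncommitted data.
        SELECT ticket, dependent, sort_string
        FROM dbo.Order_Strategy WITH (READPAST)
        ORDER BY sort_string;

    Just bear in mind that any rows skipped this way simply don't appear in the result set.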

    Dean, I agree with you that the dirty read is the source of the error, but it's not the entirety of the problem. It's the fact that the nonclustered index seek occurs first and the key lookup then fails when it goes back for the clustered row it pulled the keys from in the first step.

    While I agree in general that read uncommitted should be used cautiously at best, this is a combination of issues. Read committed, as you've mentioned, would help with the problem. It's not the only way to cure it, though, and I'd need to double-check the read committed locking mechanism between the NC index and the clustered key lookup to see whether it would actually be a permanent cure for the primary cause: data going AWOL between the first read and the second lookup.
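    To make that failure mode concrete, the query shape I'm describing is roughly this (the index name, columns, and literal value are made up for illustration):

        -- Hypothetical nonclustered index that satisfies the WHERE clause;
        -- sort_string still has to come back from the clustered index.
        CREATE NONCLUSTERED INDEX IX_Order_Strategy_dependent
            ON dbo.Order_Strategy (dependent);

        -- Under NOLOCK the plan does an NC index seek followed by a key lookup.
        -- If the rebuild deletes the row between those two reads, you get
        -- error 601, "Could not continue scan with NOLOCK due to data movement."
        SELECT ticket, dependent, sort_string
        FROM dbo.Order_Strategy WITH (NOLOCK)
        WHERE dependent = '844986';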


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • bob mazzo (3/17/2011)


    Hi Craig,

    It's a SQL job that schedules the order_strategy table rebuild.

    So, every 5 minutes on a timer (or whatever this is scheduled at), something goes in and obliterates and rebuilds this table. There's no cause-effect logic, just a timer. Okay.

    i.e. order_strategy forces certain types of parent/child orders to "stick" together on the main order board. The parent is the "00000" record, and the sort_string allows child orders to be grouped with the parent (regardless of its ticket sort).

    Ticket   Dependent   Sort_String
    844986   000000      C 0.0999 WEB844985844986
    844987   844986      C 0.0999 WEB844985844986844987844987
    844988   844986      C 0.0999 WEB844985844086844987844988

    Yeah, I saw this part, but without seeing what it's building off of, it's tough to offer a permanent solution. Why not build a view that does this instead, so it's simply accurate on each call?
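    Very roughly, something like this; I'm guessing at the Orders columns and at the expression, so treat it purely as a sketch:

        -- Sketch only: derive the grouping data at query time instead of
        -- persisting it in Order_Strategy. Column names are guesses.
        CREATE VIEW dbo.vw_Order_Strategy
        AS
        SELECT  o.ticket,
                o.dependent,
                -- whatever expression your rebuild proc uses today goes here
                'WEB' + CONVERT(varchar(20), o.dependent)
                      + CONVERT(varchar(20), o.ticket) AS sort_string
        FROM    dbo.Orders AS o;

    Your main query would then read from dbo.vw_Order_Strategy instead of the table, and there's nothing left to rebuild or to collide with.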

    I'm looking at immediately changing the TRUNCATE to DELETE FROM Order_Strategy, and also wrapping the DELETE in a transaction. Then changing my main stored proc (the one that reads from Orders and sends the results back to an aspx page) to use READ COMMITTED.

    Well, you'd wrap a transaction around the delete and the new insert, and that would block anything using the table in the meantime. How quickly does your process run? Is a block every 5 minutes acceptable? It's obviously more acceptable than an error, but is that really what you want as an end result?
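    i.e. something along these lines; sketch only, with the INSERT standing in for however your rebuild proc currently stages the new rows:

        BEGIN TRANSACTION;

            -- DELETE instead of TRUNCATE, and keep the delete and the reload
            -- together so readers see either the old rows or the new rows
            -- once the transaction commits, never a half-built table.
            DELETE FROM dbo.Order_Strategy;

            INSERT INTO dbo.Order_Strategy (ticket, dependent, sort_string)
            SELECT ticket, dependent, sort_string
            FROM #temp;   -- placeholder for your proc's staged rows

        COMMIT TRANSACTION;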


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Craig Farrell (3/17/2011)


    ...and I'd need to double-check the read committed locking mechanism between the NC index and the clustered key lookup to see whether it would actually be a permanent cure for the primary cause: data going AWOL between the first read and the second lookup.

    As I thought I remembered, READ COMMITTED against one index doesn't stop usage or modification via another.

    check this link out:

    http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/11/16/1345.aspx


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • bob mazzo (3/17/2011)


    Hi Craig,

    First I would say YES - a very short term block every few minutes is acceptable. I can't imagine it would be blocked more than even 2 seconds.

    Secondly, in terms of the view, I actually never thought of that as a solution but would certainly entertain that idea.

    From what I understand (off the top of my head), a view just re-executes its underlying SQL statement on the fly as needed, i.e. I include the view name in my main query as opposed to the Order_Strategy table itself.

    SOME BACKGROUND INFO:

    I hate to say it, but rebuilding this ridiculous Order_Strategy table is a bit convoluted. I first open a cursor based on the main Orders table, loop through the cursor, calculate some sort strings based on live FX market data, then insert each record into a #temp table. It ends up being fewer than 50 records, I believe.

    I wrote this proc to code around what is really a very bad SQL Server backend design. The original system was actually storing too many values back to the main Orders table. So any info related to a customer order (i.e. the calculated sort string, the color of the order on the main display, the last market rate, etc.) was ALL getting written back to the customer orders inside SQL triggers.

    I pulled a lot of the triggers out of the back end with the goal of calculating relevant fields on the fly, as opposed to reading these fields from the record itself.
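    For what it's worth, the shape of the rebuild proc is roughly this (simplified from memory, and the column names and types are approximate):

        CREATE TABLE #temp
        (
            ticket      int,
            dependent   varchar(10),
            sort_string varchar(100)
        );

        DECLARE @ticket int, @dependent varchar(10);

        DECLARE order_cur CURSOR LOCAL FAST_FORWARD FOR
            SELECT ticket, dependent FROM dbo.Orders;

        OPEN order_cur;
        FETCH NEXT FROM order_cur INTO @ticket, @dependent;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- the sort-string calculation against live FX rates goes here;
            -- this value is just a placeholder
            INSERT INTO #temp (ticket, dependent, sort_string)
            VALUES (@ticket, @dependent, 'WEB' + CAST(@ticket AS varchar(20)));

            FETCH NEXT FROM order_cur INTO @ticket, @dependent;
        END

        CLOSE order_cur;
        DEALLOCATE order_cur;

        -- the #temp rows then replace the contents of Order_Strategy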

  • bob mazzo (3/17/2011)


    Hi Craig,

    First I would say YES - a very short term block every few minutes is acceptable. I can't imagine it would be blocked more than even 2 seconds.

    Then this is probably the way to go as a short-term fix, but I would switch yourself to the SERIALIZABLE isolation level to make sure you don't have concurrent modification problems.
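    i.e. at the top of the reading proc, something like this (the SELECT is just a stand-in for whatever the page query really does):

        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

        BEGIN TRANSACTION;

            SELECT ticket, dependent, sort_string
            FROM dbo.Order_Strategy;   -- plus whatever joins the real proc does

        COMMIT TRANSACTION;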

    Secondly, in terms of the view, I actually never thought of that as a solution but would certainly entertain that idea.

    From what I understand (off the top of my head), a view just re-executes its underlying SQL statement on the fly as needed, i.e. I include the view name in my main query as opposed to the Order_Strategy table itself.

    SOME BACKGROUND INFO:

    I hate to say it, but rebuilding this ridiculous Order_Strategy table is a bit convoluted. I first open a cursor based on the main Orders table, loop through the cursor, calculate some sort strings based on live FX market data, then insert each record into a #temp table. It ends up being fewer than 50 records, I believe.

    I wrote this proc to code around what is really a very bad SQL Server backend design. The original system was actually storing too many values back to the main Orders table. So any info related to a customer order (i.e. the calculated sort string, the color of the order on the main display, the last market rate, etc.) was ALL getting written back to the customer orders inside SQL triggers.

    I pulled a lot of the triggers out of the back end with the goal of calculating relevant fields on the fly, as opposed to reading these fields from the record itself.

    Well, we might be able to help you with a long-term solution here: a dynamic result set instead of a constant rebuild, but that's up to you. For us to help, we'd need to see the DDL of the base tables, some sample data (mock up whatever is private, just make sure the associations still make sense), and the script you're using to build out the table.

    There are some T-SQL geniuses who haunt this place who consider challenges like that fun. 😀


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA
