Incremental data retrieval

  • I am having a strange problem while fetching incremental data from a live transaction table.

    This table receives a huge volume of INSERT commands.

    I am fetching incremental records by row_id.

    The query is like this:

    DECLARE @MaxIDApp1 AS BIGINT;

    SET @MaxIDApp1 = (SELECT ISNULL(MAX(Row_ID), 0) FROM TerminalTrades);

    INSERT INTO TerminalTrades (row_id, .., Terminalid, Branchid, clientCode, symbol, symbolCode)
    SELECT row_id, .., mnmuser, mnmbranchid, mnmaccountid AS clientCode, .. ...
    FROM tradehist WITH (NOLOCK)
    WHERE row_id > @MaxIDApp1;

    Still, sometimes some records do not come into the TerminalTrades table.

    How is that possible?

    This is a very simple command; I am fetching data based on row_id.

    Kindly reply, as I am stuck here and do not understand why it is behaving like this.

    Failure is not the worst thing. The worst thing is not to try.
  • virender.singh (4/21/2011)


    I am having a strange problem while fetching incremental data from a live transaction table. [...] Still, sometimes some records do not come into the TerminalTrades table. How is that possible?

    You are setting @MaxIDApp1 from TerminalTrades and then selecting data from TradeHist. I can't begin to guess what you are trying to do, but this logic can produce undesired results (including not inserting anything at all).

    _______________________________________________________________

    Need help? Help us help you.

    Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.

    Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.

    Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
    Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
    Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
    Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/

  • How is it possible?

    Like this (here is a working, tested demonstration):

    -- We could create the same "hole" in the IDs by deleting some rows, but this is shorter:
    CREATE TABLE test_t (x INT IDENTITY(3,1)) -- start with 3, increment by 1

    INSERT INTO test_t DEFAULT VALUES -- inserts 3

    SELECT * FROM test_t -- prints 3

    -- Your code executes, transferring ID "3" into TerminalTrades.

    -- Someone inserts rows with IDs that fill the "holes":
    SET IDENTITY_INSERT test_t ON

    INSERT INTO test_t (x) VALUES (2)

    SET IDENTITY_INSERT test_t OFF

    SELECT * FROM test_t -- prints 2, 3

    -- Your code executes again, but the row with ID "2" will NEVER be fetched!

    Therefore, you cannot rely on identity to detect updated or newly inserted rows. Instead, add a column of the ROWVERSION datatype and use that (see the sketch below).

    Read this: http://msdn.microsoft.com/en-us/library/ms182776.aspx
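
    To make the ROWVERSION idea concrete, here is a minimal sketch, assuming an insert-only source table. The rv column, the SyncWatermark table, and the shortened column lists are illustrative assumptions, not from the original post:

    -- Illustrative sketch; "rv" and "SyncWatermark" are made-up names.
    -- 1. Add a rowversion column to the source table (populated automatically).
    ALTER TABLE tradehist ADD rv ROWVERSION;

    -- 2. A one-row table remembers how far we have synced.
    CREATE TABLE SyncWatermark (LastRV BINARY(8) NOT NULL);
    INSERT INTO SyncWatermark (LastRV) VALUES (0x0000000000000000);

    -- 3. Each run: copy everything changed since the watermark, then advance it.
    DECLARE @LastRV BINARY(8), @NewRV BINARY(8);
    SELECT @LastRV = LastRV FROM SyncWatermark;
    SET @NewRV = MIN_ACTIVE_ROWVERSION(); -- no active transaction can still commit a rowversion below this

    INSERT INTO TerminalTrades (row_id, Terminalid, Branchid, clientCode)
    SELECT row_id, mnmuser, mnmbranchid, mnmaccountid
    FROM tradehist
    WHERE rv >= @LastRV AND rv < @NewRV;

    UPDATE SyncWatermark SET LastRV = @NewRV;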

    Another scenario:

    You use the NOLOCK hint (and the ID is not an identity). You may read and transfer "dirty" rows that were inserted into the original table within a transaction but rolled back later, e.g. because of some constraint failure. That way you pick up a higher ID than really exists in the original table, and therefore jump over the real rows that are inserted later with lower IDs. So the NOLOCK hint is really dangerous to use here - get rid of it! If you are concerned about waiting on locked rows, use TOP (x) to do the inserts in smaller batches in a WHILE loop until @@ROWCOUNT is 0, as in the sketch below. And don't forget the ORDER BY.
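
    A sketch of that batched approach (the 10,000 batch size and the abbreviated column list are illustrative, not from the original post):

    DECLARE @MaxIDApp1 BIGINT, @Rows INT;
    SET @Rows = 1;

    WHILE @Rows > 0
    BEGIN
        SET @MaxIDApp1 = (SELECT ISNULL(MAX(Row_ID), 0) FROM TerminalTrades);

        INSERT INTO TerminalTrades (row_id, Terminalid, Branchid, clientCode)
        SELECT TOP (10000) row_id, mnmuser, mnmbranchid, mnmaccountid
        FROM tradehist               -- no NOLOCK: read only committed rows
        WHERE row_id > @MaxIDApp1
        ORDER BY row_id;             -- TOP without ORDER BY picks arbitrary rows

        SET @Rows = @@ROWCOUNT;
    END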

    _____________________________________________________
    Microsoft Certified Master: SQL Server 2008
    XDetails Addin - for SQL Developers
    blog.sqlxdetails.com - Transaction log myths
  • Hi Vedran,

    Thanks for your reply.

    Yes, I agree that NOLOCK reads dirty rows, but it is confirmed that there will be no transaction rollback.

    So there is no harm in using NOLOCK to avoid shared locks.

    Also, there is only one process inserting data into this table, so there will be no missed identity values and no inserts with IDENTITY_INSERT ON.

    What I am asking is: what is actually causing this problem?

    Failure is not the worst thing. The worst thing is not to try.
  • You didn't post the most important thing: the definition of the table(s).

    Without that, don't expect a very precise answer.

    Also, you don't need an explicit "ROLLBACK TRAN" in your code to have a transaction rollback.

    Every SQL command that changes data runs in a transaction, created implicitly.

    You can see that in the transaction log (a BEGIN TRAN and COMMIT TRAN around your commands).

    That's because SQL commands must be atomic: they either succeed completely or not at all; they cannot partially succeed.

    E.g. you update 100 rows with one UPDATE command, and a NOT NULL or PK constraint fails on the 94th row. At the beginning of the UPDATE, SQL Server places an implicit BEGIN TRAN, and at the 94th row it performs an implicit ROLLBACK TRAN.

    So yes, you surely have transactions and rollbacks, even though you didn't explicitly write that code.
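
    A quick way to see this statement-level atomicity in action (demo_atomic is just a scratch table invented for this example):

    CREATE TABLE demo_atomic (id INT PRIMARY KEY);

    -- One INSERT carrying three rows; the duplicate key in the third row
    -- fails the whole statement, so NO rows end up in the table.
    INSERT INTO demo_atomic (id)
    SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 1;

    SELECT COUNT(*) AS rows_inserted FROM demo_atomic; -- returns 0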

    _____________________________________________________
    Microsoft Certified Master: SQL Server 2008
    XDetails Addin - for SQL Developers
    blog.sqlxdetails.com - Transaction log myths
