Updates which don't participate in a transaction.

  • How do I update a particular table within a transaction so that it is excluded from the transaction? I need to write to a log table, but I don't want the trace to disappear if the transaction is rolled back. The log events can be invaluable for finding the steps that led to the rollback.

  • In SQL Server, you can use savepoints: roll back to the savepoint rather than the whole transaction, and then run the logging based on the error code.

    Oracle provides autonomous transactions (declared with PRAGMA AUTONOMOUS_TRANSACTION), which let you start a transaction from within another transaction with a commit/rollback scope independent of the outer transaction's, but SS2K has nothing like this. I haven't checked whether SQL Server 2005 offers anything similar.
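    The savepoint idea above might look like the sketch below; the table names (VeryImportantTable, dbo.ErrorLog) are assumptions for illustration. Note the caveat: the log row still disappears if the outer transaction is later rolled back in full, since only the work after the savepoint is undone.

    -- Hedged sketch: undo only the failed step via a savepoint, then log.
    -- Table names are hypothetical.
    BEGIN TRANSACTION

    SAVE TRANSACTION BeforeRiskyStep

    INSERT INTO VeryImportantTable (id) VALUES (1)   -- the risky work

    IF @@ERROR <> 0
    BEGIN
        -- undo only the work since the savepoint; the transaction stays open
        ROLLBACK TRANSACTION BeforeRiskyStep
        INSERT INTO dbo.ErrorLog (Message) VALUES ('insert failed')
    END

    COMMIT TRANSACTION   -- the log row commits with everything else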

  • I was hoping that there might be some relevant OPTION variant that I had missed.

    I figure one way of doing this is to write a comma-delimited file (or perhaps write to the application log) and then re-read it at the end of the job. However, the results need to end up in a SQL table so that they can be accessed from another machine where they are displayed.

    Thanks for the reply

  • Another nasty way of doing it is to use xp_cmdshell to start a copy of OSQL from the command prompt - this makes its own fresh connection to SQL Server and can run any code you want in its own transaction. You can use something like

    OSQL -E -S [myServer] -d [myDB] -Q "insert into log(..) values(...)"

    It's not as nice as the suggestion that Oracle offers, but it will get the data directly into a SQL table for you as a separate transaction without needing to clog and read event logs on the server...  Other ways could be to run some sort of DTS package at certain times from your code or to use the sp_OACreate + related procs to create some COM object which does the logging - again yuk!
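    Wrapped in T-SQL, the OSQL call above might look like this. This is a sketch only: the server, database, and table names are assumptions, and xp_cmdshell must be available to the caller.

    -- Hedged sketch: log through a separate OSQL connection via xp_cmdshell.
    -- Server/database/table names are hypothetical.
    DECLARE @cmd VARCHAR(500)
    SET @cmd = 'OSQL -E -S myServer -d myDB -Q "INSERT INTO dbo.ErrorLog (Message) VALUES (''step failed'')"'
    EXEC master.dbo.xp_cmdshell @cmd, no_output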

  • It creates a deadlock.

    The OSQL statement will wait for the transaction to commit before it can read data created within that transaction, while the transaction will wait for xp_cmdshell to return before it can proceed.

    _____________
    Code for TallyGenerator

  • How about building up an insert/update statement as a string, then executing it after the rollback? Something like this:

    BEGIN TRANSACTION

    INSERT INTO VeryImportantTable (...) VALUES (...)

    IF @@ERROR != 0
    BEGIN
        DECLARE @sSQL VARCHAR(1000)
        SET @sSQL = 'INSERT INTO ErrorLog (Message) VALUES (''Something went wrong with insert into VeryImportantTable'')'
        ROLLBACK TRANSACTION
        EXECUTE (@sSQL)
    END
    ELSE
        COMMIT TRANSACTION

     

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Try Using a Table Variable:

    Transactions that involve table variables last only for the duration of an update on the table variable. Therefore, table variables require less locking and logging resources. Because table variables have limited scope and are not part of the persistent database, transaction rollbacks do not affect them.

    http://support.microsoft.com/default.aspx?scid=kb;en-us;305977
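    A minimal illustration of that behavior (the variable and column names are just for the demo):

    -- Table variables are not affected by ROLLBACK, so the row survives.
    DECLARE @log TABLE (msg VARCHAR(20) NOT NULL)

    BEGIN TRAN
    INSERT @log VALUES ('logged')
    ROLLBACK TRAN

    SELECT msg FROM @log   -- still returns 'logged'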


  • You may be able to use ADO to insert into the log and bypass the transaction:

    1. Use sp_OACreate 'ADODB.Connection' to create a connection.

    2. Then use sp_OAMethod to "Execute" your SQL insert command.
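    Sketched out, the two steps above might look like this. The connection string and table name are assumptions, and error checking on the @hr return values is omitted for brevity:

    -- Hedged sketch: log via a separate ADO connection using the OLE Automation procs.
    DECLARE @conn INT
    EXEC sp_OACreate 'ADODB.Connection', @conn OUT
    EXEC sp_OAMethod @conn, 'Open', NULL,
         'Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDB;Integrated Security=SSPI'
    EXEC sp_OAMethod @conn, 'Execute', NULL,
         'INSERT INTO dbo.ErrorLog (Message) VALUES (''logged outside the transaction'')'
    EXEC sp_OADestroy @conn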

  • I think Ríchard has the solution. Using a table variable as an intermediate cache within the transaction seems to be the way to go.

    It works for a scenario like this: assume we have a procedure that does three steps of work within a transaction. After each step, some log information is to be written to a permanent log table.

    This simple demo does two successful steps, then does a rollback before the third. The expected end result is that the transaction is rolled back, but the log table contains records up to the last successful step.

    -- create a worktable and a logtable
    create table dbo.test1 (id int not null, someVal varchar(10) not null)
    go
    create table dbo.log1 (errorTxt varchar(25) not null)
    go

    -- the example proc
    create proc dbo.iTest1
    as
    set nocount on
    BEGIN TRAN

    declare @tmplog table (logtext varchar(25) not null)

    -- do some work
    insert dbo.test1 select 1, 'step1'
    -- log work done
    insert @tmplog select 'step1'

    -- do some more work
    insert dbo.test1 select 1, 'step2'
    -- log work done
    insert @tmplog select 'step2'

    -- do yet more work
    insert dbo.test1 select 1, 'step3'
    -- error happens!
    goto errhandler
    -- log work done
    insert @tmplog select 'step3'

    -- when all is done, flush @tmplog to the permanent logtable and commit
    insert dbo.log1 select * from @tmplog
    COMMIT TRAN
    return

    errhandler:
    if ( @@trancount > 0 ) ROLLBACK
    -- tran rolled back, flush cached log to the permanent logtable
    insert dbo.log1 select * from @tmplog
    return
    go

    -- start work
    exec dbo.iTest1
    go

    -- check that no work was done, but that there are log records
    select * from test1
    select * from log1
    go

    id          someVal   
    ----------- ----------

    (0 row(s) affected)

    errorTxt                 
    -------------------------
    step1
    step2

    (2 row(s) affected)

    drop table dbo.log1, dbo.test1
    go

    /Kenneth

     

  • Thanks All,

    Lots of good ideas here.

  • As long as only log writes are done in the separate connection, no blocking should occur when running xp_cmdshell, ADO connections, etc...  In any case, I hadn't thought of the table variable (or the other idea of storing a statement in a local variable) - nice thinking
