Export and Transaction Log

  • There is a job running that does an INSERT of all rows from a view into a table.

    The transaction log is growing out of control because of this job.

    If I had them create an SSIS job to EXPORT the data from a view to the table, would that have the same effect on the transaction log?

  • Job details would be very helpful in determining why the log is growing.

    Does it use a cursor and create individual INSERT statements for each record, or does it insert records in batches? Are there indexes on the table? How many records are in the table? How many records does the transfer job move?

    More information would be helpful.
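
    In the meantime, a couple of quick checks can show how much the job actually writes to the log (the table name below is a placeholder, substitute your own):

    -- Watch log usage for every database while the job runs
    DBCC SQLPERF(LOGSPACE);

    -- Row count and size of the target table
    EXEC sp_spaceused 'dbo.target_table';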


  • Carol Adams (4/5/2011)


    There is a job running that does an INSERT of all rows from a view into a table.

    The transaction log is growing out of control because of this job.

    If I had them create an SSIS job to EXPORT the data from a view to the table, would that have the same effect on the transaction log?

    What's the size of the transaction? Meaning, how often are you committing?

    Is the database in full recovery model? If yes, back up the transaction log every 15 minutes or so.
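
    A quick way to check the model and take a log backup (the database name and backup path below are placeholders, adjust to your environment):

    -- Check the current recovery model
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = 'your_database';

    -- Back up the transaction log; schedule this every 15 minutes or so
    BACKUP LOG your_database
    TO DISK = 'D:\Backups\your_database_log.trn';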

    _____________________________________
    Pablo (Paul) Berzukov

  • There are no BEGIN and END TRANSACTION statements...it is simply

    an INSERT into an empty table via SELECT * FROM a_view.

    The view returns roughly 4 million rows.

    I am not actually responsible for the code of the job, just the server itself.

    I want to make recommendations on a better way. Someone suggested using BULK INSERT instead and setting the batch size.
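
    BULK INSERT itself loads from a file rather than a view, but the underlying idea, committing in small chunks so the log space can be reused between batches, applies to this job directly. A minimal batching sketch, assuming the view exposes a unique key column (called id here; the other column names and the table/view names are placeholders from this thread):

    -- Copy the view in chunks so each transaction, and its log usage,
    -- stays small. Frequent log backups (full recovery) or simple
    -- recovery let the log space be reused between batches.
    DECLARE @batch INT = 50000;

    WHILE 1 = 1
    BEGIN
        INSERT INTO dbo.target_table (id, col1, col2)
        SELECT TOP (@batch) v.id, v.col1, v.col2
        FROM dbo.a_view AS v
        WHERE NOT EXISTS (SELECT 1 FROM dbo.target_table AS t WHERE t.id = v.id);

        IF @@ROWCOUNT < @batch BREAK;  -- last chunk copied, stop looping
    END;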

  • Hi!

    I'm guessing that you use the full recovery model while the job is running. Change the recovery model to bulk-logged and, when the job is done, change it back to full.

    This will reduce the growth of the log file and speed up performance.

    You can do this with T-SQL and put it in the job.

    -- change to bulk-logged
    USE master;
    ALTER DATABASE database_name SET RECOVERY BULK_LOGGED;

    -- and back to full
    USE master;
    ALTER DATABASE database_name SET RECOVERY FULL;
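
    One caveat worth adding: bulk-logged only reduces logging for operations that qualify for minimal logging, and a plain INSERT ... SELECT usually does not qualify on its own. Taking a table lock on the empty target generally does; a hedged sketch, using the placeholder names from this thread:

    -- With bulk-logged (or simple) recovery, a TABLOCK hint on the
    -- empty target table lets the INSERT...SELECT be minimally logged.
    INSERT INTO dbo.target_table WITH (TABLOCK)
    SELECT *
    FROM dbo.a_view;

    Also note that a log backup taken while bulk operations ran under bulk-logged cannot be used for point-in-time restore to a moment inside that backup, so take a log backup right after switching back to full.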

    Information about recovery model

    http://msdn.microsoft.com/en-us/library/ms189275.aspx

    Regards

    Vincent
