tlog grows larger when SP is run from job than from QA

  • I have a stored procedure that truncates the data out of the tables in one database and refreshes them with a SELECT * from another database. While testing the stored procedure, one of the larger tables caused the drive where the transaction log is located to fill up, so I modified the procedure to load that table in batches instead of in one large transaction. When I run the stored procedure in Query Analyzer it completes successfully and the log does not grow beyond the 10GB it is currently sized at, but when the stored procedure is run from a job it fails because the transaction log drive fills up and the log grows to over 17GB. Why would the same stored procedure behave differently when run from a job than when run from Query Analyzer?
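    For reference, the batched load looks roughly like this. This is only a minimal sketch: MyTable, SourceDB, the KeyCol join column, and the 10,000-row batch size are placeholders, not the real objects.

        -- Refresh one large table in batches so each INSERT commits separately
        -- and the log space can be reused between batches (simple recovery model)
        TRUNCATE TABLE dbo.MyTable

        DECLARE @rows INT
        SET @rows = 1

        SET ROWCOUNT 10000          -- limit each INSERT to 10,000 rows
        WHILE @rows > 0
        BEGIN
            INSERT INTO dbo.MyTable
            SELECT s.*
            FROM SourceDB.dbo.MyTable s
            WHERE NOT EXISTS (SELECT 1 FROM dbo.MyTable t WHERE t.KeyCol = s.KeyCol)

            SET @rows = @@ROWCOUNT  -- 0 once every source row has been copied
        END
        SET ROWCOUNT 0              -- turn the row limit back off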

  • Hi Michelle!

    If you run it from QA, you have probably been running the pieces of the script in bunches, that is, in separate runs, which keeps each transaction smaller. Also, if your job runs overnight, maybe other users are not on the database at that time, so the t-log is all yours.

    A piece of advice: use "TRUNCATE TABLE TableName", because that operation is only minimally logged and so is very light on the transaction log. Using batches is another good idea! A quick sketch follows below.

    Cheers
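    For example, the per-table refresh described above would look like this (MyTable and SourceDB are placeholder names):

        -- TRUNCATE only logs the page deallocations, unlike DELETE, which logs every row
        TRUNCATE TABLE dbo.MyTable

        -- The INSERT ... SELECT is still fully logged, so it is the part worth batching
        INSERT INTO dbo.MyTable
        SELECT *
        FROM SourceDB.dbo.MyTable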

  • I am running the same thing in QA as I do in the job. I just execute the stored procedure, which does a truncate of each table and then an insert with SELECT * from the corresponding table in the other database. When I run the SP from QA it runs successfully and the log doesn't grow over the 10GB that is currently allocated to it, but when the SP runs as a step of the job it fails because it fills the disk and the log grows to over 17GB. Why would the SP act differently in a job than in QA?

  • Can you monitor the open transaction level when executing from QA and from the job? (A quick sketch of the checks I mean is below.) Could it be that when executing from the job the whole thing is treated as one big transaction, like BEGIN TRANSACTION ... EXECUTE my_sp_... COMMIT TRANSACTION? Can you post how you divide the insert into batches?

    Also, could you just back up one database and restore it over the other instead of truncating all the tables and inserting everything from the other database?

    Regards, Yelena Varsha
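    A minimal sketch of the monitoring I mean (MyDatabase is a placeholder name), run while the job step is executing:

        -- Shows the oldest active transaction in the database, if any
        DBCC OPENTRAN ('MyDatabase')

        -- Shows current log size and percent used for every database
        DBCC SQLPERF (LOGSPACE)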

  • I thought that it might be running the job as one big transaction too. I manually ran the job yesterday instead of letting it run from the schedule and it was successful. I'm thinking it may be the time I had it scheduled that was causing the problem, and maybe other processing being done on the database at the same time. I have changed the time the job is scheduled to run and will see if that fixes the problem. I cannot just do a backup/restore of the database, as this database is only a subset of the tables from the other database.

  • The job runs successfully when I right-click on it in Enterprise Manager and click start job, but it fails when it runs on its schedule. What could cause that? It doesn't make sense.

  • Try dropping and re-creating the stored procedure. It's strange, but sometimes it works. Don't forget to back up the permissions first.
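    Roughly like this (dbo.usp_RefreshTables and SomeRole are placeholder names, and the real procedure body goes where the comment is):

        IF OBJECT_ID('dbo.usp_RefreshTables') IS NOT NULL
            DROP PROCEDURE dbo.usp_RefreshTables
        GO

        CREATE PROCEDURE dbo.usp_RefreshTables
        AS
            SET NOCOUNT ON
            -- original TRUNCATE / INSERT ... SELECT logic goes here
        GO

        -- Re-apply whatever permissions were granted on the old procedure
        GRANT EXECUTE ON dbo.usp_RefreshTables TO SomeRole
        GO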

  • What error is reported when it fails?


    * Noel

  • Thanks for all the replies. The job was failing with "unable to extend log" because the disk was filling up. I think I have solved my problem, though. I added a BACKUP LOG dbname WITH TRUNCATE_ONLY (the database is in simple mode) and then a DBCC SHRINKFILE to shrink the log as the first two steps of the job. It finally ran successfully when scheduled, so I think that will solve the problem.
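    The two new steps look roughly like this (MyDatabase and MyDatabase_Log are placeholder names for the database and its log file):

        -- Step 1: discard the inactive portion of the log without backing it up
        BACKUP LOG MyDatabase WITH TRUNCATE_ONLY

        -- Step 2: shrink the physical log file back down, here to about 1GB (target size is in MB)
        USE MyDatabase
        DBCC SHRINKFILE (MyDatabase_Log, 1000)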

  • The database is in simple recovery!?! ... I hope your DR requirements are very relaxed.

     


    * Noel

  • The database is in simple recovery mode for two reasons. The first is that this is a UAT environment, not production. The second is that the database's tables are refreshed daily with the data from another database, so if we lost data we could just rerun the job to refresh the tables.
