Transaction log full when rebuilding an index

  • gorachandindia 30156 (8/10/2011)


OK, great: so you know exactly which index is fragmented.

Right now I can clarify: what I mean by rebuilding the index in chunks is one index at a time. 🙂

But that one index was exactly the big problem 🙂
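A rough sketch of the "one index at a time" idea discussed here (all object names are placeholders, and the 30% threshold and the choice of REORGANIZE are assumptions; a real job would also build a unique backup filename per iteration):

```sql
-- Sketch only: walk the fragmented indexes one by one, reorganizing each
-- and taking a log backup in between so the transaction log stays small.
DECLARE @sql nvarchar(max);

DECLARE idx_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
         + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
         + N' REORGANIZE;'
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
    JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    JOIN sys.objects o ON o.object_id = i.object_id
    JOIN sys.schemas s ON s.schema_id = o.schema_id
    WHERE ps.avg_fragmentation_in_percent > 30   -- assumed threshold
      AND ps.index_id > 0;                       -- skip heaps

OPEN idx_cur;
FETCH NEXT FROM idx_cur INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC (@sql);   -- one index at a time; REORGANIZE can be stopped safely
    -- take a transaction log backup here (unique filename per iteration)
    FETCH NEXT FROM idx_cur INTO @sql;
END
CLOSE idx_cur;
DEALLOCATE idx_cur;
```

REORGANIZE (rather than REBUILD) fits this thread because it works in small fully-logged steps, so frequent log backups between indexes keep the log from filling.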

  • Hey Ninja's_RGR'us,

    As promised, I would reply after our testing.

I've tested the index (exported the table and index) on my desktop here.

    And it worked well!

Went from 83.82% to 0.01% fragmentation.

Tested stopping and restarting too, and that doesn't cause any problems; very good news before starting it in production!

Now I have to wait until the cleanup of that table is done (they found an error in the rows and are removing them, about 200 million records, bit by bit...); afterwards I'll start the defrag 🙂

    I've made 4 jobs already:

1- Start ALL automatic (enables the hourly fragmentation-logging job, starts the reorganize job, starts the backup job)

2- Fragmentation-logging job

3- Reorganize job

4- Automatic backups job (runs the fragmentation-logging job one last time, then disables it)

So I only have to start job 1, and when it's done, everything is stopped.

And if needed, I only have to stop job 3.

Once it runs well I can schedule it and have healthy indexes 🙂

    I wish to thank you (and the others) for all the great info!
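The fragmentation-logging job (job 2) could be sketched roughly like this; the dbo.FragLog table is a made-up name, not from this thread:

```sql
-- Placeholder logging table for hourly fragmentation snapshots
IF OBJECT_ID(N'dbo.FragLog') IS NULL
    CREATE TABLE dbo.FragLog (
        log_time   datetime2 NOT NULL,
        object_id  int       NOT NULL,
        index_id   int       NOT NULL,
        frag_pct   float     NOT NULL,
        page_count bigint    NOT NULL
    );

-- Snapshot current fragmentation for every index in the database
INSERT INTO dbo.FragLog (log_time, object_id, index_id, frag_pct, page_count)
SELECT SYSDATETIME(), ps.object_id, ps.index_id,
       ps.avg_fragmentation_in_percent, ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
WHERE ps.index_id > 0;   -- skip heaps
```

Logging the snapshots over time is what lets you verify numbers like the 83.82% to 0.01% drop mentioned above.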

  • Happy to help, thanks for the feedback!

I've started using the script and check jobs in production.

But today I received an error with the log backups...

It seems the extra log backup (every minute while reorganizing the index) came into conflict with the normal log backups (every 30 minutes):

    Found in SQL server log:

    Message

    BACKUP failed to complete the command BACKUP LOG db_messagent. Check the backup application log for detailed messages.

    Message

    Error: 3041, Severity: 16, State: 1.

    Message

    BackupDiskFile::CreateMedia: Backup device 'E:\Microsoft SQL Server\Backup\dbname_backup_201109081100.trn' failed to create. Operating system error 32(error not found).

    Message

    Error: 18204, Severity: 16, State: 1.

This stopped the automatic log backups while the index was reorganizing!

If I hadn't followed up on this, there could have been a big problem with the log file...

How can I avoid this problem?

Yesterday I didn't have this kind of problem...
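A sketch of one fix: build the backup filename at run time with a timestamp down to the second, so the minute-level job and the 30-minute job can never produce the same name (the path and database name are copied from the error message above, purely as examples):

```sql
-- Sketch: timestamp the log backup filename down to the second
DECLARE @file nvarchar(260) =
    N'E:\Microsoft SQL Server\Backup\dbname_backup_'
    + REPLACE(REPLACE(REPLACE(
          CONVERT(nvarchar(19), SYSDATETIME(), 120),  -- yyyy-mm-dd hh:mi:ss
          N'-', N''), N' ', N'_'), N':', N'')         -- -> yyyymmdd_hhmiss
    + N'.trn';

BACKUP LOG db_messagent TO DISK = @file;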

  • I've created an extra step in the job...

So when the logging step fails, I restart that step...

But this isn't a good solution 🙁

  • Error 32: The process cannot access the file because it is being used by another process.

My guess: the anti-virus exclusions aren't set correctly. Also, write log backups to separate files; don't append.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
I see... the backup filename in this job is the same as the one from the backup job.

So if I change the log backup filename in this script, the problem can't occur anymore?

My guess is that you have 2 log backups with exactly the same name and the 2nd one failed (I've never had that error, so this is only a guess to start looking).

You might consider creating 2 schedules in your job: one every 30 minutes and the other every 1-2 minutes. Then activate/deactivate the schedules based on what's going on.

    I've never fully automated this but in theory it would work.

It's indeed the same filename for the log backup... I'll change this in the script.

I've already made an extra job step: when the log backup fails, the next step mails me and restarts the log backups (next step = previous step), so this avoids unplanned stops 🙂

  • Rhox (9/8/2011)


It's indeed the same filename for the log backup... I'll change this in the script.

I've already made an extra job step: when the log backup fails, the next step mails me and restarts the log backups (next step = previous step), so this avoids unplanned stops 🙂

    Sorry, I missed the 2nd page of posts!

IIRC I was telling you to run a check before starting the job; this is exactly why I had that in mind.

Also, in my script I was going all the way down to seconds in my naming convention. That would help, but I think the only real solution is to check for file existence first or to use a try/catch.

Or use 2 schedules and set them active/inactive whenever the job starts and finishes.
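The "check for file existence first or do a try/catch" idea could be sketched like this (xp_fileexist is undocumented but long-standing; the path is just the example from the error above):

```sql
DECLARE @file nvarchar(260) =
    N'E:\Microsoft SQL Server\Backup\dbname_backup_201109081100.trn';
DECLARE @exists int;

-- Returns 1 if the file already exists, 0 otherwise
EXEC master.dbo.xp_fileexist @file, @exists OUTPUT;

IF @exists = 0
BEGIN
    BEGIN TRY
        BACKUP LOG db_messagent TO DISK = @file;
    END TRY
    BEGIN CATCH
        -- mail / log the error instead of killing the whole job
        PRINT ERROR_MESSAGE();
    END CATCH
END
```

The TRY/CATCH keeps a transient collision from stopping the backup job entirely, which was the failure mode described earlier in the thread.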

  • Don't append log backups to a single file. It's hard to see what's there and damage to the file could result in losing all your backups.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (9/8/2011)


    Don't append log backups to a single file. It's hard to see what's there and damage to the file could result in losing all your backups.

Where do you get "append" from in this thread? I think he simply used the same code to generate the filename, which is the root of the problem.

It could also just flat-out overwrite the "previous" backup and completely destroy the point-in-time restore option.

  • Ninja's_RGR'us (9/8/2011)


    GilaMonster (9/8/2011)


    Don't append log backups to a single file. It's hard to see what's there and damage to the file could result in losing all your backups.

Where do you get "append" from in this thread? I think he simply used the same code to generate the filename, which is the root of the problem.

I see... the backup filename in this job is the same as the one from the backup job.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
No, I don't append 🙂

I'll add a file-existence check...
