Transaction log full when rebuilding index

  • That was something I thought of too; ensure you CYA and tell the powers that be that, in order to do the maintenance, you may expose them to more data loss if something should go awry and your database goes suspect. Just some food for thought.

  • Rhox (8/5/2011)


    I will manually change the recovery model to simple and manually rebuild the index at the best time (of the week).

    No. Don't switch to simple. It gives you nothing over bulk-logged and it increases the data loss risks.
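
    For example, a minimal sketch of that approach (the database, table, and index names are placeholders, not from this thread):

    -- Switch to bulk-logged just for the rebuild, then back to full.
    ALTER DATABASE [YourDB] SET RECOVERY BULK_LOGGED;
    GO
    USE [YourDB];
    GO
    ALTER INDEX [IX_YourIndex] ON dbo.YourTable REBUILD;
    GO
    ALTER DATABASE [YourDB] SET RECOVERY FULL;
    GO
    -- Take a log backup straight away: a log backup that contains
    -- bulk-logged operations cannot be restored to a point in time.
    BACKUP LOG [YourDB] TO DISK = 'Q:\YourDB_PostRebuild.trn';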

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Rhox (8/5/2011)


    crashdan (8/5/2011)


    You are up against a wall on this. You will need more storage to accomplish the task, which will require an outage to move the log onto it.

    also: No downtime at all? Never? Not even Sunday at 3 AM?

    They don't want any downtime; it's an important news database...

    And I know that more disk space will solve the problem, but that's a no-go at this moment 🙁

    How badly is it fragmented? This could be an issue, but it's not like you'll get a 500% boost in speed by doing this. AFAIK the max is around 43%-ish.

  • GilaMonster (8/5/2011)


    Ninja's_RGR'us (8/5/2011)


    Here I do tlog backups every 15 minutes, and the reindex job used to run well over that until I updated it. I checked the log history and, at the 15 minute mark during the reindex job, I have a very big tlog backup taken. So I don't think this is the issue here.

    Yes, you will, because the used portion of the log is large. However that log backup will not mark the log as reusable because there is an open transaction. The log can't be marked reusable past the beginning of the oldest open transaction.
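
    A quick way to see what's holding the log, as a sketch (the database name is a placeholder):

    -- What is currently preventing log truncation?
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'YourDB';

    -- Oldest open transaction in the current database.
    DBCC OPENTRAN;

    -- Log size and percent used, per database.
    DBCC SQLPERF(LOGSPACE);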

    I wasn't clear about that. I meant that the whole batch of index work was split into individual transactions rather than just one big blob.

    GilaMonster (8/5/2011)


    6.6M pages ≈ 50 GB. So any way you put it, 40 GB is not enough room to run this (especially if you take the backups and put them on the same drive as the logs themselves... which is a big no-no anyway). BTW, a conservative estimate for a rebuild would be 1.25 to 1.5x, so assume 75 GB to rebuild.

    I'd say 100GB, the log has to reserve space for rollback as well, and that can be the same size as the roll-forward space usage.

    Thanks for the tip.

    Any merit to the reorg idea? You know those internals better than anyone else here... and I was just theorizing.
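
    To sanity-check the arithmetic above: 6.6M pages x 8 KB/page ≈ 52.8 GB. Here's a sketch of pulling the page count straight from the DMV (the table name is a placeholder):

    -- Pages are 8 KB each, so page_count * 8 / 1024 / 1024 gives GB.
    SELECT index_id,
           page_count,
           page_count * 8.0 / 1024 / 1024 AS approx_size_gb
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED');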

  • GilaMonster (8/5/2011)


    Rhox (8/5/2011)


    I will manually change the recovery model to simple and manually rebuild the index at the best time (of the week).

    No. Don't switch to simple. It gives you nothing over bulk-logged and it increases the data loss risks.

    Fragmentation is 81%

    When I came here, there was one rebuild-index maintenance plan (it runs once a week) that always ended in error.

    That's why I implemented the script to maintain the indexes.
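
    For reference, a sketch of how a figure like that 81% is usually measured (the table name is a placeholder):

    -- LIMITED mode avoids scanning leaf pages on a big table and still
    -- returns avg_fragmentation_in_percent.
    SELECT i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id = ips.index_id;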

  • Rhox (8/5/2011)


    GilaMonster (8/5/2011)


    Rhox (8/5/2011)


    I will manually change the recovery model to simple and manually rebuild the index at the best time (of the week).

    No. Don't switch to simple. It gives you nothing over bulk-logged and it increases the data loss risks.

    Fragmentation is 81%

    Yeah, hard to ignore.

    Maybe something that could drive that point home would be to restore on a test server, run the rebuild, and compare the perf between the two servers.

    Maybe if the managers see a 30-50% speed increase and a 10-50% drop in reads, they would see the point in extending the SAN or adding a drive or whatnot.
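
    One way to capture those numbers on the test box, as a sketch (the query is just a stand-in for a representative piece of the workload):

    -- Run the same query on the fragmented and the rebuilt copy,
    -- then compare "logical reads" and elapsed time in the output.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(*)               -- placeholder query
    FROM dbo.YourTable
    WHERE SomeDateColumn >= '20110101';

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;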

  • Hey Rhox, I'd love to do a few tests with your particular case. Can you send me the table's definition (DDL to recreate it) along with the indexes, so that I can build a model to scale?

    TIA.

  • Ninja's_RGR'us (8/5/2011)


    Hey Rhox, I'd love to do a few tests with your particular case. Can you send me the table's definition (DDL to recreate it) along with the indexes, so that I can build a model to scale?

    TIA.

    Yes, I would like to send it, but it's the weekend now and I can't connect to the network at the moment, so I'll have it ready on Monday if that works for you 😉

  • Rhox (8/5/2011)


    Ninja's_RGR'us (8/5/2011)


    Hey Rhox, I'd love to do a few tests with your particular case. Can you send me the table's definition (DDL to recreate it) along with the indexes, so that I can build a model to scale?

    TIA.

    Yes, I would like to send it, but it's the weekend now and I can't connect to the network at the moment, so I'll have it ready on Monday if that works for you 😉

    It's alright, I'm running tests now. I've got a 6 GB table with one extra 400 MB index on it. The clustered index is 99% fragmented.

    I'm now trying to see if I can reorg and take log backups at the same time. I created the log file at 4 GB with no autogrow. So hopefully my scale is roughly 10:1 to your system and close enough to your case to <dis>prove my theory.

  • Theory proven.

    6.1 GB clustered index 99% fragmented.

    4 GB log file, autogrow = off

    ALTER INDEX ... REORGANIZE

    DB in FULL recovery (might test bulk-logged later, but not today)

    CHECKPOINT on every pass, just to make sure dirty pages get written as often as possible (might be overkill)

    generate a new backup file name based on the current minute

    back up the log

    WAITFOR DELAY of 1 minute

    runs as long as the reorg is running... and auto-stops

    Each log backup file has over 100 MB of data in it. Fragmentation is slowly going down while the data stays accessible (slower, but available).
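
    The reorg side of the test is a single statement, something like this (a sketch against my test table):

    -- Runs as many small transactions, so the concurrent log backups
    -- can keep marking log space as reusable.
    ALTER INDEX ALL ON dbo.BigAssTbl REORGANIZE;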

    Here's the backup script I use:

    SET IMPLICIT_TRANSACTIONS OFF
    GO

    IF @@TRANCOUNT > 0
        ROLLBACK
    GO

    DECLARE @Today DATETIME
    DECLARE @BackupTime DATETIME
    DECLARE @BkName AS VARCHAR(250)
    DECLARE @Cmd NVARCHAR(1000)

    -- Loop for as long as the reorg statement is still running.
    WHILE EXISTS (
        SELECT [Spid] = session_Id
             , ecid
             , [Database] = DB_NAME(sp.dbid)
             , [User] = nt_username
             , [Status] = er.status
             , [Wait] = wait_type
             -- Offsets are in bytes and SUBSTRING is 1-based, hence /2 + 1.
             , [Individual Query] = SUBSTRING(qt.text,
                   er.statement_start_offset / 2 + 1,
                   (CASE WHEN er.statement_end_offset = -1
                         THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
                         ELSE er.statement_end_offset
                    END - er.statement_start_offset) / 2 + 1)
             , [Parent Query] = qt.text
             , Program = program_name
             , Hostname
             , nt_domain
             , start_time
        FROM sys.dm_exec_requests er
        INNER JOIN sys.sysprocesses sp ON er.session_id = sp.spid
        CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS qt
        WHERE session_Id > 50            -- Ignore system spids.
          AND session_Id NOT IN (@@SPID) -- Ignore this current statement.
          AND qt.text LIKE '%ALTER INDEX %dbo.BigAssTbl%'
        -- ORDER BY 1, 2
    )
    BEGIN
        -- Flush dirty pages, then take a minute-stamped log backup.
        CHECKPOINT

        SET @Today = DATEADD(D, 0, DATEDIFF(D, 0, GETDATE()))
        SET @BackupTime = DATEADD(n, DATEDIFF(n, @Today, GETDATE()), @Today)
        SET @BkName = '\Test_Reorg-TransactionLogs ' + CONVERT(VARCHAR(25), @BackupTime, 120)
        SET @BkName = 'Q:\Backups SQL' + REPLACE(LEFT(@BkName, LEN(@BkName) - 3), ':', 'H') + '.bak'

        BACKUP LOG [Test_Reorg] TO DISK = @BkName
            WITH NOFORMAT, INIT, SKIP, NOREWIND, NOUNLOAD, STATS = 100, CHECKSUM

        WAITFOR DELAY '00:01:00'
    END

    The only thing that's left to do is check to make sure that the backup job is not already running.

    What I would actually do is put that backup code in a job, then use sp_start_job instead of the full script. It's easy enough to check whether the job is already running and not restart it. That way you could reuse the current backup-log script and make that code even simpler (only one piece of code to change if the backups need to change location or whatnot).

    I'll let you hit F1 or google for that job status code!
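
    For what it's worth, one common pattern for that check, as a sketch (the job name is a placeholder, and sysjobactivity can hold stale rows after an Agent restart, so treat this as a starting point):

    -- Start the log-backup job only if it is not already running.
    IF NOT EXISTS (
        SELECT 1
        FROM msdb.dbo.sysjobactivity AS ja
        JOIN msdb.dbo.sysjobs AS j
            ON j.job_id = ja.job_id
        WHERE j.name = 'LogBackup - Test_Reorg'       -- placeholder name
          AND ja.start_execution_date IS NOT NULL
          AND ja.stop_execution_date IS NULL          -- started, not yet finished
    )
        EXEC msdb.dbo.sp_start_job @job_name = 'LogBackup - Test_Reorg';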

    Have a great WE.

  • Ninja's_RGR'us (8/5/2011)


    I'm now trying to see if I can reorg and take log backups at the same time.

    Sure you can, and reorg runs as lots of little transactions, hence each log backup marks log space as reusable.

    Reorg, however, is not indicated for high levels of fragmentation; rebuild instead in that case. Oh, and reorg is fully logged in all recovery models, so don't bother testing in other recovery models.
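
    As a sketch of the commonly quoted Books Online guideline (reorganize between roughly 5% and 30% fragmentation, rebuild above 30%; the table name is a placeholder):

    DECLARE @frag FLOAT;

    -- index_id 1 = the clustered index.
    SELECT @frag = avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID('dbo.YourTable'), 1, NULL, 'LIMITED');

    IF @frag >= 30
        ALTER INDEX ALL ON dbo.YourTable REBUILD;
    ELSE IF @frag >= 5
        ALTER INDEX ALL ON dbo.YourTable REORGANIZE;
    -- Below 5%, it's generally not worth touching.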

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (8/5/2011)


    Ninja's_RGR'us (8/5/2011)


    I'm now trying to see if I can reorg and take log backups at the same time.

    Reorg, however, is not indicated for high levels of fragmentation; rebuild instead in that case. Oh, and reorg is fully logged in all recovery models, so don't bother testing in other recovery models.

    Thanks for saving me that test.

    Absolutely agree on the rebuild part... except when you're out of room :-).

    At least this way the fragmentation will be gone... even if it takes 3 days to run. It should be much, much less, but I don't know his system.

  • Ninja's_RGR'us (8/5/2011)


    Absolutely agree on the rebuild part... except when you're out of room :-).

    Bulk-logged recovery model (unless there really are zero-data-loss requirements, no exceptions, which the OP has not come back on).

    At least this way the fragmentation will be gone...

    Not necessarily all...

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (8/5/2011)


    Ninja's_RGR'us (8/5/2011)


    Absolutely agree on the rebuild part... except when you're out of room :-).

    Bulk-logged recovery model (unless there really are zero-data-loss requirements, no exceptions, which the OP has not come back on).

    At least this way the fragmentation will be gone...

    Not necessarily all...

    Agreed, but I went from 99% to 29% and it's still running.

    I would personally take that any day and I think most people would.

    At least this way he gets 50 to 99% of the perf boost he could have hoped for.

  • Just a final report. It took roughly 1 hour for the reorg to run. Fragmentation is now under 0.4%.

    Like Gail said, not gone, but pretty darn close.

    L8rs.
