Truncate logfile on a specific time

  • Hi All,

Is it possible to truncate a logfile at a specific time?

    For example, I want to truncate all transactions from one month ago and older.

  • A bit confused as to what you want.

A) Are you saying that you want to truncate a table and its records that are older than x?

    B) Or do you want to shrink a transaction log file?

If A, then why would you want to do this? It's quite simple to do BUT it is destructive.

    If B, then again it's easy to do, but the log will just grow again, and the more this happens the more you may fragment your disk, ultimately leading to I/O slowdown.

    Let us know.

    Adam Zacks-------------------------------------------Be Nice, Or Leave

  • Hi Adam,

    I mean something like B.

I want to shrink the logfile so that I always keep only 4 weeks of logfile. Can I?

  • Is the database that uses this logfile in FULL or SIMPLE recovery mode?

If it's in FULL mode, are you backing it up regularly?
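You can check with something like this (MYDB is just a placeholder name):

```sql
-- Check which recovery model the database is in (MYDB is a placeholder)
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'MYDB';
```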

If I understand you right, you want to truncate a database logfile every 4 weeks. The proper way to control logfile sizes is through backups.

    Backing up a database writes a checkpoint in the log; if you then back up the log, it will allow the log to be shrunk:

    USE [MYDB]

    GO

    DBCC SHRINKFILE (N'MYDB_log' , 1)

    GO

Once this is done, the logfile (.ldf) will be 1 MB, but unless the log has been backed up or the database is in SIMPLE mode, this action will not affect the file size.
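To sketch the whole sequence in one go (the backup path here is a placeholder, adjust to suit):

```sql
USE [MYDB];
GO
-- Back up the log first so the active portion can be reclaimed
BACKUP LOG [MYDB] TO DISK = N'D:\Backups\MYDB_log.trn';
GO
-- Then shrink the physical log file down towards 1 MB
DBCC SHRINKFILE (N'MYDB_log', 1);
GO
```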

Can you give some more information on your situation and what you are trying to achieve (i.e. what recovery model the database is in, size of files, database use, backup schedule, etc.)?

    Sorry to go on, but this stuff is important for giving you the right advice.

    Adam Zacks-------------------------------------------Be Nice, Or Leave

  • Adam,

    Thanks for your quick response.

    My database is in 'FULL' recovery mode and I know this command.

Sorry, I don't think I explained what I want exactly. I don't want to truncate every 4 weeks.

    If possible, I would like the logfile to keep the last four weeks of checkpoints and drop older transactions.

    Does that make any sense?

    Thanks in advance

    Ali

If you are backing up your database logfile, at that point the data is essentially wiped and reclaimed as empty/free space (i.e. there is no data). At that point all committed transactions have been hardened and the log is emptied.

    So once again we come full circle; are you backing up your database logs?

    Adam Zacks-------------------------------------------Be Nice, Or Leave

I do a full backup every night and a transaction log backup every hour, and store them on tape.

    When I shrink/truncate the logfile and then do a full backup, I lose my chain, am I right?

    OK, now I want to decrease the logfile size ==>> I should shrink/truncate.

    But I would like to have a 4-week chain.

    Ali

  • No, it doesn't make sense. Adam has provided good guidance.

    I cannot think of a reason why you would want to perform such actions.

If by chain you mean actual tape backups, that's down to your retention policies. As far as SQL is concerned, once you go through a db and log backup cycle, the log data is gone. It's zeroed because SQL assumes that, as you have backed it up, it is no longer needed.

    And it is right to do so.
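The chain lives in the backups, not in the live log: a point-in-time restore replays the last full backup plus each log backup since. Roughly like this (file names and the STOPAT time are placeholders):

```sql
-- Restore the last full backup without recovering yet
RESTORE DATABASE [MYDB] FROM DISK = N'D:\Backups\MYDB_full.bak' WITH NORECOVERY;
-- Replay each hourly log backup in sequence
RESTORE LOG [MYDB] FROM DISK = N'D:\Backups\MYDB_log_01.trn' WITH NORECOVERY;
RESTORE LOG [MYDB] FROM DISK = N'D:\Backups\MYDB_log_02.trn' WITH NORECOVERY;
-- Bring the database online, optionally stopping at a point in time
RESTORE LOG [MYDB] FROM DISK = N'D:\Backups\MYDB_log_03.trn'
    WITH RECOVERY, STOPAT = '2010-04-13 10:00:00';
GO
```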

In the end I think you are worrying for nothing. Your backup schedule seems adequate; maintain a healthy log size with the script I gave you (obviously adjust the sizing to maintain a healthy log) and chill.

    Oh, and Steve, thanks for the defence, but I think this is just confusion due to language (hopefully) 🙂

    Adam Zacks-------------------------------------------Be Nice, Or Leave

  • Adam,

That's the point: "It's zeroed because SQL assumes that, as you have backed it up, it is no longer needed."

    Thanks for your time, and sorry for my difficulty in describing my problem.

    and thanks Steve for your comment.

    Regards

    Ali

You're welcome. Hope it's clearer now.

    Adam Zacks-------------------------------------------Be Nice, Or Leave

  • Example database name: 'Cheerios'

    Cheerios_log: this is the physical file name of the log file

    Basic idea

    change the db recovery mode to simple

    clear log

    change the db recovery mode back to full

    DBCCShrink.sql - file path - C:\Shrink\DBCCShrink.sql

    ---------------------------------------------------

USE Cheerios; -- (I guess you could use the master db here...)

    GO

    ALTER DATABASE Cheerios SET RECOVERY SIMPLE;

    GO

    DBCC SHRINKFILE (Cheerios_log, 1);

    GO

    ALTER DATABASE Cheerios SET RECOVERY FULL;

    GO

    ---------------------------------------------------------------------

A .bat file is scheduled on the server to run every weekend, after the databases are backed up:

    osql -S <servername> -U <username> -P <password> -i "C:\Shrink\DBCCShrink.sql"

This clears the log file, reduces it to 1 MB, and frees the space for the next week.

You can create a job to run this SQL at a specified time.
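As an alternative to the .bat file, the same step could be wrapped in a SQL Agent job, sketched roughly as follows (the job name, schedule, and time are all made up):

```sql
-- Hypothetical SQL Agent job doing the same shrink (all names/times illustrative)
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'WeeklyLogShrink';
EXEC dbo.sp_add_jobstep
    @job_name = N'WeeklyLogShrink',
    @step_name = N'Shrink Cheerios log',
    @subsystem = N'TSQL',
    @database_name = N'Cheerios',
    @command = N'DBCC SHRINKFILE (Cheerios_log, 1);';
-- freq_type 8 = weekly, freq_interval 1 = Sunday
EXEC dbo.sp_add_jobschedule
    @job_name = N'WeeklyLogShrink',
    @name = N'Every Sunday 2am',
    @freq_type = 8,
    @freq_interval = 1,
    @freq_recurrence_factor = 1,
    @active_start_time = 020000;
EXEC dbo.sp_add_jobserver @job_name = N'WeeklyLogShrink';
GO
```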

Not the best of solutions, but it works.

    regards,

    ash

  • abemby (4/13/2010)



This is not the point. Firstly, I have already given virtually identical advice, and secondly, the initial question here was whether you could keep data in a log following a backup, and the answer was no.

    What you have suggested is not advisable at all! The only reason for doing this would be if you were about to run out of space completely and HAD to shrink the log.

To start with, your script pays no attention to the fact that you would lose the log data, so a backup of the db and log is advised first. However, in such a situation it is likely that a db or log backup would fail, so essentially you'd be left with SIMPLE --> SHRINK --> FULL (your script as above), or to put it another way, DESTROY LOG DATA.

I really hope you don't do this in a PROD environment often.
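A safer ordering, sketched with the same placeholder names as the script above, would take the backups first and then restart the chain afterwards:

```sql
-- Take a full and a log backup BEFORE breaking the chain (paths are placeholders)
BACKUP DATABASE [Cheerios] TO DISK = N'D:\Backups\Cheerios_full.bak';
GO
BACKUP LOG [Cheerios] TO DISK = N'D:\Backups\Cheerios_log.trn';
GO
-- Only then, if space is truly critical, switch to SIMPLE and shrink
ALTER DATABASE Cheerios SET RECOVERY SIMPLE;
GO
DBCC SHRINKFILE (Cheerios_log, 1);
GO
ALTER DATABASE Cheerios SET RECOVERY FULL;
GO
-- A new full backup is needed afterwards to restart the log chain
BACKUP DATABASE [Cheerios] TO DISK = N'D:\Backups\Cheerios_full_after.bak';
GO
```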

    Adam Zacks-------------------------------------------Be Nice, Or Leave

  • Hi Adam

I did mention that this is done after the database is backed up... I completely agree with you... maybe I should have highlighted it in bold.

    ... >> A .bat file is scheduled on the server to run every weekend, after the database is backed up ...

    To add here - there is no activity in between the backup and shrinking - this ensures that you have a backup of the latest dataset to start the new log...

Also, due to space constraints, as you have mentioned, destroying is sometimes necessary.

    Bigger and better hardware infrastructure might not need this - but it's not as bad as you have pointed out.

    And finally, not to take the credit from you 🙂 - I posted it to give the full cycle of the job.

    cheers

  • Fair enough.

You should be aware, though, that having a logfile shrink, grow, shrink, grow like this will eventually see it fragmented all over your disk, leading to long I/Os.
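One way to avoid that shrink/grow cycle is to pre-size the log once at a value it normally needs and then leave it alone; a hypothetical sketch (the 4 GB figure and names are illustrative only):

```sql
-- Fix the log at a steady size so it doesn't repeatedly autogrow and fragment
ALTER DATABASE Cheerios
MODIFY FILE (NAME = Cheerios_log, SIZE = 4096MB, FILEGROWTH = 512MB);
GO
```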

    Adam Zacks-------------------------------------------Be Nice, Or Leave
