database and transaction logs on same drive?

  • Why is it not recommended to have the database and transaction logs on the same drive? What problems can it cause?

  • The main reason usually given is that data files are accessed randomly whereas log files are written to sequentially. To optimise performance it is best to keep log files on their own drive.

  • There are 2 main reasons that it is not recommended. The first reason is that each time you modify data, the server has to write the modification to both the log file and the data file. If both files reside on the same drive then they will "compete" with each other for the disk, and data modification statements will take longer.

    The second reason is that each file has a different access pattern. The data file is accessed randomly (each request may touch a different record in a different table, and each record and table is located at a different place in the file). The log file, on the other hand, is accessed sequentially rather than randomly. Also, in most situations the log is much more write intensive than the data file in terms of the ratio of read operations to write operations. Because of the difference in the way the server uses the log file and the data file, a different RAID level is suitable for each of them. If you put the data and log files on the same drive, then chances are that one of them sits on a RAID level that will cause performance degradation for it.
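
    As a rough sketch of what that separation looks like at CREATE DATABASE time (the database name and the D: and L: paths below are only placeholders for whatever volumes you actually have):

        CREATE DATABASE Sales
        ON PRIMARY
            ( NAME = Sales_data,
              FILENAME = 'D:\SQLData\Sales_data.mdf' )   -- data file on its own volume
        LOG ON
            ( NAME = Sales_log,
              FILENAME = 'L:\SQLLogs\Sales_log.ldf' );   -- log file on a separate, write-friendly volume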

    Adi

    --------------------------------------------------------------
    To know how to ask questions and increase the chances of getting answers:
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

    For better answers on performance questions, click on the following...
    http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

  • If you are dealing with a small db with low activity and/or are using it in a dev/test environment, it is *OK* to put both on the same drive. The previous comments about performance degradation, of course, always apply.

    If, on the other hand, it is a large db with high activity and/or in a production environment, do put your data and logs on separate physical devices. If the files are on a SAN, consult your SAN tech to make sure the 2 drives are not striped across the same physical device, or else it will be as though you had put the files on the same drive letter.
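
    A quick way to see which drive letter each data and log file actually lives on is a query along these lines (whether those letters map to separate physical devices is still a question for your SAN tech):

        SELECT DB_NAME(database_id) AS database_name,
               type_desc,        -- ROWS (data) or LOG
               physical_name     -- full path, including the drive letter
        FROM sys.master_files
        ORDER BY database_name, type_desc;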

    __________________________________________________________________________________
    SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
    Persisting SQL Server Index-Usage Statistics with MERGE
    Turbocharge Your Database Maintenance With Service Broker: Part 2

  • We put our log files on the E drive which is physically on the server, and our data files go on the SAN. Keeps them nice and separate.

  • bumey309 (9/1/2008)


    Why is it not recommended to have the database and transaction logs on the same drive? What problems can it cause?

    The biggest reason is simply performance. The more read head sets/spindles you can get involved, the faster you'll be able to read and write data. The MDF and LDF files make a good logical split for read head sets/spindles.
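
    One rough way to see whether the data and log files really are fighting over the same read heads is to compare the cumulative I/O stalls per file (a sketch; the figures are totals since the instance last started, so read them with that in mind):

        SELECT DB_NAME(vfs.database_id) AS database_name,
               mf.type_desc,
               mf.physical_name,
               vfs.io_stall_read_ms,
               vfs.io_stall_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
          ON mf.database_id = vfs.database_id
         AND mf.file_id = vfs.file_id
        ORDER BY vfs.io_stall_write_ms DESC;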

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • I'm surprised that the most obvious reason has not been given:

    If your data file disk fails, you can recover to the point of failure using the last full backup of the data plus the logs (or, if you are also doing periodic log backups, the last full backup + the log backups + the final log on disk that has not yet been backed up). However, if your logs were on the same disk as the data files, chances are you would have lost that last log as well.
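
    As a rough sketch of that recovery sequence (the database name and backup paths are placeholders), assuming the log drive survived the data drive failure:

        -- Back up the tail of the log from the surviving log drive:
        BACKUP LOG Sales
        TO DISK = 'L:\Backup\Sales_tail.trn'
        WITH NO_TRUNCATE, NORECOVERY;

        -- Then restore the last full backup, any log backups, and finally the tail:
        RESTORE DATABASE Sales FROM DISK = 'B:\Backup\Sales_full.bak' WITH NORECOVERY;
        RESTORE LOG Sales FROM DISK = 'B:\Backup\Sales_log1.trn' WITH NORECOVERY;
        RESTORE LOG Sales FROM DISK = 'L:\Backup\Sales_tail.trn' WITH RECOVERY;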

    All this has changed a bit with the introduction of more complex storage arrangements such as RAID, which increase the resilience of each "device" - but the basic premise still applies (i.e. that whole SAN can go pop - they do, you know - and even if the data is somehow fully recoverable off it, will it be recoverable in your time scale or the SAN engineer's SLA time scale?).

    From a performance perspective this arrangement also means you can put the heavily written-to log file on very fast striped arrays that are not very resilient, with the data on slower, more resilient storage such as RAID 5/10.

  • Yeaup... but if you lose the disk where the log file is, you're still going to lose data regardless of backups.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (9/2/2008)


    Yeaup... but if you lose the disk where the log file is, you're still going to lose data regardless of backups.

    I don't really agree - surely if the log disk fails the data will still be in the actual MDF files on the data disk - about all you should lose is the last open transaction when the log fails.

  • James Horsley (9/3/2008)


    Jeff Moden (9/2/2008)


    Yeaup... but if you lose the disk where the log file is, you're still going to lose data regardless of backups.

    I don't really agree - surely if the log disk fails the data will still be in the actual MDF files on the data disk - about all you should lose is the last open transaction when the log fails.

    Keep in mind that page data is only forcibly written to disk at (instance) checkpoint time; other writes are performed by the lazy writer process.

    Log records, on the other hand, are written directly to the log file on disk.

    So what can you lose if you lose the log file disk? It depends on the latency of your lazy writer process and the system checkpoint frequency.
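
    For illustration, something along these lines either forces that gap closed or keeps it small (the 1-minute recovery interval is just an example value; shortening it trades extra checkpoint I/O for a smaller window of unflushed pages):

        -- Force dirty data pages in the buffer pool out to the data file right now:
        CHECKPOINT;

        -- Or shorten the automatic checkpoint target (a server-wide setting, in minutes):
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'recovery interval (min)', 1;
        RECONFIGURE;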

    Johan

    Learn to play, play to learn !

    Dont drive faster than your guardian angel can fly ...
    but keeping both feet on the ground wont get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

    Need a bit of Powershell? How about this

    Who am I ? Sometimes this is me but most of the time this is me

  • James Horsley (9/2/2008)


    (i.e. that whole SAN can go pop - they do, you know

    A well designed and configured SAN should be capable of losing a fabric and still be able to provide a path through to the back-end storage.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉
