Log file data transfer amount (MB) vs data file transfer amount (MB)

  • In the full recovery model, if I run a transaction that inserts 10 MB of data into a table, then 10 MB of data is moved in the data file. Does this mean the log file will grow by exactly 10 MB as well?

    I understand that all transactions are logged to the log file to enable rollback and point-in-time recovery, but what is actually physically stored in the log file for this transaction's record? Is it the text of the command from the transaction, or the actual physical data from that transaction?

    I ask because, say I have two drives, one with a 5 MB/s write speed for the log file and one with a 10 MB/s write speed for the data file: if I start trying to insert 10 MB of data per second into the table, am I going to be limited to 5 MB/s by the log file drive, or is SQL Server not going to try to log all 10 MB each second to the log file?

  • william.rees.howells (3/19/2014)


    In the full recovery model, if I run a transaction that inserts 10 MB of data into a table, then 10 MB of data is moved in the data file. Does this mean the log file will grow by exactly 10 MB as well?

    No, it will quite likely be significantly more.
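
    If you want to see this for yourself, here's a minimal sketch (the table dbo.LogTest and the row sizes are hypothetical, just for illustration) that measures how many log bytes a transaction generates, using sys.dm_tran_database_transactions:

    -- Hypothetical test table; varchar(max) payload so rows can be large
    CREATE TABLE dbo.LogTest (Payload varchar(max));

    BEGIN TRANSACTION;

    -- Insert roughly 10 MB (10 rows of ~1 MB each)
    INSERT INTO dbo.LogTest (Payload)
    SELECT REPLICATE(CAST('x' AS varchar(max)), 1000000)
    FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS v(n);

    -- Log bytes generated so far by this session's open transaction
    SELECT dt.database_transaction_log_bytes_used
    FROM sys.dm_tran_database_transactions AS dt
    INNER JOIN sys.dm_tran_session_transactions AS st
        ON st.transaction_id = dt.transaction_id
    WHERE st.session_id = @@SPID
      AND dt.database_id = DB_ID();

    ROLLBACK;  -- note: rolling back generates further log records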

    william.rees.howells (3/19/2014)

    I understand that all transactions are logged to the log file to enable rollback and point-in-time recovery, but what is actually physically stored in the log file for this transaction's record? Is it the text of the command from the transaction, or the actual physical data from that transaction?

    It's a proprietary file format which isn't published. It stores the effects of user commands at a very low level (not the SQL text itself) and includes records of engine changes and page allocations/splits, among other things. This allows the engine to replay the log precisely when restoring, ensuring the data files come out identical.
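
    If you're curious what those records look like, the undocumented fn_dblog function will show them; a sketch (unsupported, for exploration on a test system only):

    -- NULL, NULL = from the start to the end of the active log
    SELECT TOP (100)
        [Current LSN],
        Operation,            -- e.g. LOP_INSERT_ROWS, LOP_MODIFY_ROW
        Context,              -- e.g. LCX_HEAP, LCX_PFS (allocation pages)
        AllocUnitName,
        [Log Record Length]
    FROM fn_dblog(NULL, NULL)
    ORDER BY [Current LSN];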

    william.rees.howells (3/19/2014)

    I ask because, say I have two drives, one with a 5 MB/s write speed for the log file and one with a 10 MB/s write speed for the data file: if I start trying to insert 10 MB of data per second into the table, am I going to be limited to 5 MB/s by the log file drive, or is SQL Server not going to try to log all 10 MB each second to the log file?

    You will be limited by the log drive's speed if you manage to reach that physical limit, so you are better off having the log file on the faster-writing drive. Data page modifications are made in memory first and are written out "later on" by checkpoints and the lazy writer, but a transaction cannot complete until its log records are written to the log. Simply because a change has been committed doesn't mean the data page is on disk. This is why the transaction log is so important.
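
    A quick way to check whether the log drive is actually the bottleneck (a rough sketch, not a full diagnosis) is to look at the WRITELOG wait type, which accumulates while sessions wait for their log records to be flushed to disk:

    -- Counters are cumulative since instance startup;
    -- compare two snapshots over an interval to get a rate.
    SELECT wait_type,
        waiting_tasks_count,
        wait_time_ms,
        wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'WRITELOG';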

  • OK, so if we take my example again where I want to write 10 MB to a table every second, the drive that my log file resides on would have to have a write speed matching or beating 10 MB/s?

    In regard to checkpoints and lazy writes, could I then potentially have a data drive with a write speed slower than 10 MB/s, as long as 10 MB/s isn't being written to the table continually and there are breaks in between for the lazy writer to catch up?

  • william.rees.howells (3/19/2014)


    OK, so if we take my example again where I want to write 10 MB to a table every second, the drive that my log file resides on would have to have a write speed matching or beating 10 MB/s?

    In regard to checkpoints and lazy writes, could I then potentially have a data drive with a write speed slower than 10 MB/s, as long as 10 MB/s isn't being written to the table continually and there are breaks in between for the lazy writer to catch up?

    I think it's extremely unlikely you'll see 10 MB/s in many places. That's 600 MB per minute, or somewhere over 800 GB of inserts/updates/deletes per day. As DBAs we tend to focus on IOPS rather than MB/s.

    You are correct that you would need a faster log drive to keep up with the writes; the data drive could catch up later, but that catching up would impact your reads as well.

    Generally speaking, the transaction log and tempdb need the fastest and most resilient drives.
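
    To put numbers behind the IOPS point, here's a sketch that estimates average write latency per file from sys.dm_io_virtual_file_stats (what counts as acceptable latency is workload-dependent):

    SELECT DB_NAME(vfs.database_id) AS database_name,
        mf.type_desc,
        mf.physical_name,
        vfs.num_of_writes,
        vfs.io_stall_write_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    INNER JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
        AND mf.file_id = vfs.file_id
    ORDER BY avg_write_latency_ms DESC;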

  • Thanks for the answer. I was just using 10 MB/s as an example; our database wouldn't even begin to come close to that.

    In terms of what you were saying about slowing the reads down, would the optimal scenario then be to have both the data drive and the log file drive with equal read and write speeds?

  • william.rees.howells (3/19/2014)

    In terms of what you were saying about slowing the reads down, would the optimal scenario then be to have both the data drive and the log file drive with equal read and write speeds?

    It's a "depends"; there's no "optimal" solution, only what fits the business needs and the use of the database. Some systems are highly write-intensive with limited reads; others are the complete opposite. You would need to assess the workload.
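
    One way to assess that balance (a sketch along the same lines as the queries in the next post) is to compare bytes read against bytes written per file:

    SELECT DB_NAME(vfs.database_id) AS database_name,
        mf.type_desc,
        vfs.num_of_bytes_read,
        vfs.num_of_bytes_written,
        CAST(vfs.num_of_bytes_read AS decimal(18, 2))
            / NULLIF(vfs.num_of_bytes_written, 0) AS read_write_ratio
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    INNER JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
        AND mf.file_id = vfs.file_id
    ORDER BY database_name, mf.type_desc;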

  • Take a look at what your app's doing.

    I checked one of my apps: for mostly small DML in tiny transactions, I can see log files with many times the data written (2x to 10x) compared with the corresponding data files. For large DML, the amounts written tend to be more similar. Here are the queries I used:

    -- Per-file write totals, with file type and recovery model
    SELECT sd.name, sdivfs.num_of_writes, sdivfs.num_of_bytes_written,
        smf.type_desc, smf.physical_name, sd.recovery_model_desc
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) sdivfs
    INNER JOIN sys.master_files smf
        ON smf.database_id = sdivfs.database_id
        AND smf.file_id = sdivfs.file_id
    INNER JOIN sys.databases sd
        ON sd.database_id = smf.database_id
    ORDER BY sd.name, smf.type_desc, smf.file_id;

    -- Aggregated per database and file type (ROWS vs LOG)
    SELECT sd.name, SUM(sdivfs.num_of_writes) AS num_of_writes,
        SUM(sdivfs.num_of_bytes_written) AS num_of_bytes_written,
        smf.type_desc, MAX(sd.recovery_model_desc) AS recovery_model_desc
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) sdivfs
    INNER JOIN sys.master_files smf
        ON smf.database_id = sdivfs.database_id
        AND smf.file_id = sdivfs.file_id
    INNER JOIN sys.databases sd
        ON sd.database_id = smf.database_id
    GROUP BY sd.name, smf.type_desc
    ORDER BY sd.name, smf.type_desc;
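
    Bear in mind that sys.dm_io_virtual_file_stats is cumulative since the instance last started, so to profile a specific workload, snapshot these numbers before and after the workload runs and diff them.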

