December 9, 2004 at 11:15 am
Is there any value in having more than one file for a transaction log? Does the size of the database matter? Does the number of processors matter? How are the logs processed and backed up?
What do you think?
Steve
December 9, 2004 at 1:01 pm
1. No, IMHO no real value.
2. No, not really.
3. No, why should it?
4. This is covered in detail in BOL. Please read that first and ask when you don't understand something there.
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
December 9, 2004 at 1:09 pm
Well we had a report from a Microsoft Conference that the number of files for transaction logs should match the number of processors on the server, thus preventing I/O contention when a processor writes the log.
Perhaps this was bad information?
Steve
December 9, 2004 at 6:43 pm
1) Only if they are already on a separate drive/drive array from your data files, and they themselves are on separate drives/drive arrays from each other. They will use a simple striping method which can use both files at once, increasing I/O throughput, but the gain may not be significant depending on your hardware.
2) No, the number and size of transactions are all that matter here, and a database used primarily for reads will not have many logged transactions. How big is the file getting over the course of, say, a week or a month? If only a few megs, then I doubt you will see any improvement.
3) Maybe a little, but the number of controllers available to handle writes to the disks is more important. And even though an array may have multiple drives, which means multiple controllers, you already have more complex striping going on depending on the RAID implementation, unless you are mirroring.
4) The files themselves are serial streams, but breaking the stream in two will allow simultaneous writing to each file for logging. Backups are done as a single unit.
The overall opinion is that more likely than not you are not going to see a significant improvement in performance, especially if the current transaction log is on the same drive/array as the data files.
Suggest you read the following in BOL:
"Transaction Log Architecture"
"Transaction Log Logical Architecture"
"Transaction Log Physical Architecture"
"Virtual Log Files"
"Using Files and Filegroups"
"Transaction Log Backups"
"Optimizing Transaction Log Performance"
"Optimizing Backup and Restore Performance"
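For anyone who wants to poke at this themselves, here is a rough T-SQL sketch of adding a second physical log file and then looking at the virtual log files (VLFs) inside each one. The database name, logical file names, paths, and sizes are all hypothetical, and DBCC LOGINFO is an undocumented command, so treat this as an exploration aid only:

```sql
-- Hypothetical example: add a second physical log file on another drive.
-- Names, paths, and sizes are illustrative, not from this thread.
ALTER DATABASE Sales
ADD LOG FILE
(
    NAME = Sales_Log2,
    FILENAME = 'E:\SQLLogs\Sales_Log2.ldf',
    SIZE = 500MB,
    FILEGROWTH = 100MB
);

-- Undocumented: list the virtual log files (VLFs) in each physical log file.
-- One row per VLF; FileId shows which physical file each VLF lives in.
DBCC LOGINFO ('Sales');
```

Running DBCC LOGINFO before and after shows that SQL Server simply appends the new file's VLFs to the one logical log; the log is still written sequentially through them.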
December 10, 2004 at 9:34 am
Sorta related... is there any value in having different data files on the same disk? I am inclined to think there is no value, but I am finding a lot of vendors set up databases this way.
December 10, 2004 at 3:19 pm
If you are after speed, try adding solid-state disk just for your transaction logs. In a benchmark about 5 years back, the results on high-volume insert/update/delete were astounding (a 300% increase in throughput over standard disk). However, it is a bit costly.
Regards
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."
December 10, 2004 at 4:23 pm
Minimal at best in almost all situations. However, it can impact file fragmentation on the hard drive. And then you might just want it that way for backup and restore purposes. And finally, there were behavior problems in SQL where you could not shrink a file down when it was mixed with text columns, even if you deleted a lot of the data rows.
The big thing is backup and restore purposes. For example, on large databases I like to place support tables (those which rarely change) on a separate filegroup. Then I don't have to back up that portion of my DB as often, which reduces backup time. In addition, if someone makes a mistake in one of those tables, or they become corrupt in some way, I can simply restore the last good copy and move forward in most cases; and because this is just the support tables, it means a faster restore than doing the whole DB. You just have to remember to follow the same kind of rules as the master DB, in that you back it up whenever the data is changed or you apply a service pack/hotfix.
Another way is to leave only the system tables on the primary filegroup and move all other objects off to a new file. This means that should a system table become corrupt, you can restore just the primary filegroup, and as long as you took a backup after the last schema change or login change, you can be back up quickly without loss to other data.
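A minimal T-SQL sketch of the support-table filegroup idea described above. The database name, file names, paths, and the sample table are all hypothetical; the point is just the shape of the commands:

```sql
-- Hypothetical: create a filegroup for rarely-changing support tables.
ALTER DATABASE Sales ADD FILEGROUP SupportTables;

ALTER DATABASE Sales
ADD FILE
(
    NAME = Sales_Support,
    FILENAME = 'D:\SQLData\Sales_Support.ndf',
    SIZE = 200MB
)
TO FILEGROUP SupportTables;

-- Place a support table on that filegroup (illustrative table).
CREATE TABLE dbo.CountryCodes
(
    Code char(2) NOT NULL PRIMARY KEY,
    Name varchar(100) NOT NULL
) ON SupportTables;

-- Back up just that filegroup on its own (less frequent) schedule.
BACKUP DATABASE Sales
    FILEGROUP = 'SupportTables'
    TO DISK = 'E:\Backups\Sales_SupportTables.bak';
```

A filegroup backup like this can also be restored on its own, which is what makes the "restore just the support tables" scenario above possible.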
But as far as performance goes, I have seen a few occasions where we would see a difference, but not enough to make you run out and tell the world it is better than sliced bread. When you can split to different drives/drive arrays, though, you can see contention go down between, say, the log file and the DB, and get better throughput with indexes on a different drive from the main data (but the goal is to get the log onto its own drive first).
December 14, 2004 at 11:34 am
Back to the original comment...
What if the database is receiving 300 committed transactions per second? What if it's 60K per second?
People with a large volume of transactions believe that many physical files will prevent I/O bottlenecks by providing another path to the disk.
The discussion here seems to say that none of that matters and many physical log files are treated as one logical file.
Can there be more than one Log_writer process?
Thanks for the information.
Steve
December 14, 2004 at 12:19 pm
Read here on Performance Tuning.
http://www.microsoft.com/technet/prodtechnol/bosi/maintain/optimize/bostune.mspx
Every bit of documentation I have read from MS so far states that the log file is handled via sequential I/O. So no matter how many log files you have, the data will be placed one item at a time into the log files and, more importantly, into the virtual logs in each file. The only way you should see a major advantage to file splitting is with the data files, as they are processed through random I/O. But since sequential I/O is the way log files work, if you place the log files on a separate drive from the data you will see a drastic increase in performance, as long as the drives are not on the same controller (mostly an IDE issue) or the array is on its own channel.
Also look through here; there is a question on this which says splitting the file on the same disk will not have an impact on performance.
http://www.microsoft.com/technet/community/chats/trans/sql/sql0513.mspx
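For the "put the log on its own drive" tuning step mentioned above, the era-appropriate (SQL Server 2000) way to relocate a user database's log file is detach, move the file at the OS level, and reattach. Database name and paths here are hypothetical:

```sql
-- Hypothetical sketch: move the log to a dedicated drive.
-- 1. Detach the database (no connections may be active).
EXEC sp_detach_db 'Sales';

-- 2. At the OS level, move Sales_Log.ldf from its old location
--    to the dedicated log drive, e.g. L:\SQLLogs\.

-- 3. Reattach, pointing at the primary data file and the new log path.
EXEC sp_attach_db 'Sales',
    'D:\SQLData\Sales.mdf',
    'L:\SQLLogs\Sales_Log.ldf';
```

Take a full backup immediately before and after the move; a failed attach leaves the database unavailable until the files are put back.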
December 14, 2004 at 12:51 pm
"The discussion here seems to say that none of that matters and many physical log files are treated as one logical file."
Yes, SQL Server internally treats it as one virtual log file.
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
December 14, 2004 at 12:54 pm
I guess that was simpler.
December 14, 2004 at 1:18 pm
Why bother with something sophisticated when a good thing can be easy?
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/