February 2, 2015 at 8:29 am
Database File Placement Layout?
We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes (including full-text indexes) – involve a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple database log files. In a SQL Server 2012 class I took last summer, under “Determining File Placement and Number of Files”, it states “Use a single log file in most situations as log files are written sequentially.”
Thanks in advance.
February 2, 2015 at 8:42 am
Also, are there any benefit to having multiple OLTP Database Log files
I'm also interested to hear this, but we have had a lot of success with a multiple file TempDB. You should consider that in your design.
February 2, 2015 at 8:58 am
Performance-wise, as you already assumed, there is no benefit to having multiple transaction log files. Best practice, from my experience and from what I have read in the past, is to have the data files and log files on separate drives/disks.
February 2, 2015 at 10:49 pm
HookSqlDba7 (2/2/2015)
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes - (including full text indexes) - involves a lot of writing at the same time the data files are being read.
Also, are there any benefit to having multiple OLTP Database Log files? Because SQL Server writes to the log file sequentially, I do not see any advantages to having multiple database log files. In a SQL Server 2012 Class I took last summer, under “Determining File Placement and Number of Files”, it states “Use a single log file in most situations as log files are written sequentially.”
Thanks in advance.
What about backups?
--Jeff Moden
Change is inevitable... Change for the better is not.
February 3, 2015 at 8:23 am
HookSqlDba7 (2/2/2015)
1 TB Log File.
Wow
HookSqlDba7 (2/2/2015)
I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes - (including full text indexes) - involves a lot of writing at the same time the data files are being read.
Data files, backups, and indexes should all really go on RAID 10; for data files and indexes you need fast writes as well as reads.
T-log files and tempdb files can reside on RAID 1. Do not have more than one log file per database, and separate the tempdb data files and log file from the user databases.
HookSqlDba7 (2/2/2015)
Also, are there any benefit to having multiple OLTP Database Log files? Because SQL Server writes to the log file sequentially, I do not see any advantages to having multiple database log files. In a SQL Server 2012 Class I took last summer, under “Determining File Placement and Number of Files”, it states “Use a single log file in most situations as log files are written sequentially.”
Thanks in advance.
No benefit to having multiple log files
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
February 3, 2015 at 8:42 am
SSDs are relatively cheap today, so why not include them in the analysis? They perform excellently as well.
Igor Micev
My blog: www.igormicev.com
February 3, 2015 at 10:42 am
Performance-wise, as you already assumed, there is absolutely no benefit to having multiple log files.
Chalapathi Pagidakula
SQL-DBA
March 11, 2015 at 5:46 pm
Thanks for all the responses. To clarify the statement, "We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File": I am finding out more. Our database will initially be about 510 GB, and the log file will not get that huge because we will be doing log backups. We will also be using filegroups.
Now, for our user database, I am trying to get a feel for how many data files to use and what size (initial and growth and max) to assign to each data file.
For a database of this size, would having 10 data files defined as follows be reasonable?
Initial Size: 50 GB
Growth Rate: 5 GB (sounds like a lot; I will have to read about instant file initialization)
Max Size: 100 GB
I know I am probably not giving you all the information you need, but I am just looking for a reasonable number of data files and sizes. I can tell you we have a powerful server (a huge amount of RAM and many CPUs), and we are using the IBM XIV Storage System.
As far as tempdb goes, the initial size would be about 30 GB. I have read the information/articles on splitting the tempdb data files into equal sizes per core (up to a maximum of about eight data files). If you have any suggestions on what would be a reasonable size for the tempdb data files, please let me know. (I could divide 30 GB across 8 data files and use about 4 GB per data file.)
Any information/suggestions are appreciated.
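The equal-size scheme described above could look something like the following. This is only a hedged illustration: the logical file names, drive letter, and growth increments are assumptions, not recommendations for any particular storage layout.

```sql
-- Resize the existing tempdb data file and add equally sized files
-- (8 files of ~4 GB each, per the 30 GB / 8 estimate above).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
-- ...repeat for tempdev3 through tempdev8 with identical SIZE and
-- FILEGROWTH, so the proportional-fill algorithm spreads allocations
-- evenly across the files.
```

Keeping every file the same size is the important part; if one file is larger, SQL Server will favor it and much of the allocation-contention benefit is lost.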
March 11, 2015 at 6:29 pm
HookSqlDba7 (3/11/2015)
Thanks for all the responses. To clarify the statement, "We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File": I am finding out more. Our database will initially be about 510 GB, and the log file will not get that huge because we will be doing log backups. We will also be using filegroups.
Now, for our user database, I am trying to get a feel for how many data files to use and what size (initial and growth and max) to assign to each data file.
For a database of this size, would having 10 data files defined as follows be reasonable?
Initial Size: 50 GBs
Growth Rate: 5 GBs (sounds like a lot, I will have to read about Instant Initialization)
Max Size: 100 GBs
I know I am probably not giving you all the information you need but I am just looking for some reasonable number of data files and sizes. I can tell you we have a powerful server (huge amount of RAM and CPUs) and we are using the IBM XIV Storage System.
As far as Temp DB, the initial size would be about 30 GBs. I have read the information/articles on splitting the Temp DB data files into equal sizes per core (up to a maximum of about 8 data files.) If you have any suggestions on what would be a reasonable size for the Temp DB data files, please let me know. (I could divide 30 GBs by 8 data files and use about 4 GBs per data file.)
Any information/suggestions are appreciated.
I wouldn't just automatically make a herd of data files and filegroups. And I don't mean this to sound nasty, but the written word doesn't carry a friendly/helpful tone here.
It's time to sit down and do some intelligent planning (that sounds nasty, and I mean it just as an adjective rather than as a description of any particular shortcoming). For example, you should plan now to properly partition any possible "worm" tables (audit tables, invoice/invoice-detail tables, etc., with 1 filegroup per month and 1 file per filegroup, for example), a possibly different partitioning scheme for tables more in the OLTP realm, filegroup(s) that contain only static lookup data, a plan for archives (usually, but not always, involving some form of partitioning), and... you get the idea.
Now, the key to all of that is actually the "Disaster Recovery Plan" and the "Restore Plan" (which should be a part of the "Disaster Recovery Plan"). Obviously, the PRIMARY filegroup will have to be brought back online in any DR plan, but beyond that you should plan what is immediately important to bring back online to "get back in business" as quickly as possible. Bringing 500 GB of audit-table information back shouldn't be a priority there; you should only (for example) bring back the last month or two, and you'll likewise only want the last month or two of customer transactions. You should also plan for "Online Piecemeal Restores" in case just one of the filegroups goes south on disk.
Remember that the final layer of protection isn't all this magical "always online" stuff. If all your hardware fails, restores from backups are the final savior. If you properly partition your nasty large tables (the "worm" tables will become the largest) that contain old data that will NEVER change, you can do some tricks: do a final reindex of those partitions/filegroups/files while squeezing every last ounce of free space out of them, set them to READ_ONLY so that you never have to do any further maintenance on them, back them up one final time (right after they become read-only), and then never have to mess with them again. (I only have to back up 1/64th {the current month} of a 370 GB table each night, for example, and I can restore any month of data in just a couple of minutes instead of the hours it would take to restore the whole table.)
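The read-only filegroup trick described above might be sketched roughly as follows; the database, table, filegroup, and path names here are all hypothetical.

```sql
-- Final reindex of the now-historical data, then freeze the filegroup
-- so it never needs maintenance again.
ALTER INDEX ALL ON dbo.AuditDetail REBUILD;

ALTER DATABASE MyDb MODIFY FILEGROUP Audit_2014_12 READ_ONLY;

-- One last backup of the frozen filegroup...
BACKUP DATABASE MyDb FILEGROUP = 'Audit_2014_12'
    TO DISK = 'X:\Backups\MyDb_Audit_2014_12.bak';

-- ...after which routine backups only need the read/write filegroups.
BACKUP DATABASE MyDb READ_WRITE_FILEGROUPS
    TO DISK = 'X:\Backups\MyDb_rw.bak';
```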
Don't throw away this opportunity to get it all right the first time. Take a little extra time to think about all the things I've said. Life could be really easy.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 11, 2015 at 7:24 pm
A big and interesting system. You should make the most of the Enterprise features of SQL Server. For example:
- Partitioning is something you should definitely include. As Jeff describes, it lets you work with small portions of a table instead of the whole table unnecessarily, and maintenance benefits a lot from it.
- In-Memory OLTP is worth evaluating for busy transactional tables. It's a great feature.
- Compression of read-intensive indexes could save you a lot of space in RAM.
I have found the delayed durability feature to be very good on the SQL Server 2014 instances I'm working on, so that could also be planned for. Additionally, SQL Server 2014 offers backup encryption as an excellent new feature.
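For reference, the two SQL Server 2014 features just mentioned are each a single statement to enable; the database and certificate names below are hypothetical.

```sql
-- Delayed durability: lower log-write latency in exchange for a small
-- risk of losing the most recent log records on a crash.
ALTER DATABASE MyDb SET DELAYED_DURABILITY = ALLOWED;  -- or FORCED

-- Encrypted (and compressed) backup, new in SQL Server 2014; requires
-- a server certificate such as MyBackupCert to already exist.
BACKUP DATABASE MyDb
    TO DISK = 'X:\Backups\MyDb.bak'
    WITH COMPRESSION,
         ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = MyBackupCert);
```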
It is worth spending more time on analysis and planning for your system.
I'd like to see other proposals of this interesting thread.
Igor Micev
My blog: www.igormicev.com
March 13, 2015 at 1:42 pm
RonKyle (2/2/2015)
Also, are there any benefit to having multiple OLTP Database Log files
I'm also interested to hear this, but we have had a lot of success with a multiple file TempDB. You should consider that in your design.
Log files are written sequentially, so having multiple log files won't really help. Multiple tempdb data files are recommended, but you should have only one log file per database, as only one log file can be written to at a time.
Paul Randal did a great write-up on multiple log files: http://www.sqlskills.com/blogs/paul/multiple-log-files-and-why-theyre-bad/
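As a quick sanity check against this advice, a query like the following (using the standard sys.master_files catalog view) lists any databases on an instance that already carry more than one log file:

```sql
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*)             AS log_file_count
FROM sys.master_files
WHERE type_desc = 'LOG'
GROUP BY database_id
HAVING COUNT(*) > 1;
```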