March 12, 2011 at 1:27 pm
I cannot thank Craig Farrell, Gail Shaw, Lutz, and Ninja enough for their mentoring and advice as I have been preparing to deploy this new system over the past few weeks. There were others too - forgive me for not being able to recall everyone for the shout out.
So here it is. I've been waiting 8 weeks to finally get my hands on the server (which is colo-hosted outside of town). Via VPN and Remote Desktop Connection, I reconfigured the RAID array to be a little more in line with a lot of the wisdom I have received out here from Craig. Now - I know I should have gone with RAID 1+0, but it just wasn't going to work in this current configuration. The server is a Dell R710 running Windows 2003 Server 64-bit along with SQL Server 2005 64-bit Standard Edition (SP3). I have 8x146GB 2.5" form-factor SAS drives in the internal array. The controller is a PERC 6/i. Initially I was going to go with RAID 0 for my OS and .LDFs, but ultimately decided against this (so my manager wouldn't be getting a phone call at 2am when he's enjoying a drink or three down at the bar and in no condition to be messing around with a down server). So - with that being said - here is the server's configuration:
RAID 1: Windows 2003 Server 64-bit, SQL Server 2005 64-bit Standard Edition, Web Server software (IIS and other services to be added). NTFS format - this is the C:\DRIVE.
RAID 1: NTFS formatted drive...nothing but .LDFs under a directory called \LDF. This is the D:\DRIVE.
RAID 0: NTFS formatted drive...1 directory called PRIMARY. Only file in that directory is tempdb.mdf. This is the F:\DRIVE.
RAID 5: NTFS formatted drive...Multiple directories as follows: \PRIMARY
\ROFG1
\FG1
\FG2
\FG3
\FG4
\FG5
\C_INDEX
\NC_INDEX
This is the G:\DRIVE.
So here is my 3-part question, and Craig - forgive me if this seems like a repeated conversation, but now that I have everything configured, I just want to make sure I put best practices to work and assemble the best possible configuration (given the other issues I can see coming from the way things have been built to date, e.g. a database that was never normalized, issues in the C# code, etc.).
1. What is the best way for me to set up the system db's? By this I mean: what should the starting sizes be for master, msdb, model, and tempdb? They are at their default sizes from when the colo-host installed everything, and I have moved them from the C:\DRIVE to the directories called PRIMARY (tempdb in the lone F:\DRIVE, and master, model, and msdb - along with the other 2 db's whose .mdf file names escape me at the moment - in the G:\DRIVE's PRIMARY directory). I know this may be hard to speculate on since you guys are not familiar with our system, but let's just say there is a rule of thumb for a deployment with 5 user-created db's where the system will probably not grow beyond 250GB over the next 5 years - what would I want to set the system db's to? I've been told to turn auto-growth off for tempdb, and others have said to use percentages for db growth as opposed to megabyte specs. I've also been told to size tempdb right up to the ceiling of the partition its .mdf and .ldf are on, but this seems risky to me for the other .ldf's sharing the drive where tempdb's .ldf is going to reside.
2. This system is going to have 5 databases to it. I am splitting up the tables, clustered indexes and non-clustered indexes across the board into the various FG# directories you see (each directory for the FG's and the ROFG, C_INDEX, NC_INDEX represents a Filegroup that will house those specific objects). There is only one Read-Only Filegroup (called ROFG1) for static tables for my main production db out of the 5. My question here is actually specific again to setting up the thresholds / sizes for the db's upon creating them. Do I give them small settings and then specify max settings in my DDL scripts for the data and logs? All 5 db's will be created from T-SQL DDL. I will initially create them in SSMS first, and then run the various pieces of the DDL for that db to put objects in place (tables, logins, FK's, some INSERTS of static data, DML Triggers, etc...).
3. Lastly - memory pools. With this being a 64-bit system, I want to get it right. Given that the OS drive is mostly for the OS and SQL (and a web server, which I wasn't able to get them to consider installing elsewhere), I need to make sure a page file is created if necessary. The server has 8GB of RAM. The main C drive is also where my .bak's will go for the backups (again - I was going to put them out on the array, but Craig really saved my bacon there by telling me, no no no no! Bad plan! Sit! hehe - nod to Gail's upcoming topic for 24 Hours of PASS). So, with the databases all starting out small, and again - no plans to exceed the maximum array storage threshold over the next 5 years, what should I do about a paging file for the server? Let Windows manage it on the fly? Set up a static page file? None at all? Keep in mind that I need about 250MB at most to start for my backup file, and that is a very liberal estimate. Then - with regard to the SQL memory pools, what is the best way for me to set them up properly? I know that with a 64-bit system I can exceed the 2GB/3GB limits of the 32-bit environments, but how do I do this right?
I cannot thank this community enough for all of its support, and non-judgmental love of a newbie. It is such a pleasure to become a DBA. I value all of your input, and cannot thank you all enough for your support.
Please let me know if I may provide more specific details here (e.g. creation scripts, other details, etc...).
Have a great weekend! I have to have this completed by Friday this coming week, and I am headed off for full-time employment with a SQL MVP! I'll share more about that later on.
Thank you all again in advance!
🙂
Rich
March 12, 2011 at 1:49 pm
Rich Yarger (3/12/2011)
By this I mean, what should the starting sizes be for master, msdb, model, and temp.
Master, model and msdb you can leave at their defaults. TempDB, not so much.
I know this may be hard to speculate on since you guys are not familiar with our system, but let's just say there is a rule of thumb for a deployment with 5 user-created db's where the system will probably not grow beyond 250GB over the next 5 years - what would I want to set the system db's to?
Errr... watermelon?
Seriously, the size of the user databases does not determine the size of TempDB. A tiny database can have queries that use lots of TempDB and vice versa.
Set the default size to something sensible (couple hundred MB to start), set the autogrow to a sensible fixed value (so that the time it will take to grow is consistent) and then monitor it. If you see TempDB never getting larger than a couple hundred MB then you have your size. If you see it reaching many GB, then either set it to that size or tune your queries.
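In practice that amounts to something like this - a minimal sketch only, using the SQL Server 2005 default logical file names (tempdev and templog; verify yours with SELECT name FROM tempdb.sys.database_files) and placeholder sizes:

USE master;
GO
-- Presize tempdb and use a fixed-MB autogrow rather than a percentage
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 200MB, FILEGROWTH = 50MB);
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 100MB, FILEGROWTH = 50MB);
GO

TempDB is recreated at its configured size every time the SQL service restarts, so the setting sticks across restarts.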
I've been told to turn auto-growth off for tempdb, and others have said to use percentages for db growth as opposed to megabyte specs.
Ouch. Neither.
Do I give them small settings and then specify max settings in my DDL scripts for the data and logs? All 5 db's will be created from T-SQL DDL. I will initially create them in SSMS first, and then run the various pieces of the DDL for that db to put objects in place (tables, logins, FK's, some INSERTS of static data, DML Triggers, etc...).
Set them to a size you expect them to be in a few months. Only set max size if you really know that they will not get that big.
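For the user databases, that sizing can go straight into the CREATE DATABASE statement. A sketch under assumed names and sizes (the database name, file names, and figures are hypothetical; only the drive layout matches the earlier description - data on G:, logs on D:):

-- Explicit sizes and fixed-MB growth; no MAXSIZE clause, per the advice above
CREATE DATABASE MyProdDB
ON PRIMARY
    (NAME = MyProdDB_data, FILENAME = 'G:\PRIMARY\MyProdDB.mdf',
     SIZE = 500MB, FILEGROWTH = 100MB),
FILEGROUP FG1
    (NAME = MyProdDB_fg1, FILENAME = 'G:\FG1\MyProdDB_fg1.ndf',
     SIZE = 500MB, FILEGROWTH = 100MB)
LOG ON
    (NAME = MyProdDB_log, FILENAME = 'D:\LDF\MyProdDB_log.ldf',
     SIZE = 250MB, FILEGROWTH = 100MB);
GO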
The main C drive is also where my .bak's will go for the backups (again - I was going to put them out on the array, but Craig really saved my bacon there by telling me, no no no no! Bad plan! Sit! hehe - nod to Gail's upcoming topic for 24 Hours of PASS).
Not talking about those kind of plans.
I don't even like the backups on the C drive. Get them elsewhere (tape or offsite)
So, with the databases all starting out small, and again - no plans to exceed the maximum array storage threshold over the next 5 years, what should I do about a paging file for the server?
If SQL is using the page file, things have already gone wrong. SQL uses memory to avoid the cost of going to disk. Using disk that's pretending to be memory to avoid the cost of going to disk is as silly as it sounds
Then - with regard to the SQL memory pools, what is the best way for me to set them up properly?
Well the only thing you can set is min and max memory. Is this server dedicated to SQL? Is there only one instance?
Have a great weekend! I have to have this completed by Friday this coming week, and I am headed off for full-time employment with a SQL MVP! I'll share more about that later on.
Oooh. May I ask who?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 12, 2011 at 2:07 pm
Gail!!! 😀
Hello to you! Thank you so much for this advice and mentoring! Here are the particulars:
- tempdb: He's all by himself out on the RAID 0 drive, because - he's tempdb. If the drive crashes, then so be it - swap in a new one and be done with it. My thinking was to isolate him to a drive by himself. I wanted it to be a 15k drive, but alas - 10k was all I could get.
- User created db's: I knew this was going to sound like the newbie I am. Seriously - my developers (who need to be shot) making their factories all SELECT *, or something rather close to that (e.g. including columns not really needed for the query), was my reason for thinking about setting tempdb's thresholds high to start. I'm not going to be around on this assignment after this coming Friday to baseline/monitor the situation, and I guess I was thinking that more space was better than too little. I plan to include this fact in my documentation for the project, so they can adjust accordingly.
- "Ouch! Neither!": OK - so I will not turn off auto-growth, nor use percentages to grow the db's (system ones and user created). I am thinking that for the log the default growth of 10% is fine, and for the user created ones - 1mb at a time is also alright?
- DDL creation: Again - I'll just go with the defaults.
- BAD PLAN! SIT!: hehehe - I knew you were speaking on maintenance plans from what Caprice told me out on LinkedIn, but I thought it was kinda funny and semi-relevant. 😉
However - the only spot I have to put the backups is the C:\DRIVE, and they will be put onto tape - nightly, held in a vault for 2 weeks, and off-site for up to 21 days (a sketch of the backup command follows this list). I'm thinking this is the best scenario for the hardware I have to work with.
- Windows 2003 Virtual Memory: I wasn't sure what to do here. I'm guessing to just let Windows dynamically handle its paging-file needs? I agree that it shouldn't need to do anything with the hard-installed RAM, but I wasn't sure because there's so much bad info out there, from "don't even use a paging file" to "make it the same size as the amount of RAM installed". Ish - thus why I am lucky to have such a great forum like ours out here!
- SQL Memory Pools: Do I do the min and max at the end after I have my 5 User db's in place? Or do it now before I install/create those db's?
- SQL Server MVP: Jason Strate! He writes the blog called "Strate SQL". He and I ran into each other out on LinkedIn, after I created a local SQL Server Users group for my area of the country here in Minneapolis / Saint Paul. I was completely ignorant of PASS at the time (even though I had joined that group out on LinkedIn). So - I just accepted a full-time op with his company, and start on the 21st! Very excited! 😀
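For reference, a minimal sketch of the nightly full backup to C: mentioned above, before the tape sweep picks it up (the database name and target path here are hypothetical):

BACKUP DATABASE MyProdDB
TO DISK = 'C:\Backups\MyProdDB_Full.bak'
WITH INIT,      -- overwrite the previous night's file
     CHECKSUM;  -- verify page checksums while backing up
GO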
Gail, thanks so much! You are the total SME for me and so many others!
🙂
March 12, 2011 at 2:09 pm
Gail - P.S. The server is not completely dedicated to SQL. The WEB Server and services are also going to be on it.
:blink:
March 12, 2011 at 2:12 pm
P.P.S - only 1 (Default) instance.
🙂
March 12, 2011 at 2:52 pm
Rich Yarger (3/12/2011)
I guess I was thinking that more space was better than too little. I plan to include this fact in my documentation for the project, so they can adjust accordingly.
Absolutely.
I am thinking that for the log the default growth of 10% is fine, and for the user created ones - 1MB at a time is also alright?
If you want your DBs to limp along like a crippled sheep, yes. If you want them to run fine, not so much.
Set fixed growth sizes, not percentages. That goes for everything. Set the growth increments high enough that the files won't grow often and won't end up with thousands of VLFs, but low enough that a single growth won't take a week. Remember, log files cannot be instant-initialised.
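If you want to see how many VLFs a log already has, DBCC LOGINFO returns one row per VLF (undocumented but long-standing; the database name below is hypothetical):

USE MyProdDB;  -- hypothetical database name
GO
DBCC LOGINFO;  -- row count = number of VLFs in this database's log
GO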
I knew you were speaking on maintenance plans from what Caprice told me out on LinkedIn
Please inform Caprice (whoever he is) that the word "Maintenance" does not appear anywhere in my slide deck... 😀
Do I do the min and max at the end after I have my 5 User db's in place? Or do it now before I install/create those db's?
Doesn't matter. Same effect.
If there are other services on the server, then the max memory needs to be set low enough that there is memory available for them as well as for the OS. Not knowing how much IIS wants, it's hard to judge. Maybe 4GB for SQL. I wouldn't go higher than 6.5 on a dedicated SQL box, so....
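Capping it at the 4GB suggested here would look like this (max server memory is specified in MB, and it's an advanced option, so that has to be exposed first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- Leave the remaining ~4GB for the OS and IIS
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
GO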
- SQL Server MVP: Jason Strate! He writes the blog called "Strate SQL".
I know him. Tell him I said 'hi'
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 12, 2011 at 4:13 pm
Gail, thank you so very much! I will heed this wisdom, and apply it accordingly. I'll probably bug you again during the coming week with the ideas I have for the Filegroup backups.
P.S. Caprice is a gal - not a dude. And I misspoke - she didn't tell me that you were doing a maintenance plan discussion, but that was my assumption. I know, I know. See what happens when you "***"ume?!? 😛
I am signed up for your presentation, and cannot wait!
I'll tell Jason we spoke out here, and tell him hello for you. He is a cool dude, and I cannot wait to begin working with him!
Take care Gail,
Rich
March 12, 2011 at 4:39 pm
Just one last point
Rich Yarger (3/12/2011)
- tempdb: He's all by himself out on the RAID 0 drive, because - he's tempdb. If the drive crashes, then so be it - swap in a new one and be done with it.
Yes, if it crashes you can just swap the drive out, but SQL will be down until you do. No TempDB - no running SQL service. What are the downtime allowances for these DBs, and how long will it take to acquire and replace a drive?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 12, 2011 at 4:44 pm
Gail - great question, and one I was able to plan around because of an interview you gave not that long ago (on the Dos and Don'ts, with that guy from Red Gate), where you discussed the importance of knowing your SLA's. The window is 2 hours (regardless of the disaster), so for non-planned outages I decided it best to put him in a place where he would be fast, and easier to get back than another part of the array.
As for access to the hardware - I have submitted a proposal to the manager for 1 stand-by drive for each one in the array (8 total). That might be overkill, but when you only have 2 hours of downtime allowed, every minute counts!
See how much I have learned indirectly from you, without you even knowing it?!?
🙂
March 13, 2011 at 3:30 am
Rich Yarger (3/12/2011)
Gail - great question, and one I was able to plan around because of an interview you gave not that long ago (on the Dos and Don'ts, with that guy from Red Gate)...
Someone somewhere is crying. 😉 The Dos and Don'ts of corruption was a Quest webcast, not one from RedGate.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 13, 2011 at 11:29 am
Errr....Ummm...ponder-ponder-ponder
:blink:
😛
😀
March 14, 2011 at 4:56 pm
Oh my. I appear to have learned something the hard way, yet again. I was modifying my db creation scripts as follows...
Small table example:
/****** Object: Table [dbo].[AlertLog] Script Date: 03/14/2011 17:41:42 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[AlertLog](
[ID] [int] IDENTITY(1,1) NOT NULL,
[AlertID] [int] NOT NULL,
[TreaterID] [int] NOT NULL,
[DTRaised] [datetime] NOT NULL,
[Description] [nvarchar](255) NULL,
[Priority] [nvarchar](20) NULL,
CONSTRAINT [PK_AlertLog] PRIMARY KEY CLUSTERED
(
[ID] ASC,
[AlertID] ASC,
[TreaterID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [C_INDEX]
) ON [FG1]
GO
Notice how I have C_INDEX where [PRIMARY] used to be (thinking that would be the Filegroup my index would get stored in), and [FG1] for what I thought was the place for my FG1 Filegroup for the table itself. When I right-click the table in SSMS and look at the properties, it says the Filegroup is C_INDEX. Same thing when I script it out: the DDL shows the C_INDEX Filegroup, but FG1 is nowhere to be found.
Can I specify both of these (and other) Filegroups when I create my tables/indexes in the DDL, or have I made a rather bad assumption here? Is there a setting in SSMS that will show things the way I expected (C_INDEX for my CLUSTERED INDEX, and FG1 for my table)?
Thanks again to you all for the help.
March 14, 2011 at 5:17 pm
Gail as usual did her incredibly thorough job of hitting all the things I noticed, and of course, a few I didn't think of at first. 🙂
One thing she didn't comment on was your log growth planning. Hit up VLF on MSDN to figure out what they're talking about at a high level, and then check out this article on VLFs by Kimberly Tripp and her sizing reviews:
http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
Regarding your TEMPDB sizing: since you've dedicated the drive to it, I'd personally use it as RAID 1 so a drive can fail without taking your server with it, and grow the .mdf to ~50% of the drive with a 25%-sized .ldf, with instructions for monitoring to whomever is going to administrate it when you leave. That'll leave you 25% worth of growth room if necessary, as well as saving time later if it needs to autogrow. If it's still too small, THEN swap it to RAID 0 with the understanding of the risks.
Regarding filegroups, you're under a misapprehension. The clustered index IS the table... roughly. An index is a B-Tree with a leaf level; for the clustered index, the leaf level happens to be the table's actual data pages, ordered by the clustering key. The non-leaf levels of the index are comparatively lightweight. When you want to change a table's filegroup, you move its clustered index.
Assigning a separate filegroup on an index is only useful for non-clustered indexing.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
March 14, 2011 at 9:21 pm
Hi Craig!
Thank you so much for this input. I was definitely operating under a very bad, rookie misunderstanding of how to divide up my objects from DDL creation. I guess what I'm confused about is how I then get the non-clustered indexes into the Filegroup I have for them when I run the DDL creation script. From what I have found, the 2 places you can name a filegroup when creating a table wind up being the same Filegroup (hence the example I shared). If you can shed some light on that, while I check out the MSDN piece on VLFs and the article by Kimberly Tripp, that would be great.
As for tempdb - that was what I was going to do, but I ran out of drives. I only have 8 to work with - thus why he wound up on a RAID 0 drive. The understanding that if the drive goes bad it needs to be swapped and put back in place immediately is acceptable for this project (I explained as much when I presented the configuration), and they have decided to go with the spare drives I recommended to meet the SLA's for this particular system. Even though I won't be there after this Friday, I do hope at some point (as the success of the project goes forward) that they wind up with an external array that can be configured as RAID 1+0, and place him over there. That is the plan at least. Plus the benefits you and I have previously discussed about creating another read IO path for SQL. Wonderful planning for performance benefits later on! 🙂
Thanks again for this background, and I will now check out this reading you have recommended. It seems I will need to ditch the C_INDEX Filegroup, as it obviously is not going to work the way I had initially thought and intended.
March 14, 2011 at 9:34 pm
Rich Yarger (3/14/2011)
I guess what I'm confused about is how I then get the non-clustered indexes into the Filegroup I have for them when I run the DDL creation script.
You're on the right track. CREATE CLUSTERED INDEX ... ON [FG1] (that puts the table on FG1). CREATE NONCLUSTERED INDEX idx_sametable_nc ... ON [FG2] (that'll put the NC index on FG2 - really useful for key lookups or smaller data lookups).
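Spelled out against the AlertLog example from earlier in the thread (a sketch - the nonclustered index name and key column are illustrative, and the inline PK constraint form is equivalent to a separate CREATE CLUSTERED INDEX):

CREATE TABLE dbo.AlertLog (
    ID        int IDENTITY(1,1) NOT NULL,
    AlertID   int NOT NULL,
    TreaterID int NOT NULL,
    DTRaised  datetime NOT NULL,
    -- the clustered PK IS the table, so this places the table on FG1
    CONSTRAINT PK_AlertLog PRIMARY KEY CLUSTERED (ID, AlertID, TreaterID) ON [FG1]
) ON [FG1];
GO
-- only non-clustered indexes can usefully live on a different filegroup
CREATE NONCLUSTERED INDEX IX_AlertLog_DTRaised
    ON dbo.AlertLog (DTRaised)
    ON [NC_INDEX];
GO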
As for tempdb - that was what I was going to do, but I ran out of drives. I only have 8 to work with - thus why he wound up on a RAID 0 drive. The understanding is if the drive goes bad, then it needs to be swapped and put back in place immediately, is acceptable for this project (I explained this much as I presented the configuration), and they have decided to go with the backup drives I recommended for the SLA's we need to meet for this particular system.
Yep, but RAID 0 requires two drives, as does RAID 1. Instead of using the two for RAID 0, I'd recommend using them as RAID 1. Yes, you lose half the read/write speed and half the space, but it's better than a critical server failure. However, I can't make that call, and I have thrown caution to the wind with RAID 0 before. As a general practice, unless you know you need the throughput, protect the server's online integrity before protecting its speed.
Also, thanks for the kind words at the beginning of this post, and the company you've included me in. Things like that really make it worth hanging around here. :blush:
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA