October 5, 2009 at 5:15 pm
I need to create an NDF file that will hold all the index data. Right now there is only one MDF file.
I was interested in this post:
http://www.sqlservercentral.com/scripts/Miscellaneous/31649/
The section under 'ISSUE CREATE INDEX STATEMENTS WITH DROP EXISTING' is not working. I put a print command in there, but as far as I can observe it never emits the commands from that section. Where it says EXEC (@ISQL), I put PRINT (@ISQL), but that did not work.
Can anyone comment on the code and tell me if I am missing something?
Alternatively, if you have code for this, kindly share.
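For context, this is roughly the setup I am trying to reach. A minimal sketch, assuming a database named MyDB and an E: drive for the new file (all names here are placeholders, not from the script):

    -- Create a filegroup for the indexes, then add an NDF file on the other drive
    ALTER DATABASE MyDB ADD FILEGROUP IndexFG;

    ALTER DATABASE MyDB ADD FILE
    (
        NAME = MyDB_Indexes,                       -- logical file name
        FILENAME = 'E:\SQLData\MyDB_Indexes.ndf',  -- physical path on the separate drive
        SIZE = 500MB,
        FILEGROWTH = 100MB
    )
    TO FILEGROUP IndexFG;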
Thx
Dan
October 5, 2009 at 5:17 pm
The goal is to separate the index and data files onto separate drives, to address I/O performance issues. Right now there is only one data file and one log file.
October 5, 2009 at 9:00 pm
Typically you separate out heavily used indexes from the tables, not necessarily all indexes. Or you move data from a heavily accessed table.
That being said, are you not getting anything output? What about SELECT @isql?
I typically use print with no parens, but it seems to work here.
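Something like this is what I tried, just to rule the parentheses out (a trivial repro, not the script's actual code):

    DECLARE @isql varchar(8000);
    SET @isql = 'CREATE INDEX IX_Test ON dbo.MyTable (MyCol)';

    PRINT @isql;    -- plain print
    PRINT (@isql);  -- parenthesized, behaves the same
    SELECT @isql;   -- returns the string as a resultset instead

One caveat: PRINT output is capped at 8,000 bytes (4,000 characters for nvarchar), so a very long dynamic SQL string can come out truncated; SELECT does not have that problem.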
October 5, 2009 at 9:18 pm
Steve
I have read some recommendations to move all clustered indexes into the .ndf; others move all the non-clustered ones. I guess what you are saying is that, regardless of clustered or non-clustered, I should move the heaviest ones out into a separate file, right?
I will try SELECT @isql and let you know. I did not get anything for PRINT (@isql). Did you use PRINT (@isql) or PRINT @isql, and did it print all the CREATE INDEX statements for the indexes?
Thanks
Dan.
October 6, 2009 at 7:24 am
Sorry, I hadn't actually run that script; I didn't have time to dig through it last night before trying it. I've got a more testable instance now and I'll try it shortly.
I built some strings to be sure the parens weren't an issue, but that worked.
As far as the separation of indexes and data goes, it's about I/O. If you have large, heavily used indexes, then separating them from the data can improve performance when they are accessed at the same time as the data. You don't get much benefit if you put the NDF on the same physical drives, so whether you have DASD or a SAN, you want to be sure each drive letter maps to a separate set of physical disks.
This way you look to balance the load between tables and indexes. If you find that you have large queries that often scan two tables, you'd be better off separating those tables into separate files (on separate disks) than separating your indexes.
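To make that concrete, here is a rough sketch of what moving a table looks like. Since the clustered index is the table, rebuilding it on another filegroup moves the data; table, index, and filegroup names here are made up, and the index is assumed to already exist:

    -- Rebuild the existing clustered index on the target filegroup;
    -- DROP_EXISTING does it in one step instead of drop-then-create
    CREATE CLUSTERED INDEX CIX_Orders_OrderDate
        ON dbo.Orders (OrderDate)
        WITH DROP_EXISTING
        ON [TableFG2];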
Here are some guidelines: http://msdn.microsoft.com/en-us/library/ms187087.aspx
October 6, 2009 at 8:07 am
repent_kog_is_near (10/5/2009)
Steve: I have read some recommendations to move all clustered indexes into the .ndf; others move all the non-clustered ones. I guess what you are saying is that, regardless of clustered or non-clustered, I should move the heaviest ones out into a separate file, right?
Not the clustered indexes. The clustered indexes are the tables, so if you move all the clustered and all the nonclustered indexes into a secondary filegroup you'll have the primary filegroup without any tables in it and all the tables and all the indexes in the secondary filegroup. Not very useful.
Separate out frequently used nonclustered indexes into a second filegroup. Make sure that the files in that filegroup are on a different physical drive from the ones that make up the primary filegroup. If they're all on the same drive then there's no gain at all.
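In code, that's roughly the following (a sketch; the index, table, and filegroup names are examples):

    -- Recreate a heavily used nonclustered index on the secondary filegroup
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        WITH DROP_EXISTING
        ON [IndexFG];  -- filegroup whose files sit on a different physical drive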
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 6, 2009 at 8:26 am
Steve
Thanks for looking at the code.
I was able to relate to your recommendation on separating heavy indexes vs. scanned tables, so we need to do some good profiling before we implement this. Also, the recommendations at the MS link, especially the last four, were good.
The code uses "select indid, groupid, name, status, ORIGFILLFACTOR from sysindexes where id = object_id(@OBJNAME) and indid > 0 and indid < 255 and (status & 64) = 0" to get the indexes. It is just moving all the clustered and non-clustered ones at the same time, isn't it? Maybe I am wrong, because I do not understand the 'status & 64' part.
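For what it's worth, on SQL 2005 and later the same list can be pulled from sys.indexes instead, which avoids the status bitmask entirely (my own rewrite, not the script's code; the table name is a placeholder):

    DECLARE @OBJNAME sysname;
    SET @OBJNAME = 'dbo.MyTable';

    SELECT i.index_id, i.name, i.type_desc, i.fill_factor
    FROM sys.indexes AS i
    WHERE i.object_id = OBJECT_ID(@OBJNAME)
      AND i.index_id > 0          -- skip the heap entry (index_id = 0)
      AND i.is_hypothetical = 0;  -- skip hypothetical indexes left behind by tuning tools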
Dan
October 6, 2009 at 8:28 am
When you say you want to address I/O performance issues, I assume that you have already done the work of analyzing the I/O demands of your application. If not, it would be a good idea to do that. You can use perfmon to monitor things like disk queue length, % disk time, etc., and actually confirm that disk is indeed the bottleneck. It may also be worthwhile running perfmon against the drive that your current MDF sits on and comparing it in real time to the drive you propose to put the NDF on, in case anything else lives there, so you can see the heavy usage of the first versus the free capacity of the second. Unless it's a SAN where you have redundant access paths, you want to ensure that the other drive is actually a separate disk or set of disks, and not just another logical partition on the same array.
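If you are on SQL 2005 or later, you can also get per-file latency straight from the server, which complements perfmon nicely. A quick sketch:

    -- Average stall per read/write for every file in every database
    SELECT DB_NAME(vfs.database_id) AS db,
           mf.name AS file_name,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id = vfs.file_id;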
October 6, 2009 at 8:38 am
Gail,
Yes, I see what you are saying. Better to have the clustered indexes in one file and the non-clustered in another; but better yet, just move the heavily used non-clustered ones to another disk/file.
thx
Dan
October 6, 2009 at 8:38 am
Guru
All your points are well taken. Yes, the DB is hit by a heavy I/O bottleneck (from perfmon: Avg. Disk Queue Length is 27, % Disk Time is 2100, Current Disk Queue Length is 18, etc.).
Even though the drive proposed for the NDF file is almost empty now, I like the idea of profiling that drive as well, just in case the SAN is having a problem; it is confirmed, though, that it is a separate drive with separate spindles, etc. Thanks for the idea.
Dan
October 6, 2009 at 8:58 am
repent_kog_is_near (10/6/2009)
Guru: All your points are well taken. Yes, the DB is hit by a heavy I/O bottleneck (from perfmon: Avg. Disk Queue Length is 27, % Disk Time is 2100, Current Disk Queue Length is 18, etc.).
Queue length is hard to interpret on a SAN. What's avg sec/read and avg sec/write like?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 14, 2009 at 8:59 am
Gail
Thanks for your input:
avg sec/read is 29 ms (log is 8 ms) and avg sec/write is 8 ms (log is 6 ms)
Dan
October 14, 2009 at 9:45 am
The read's a little high on the data. Should be < 10 ms. I've seen worse, though (far, far worse).
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 14, 2009 at 9:47 am
You say avg, but over what period?
October 14, 2009 at 10:04 am
Silverfox (10/14/2009)
You say avg, but over what period ?
Over whatever interval the perfmon samples were done. Default is every sec.
The name of the counter is Physical Disk: Avg. Disk sec/Read.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability