May 20, 2010 at 9:27 am
I have a large table with 6 indexes, and there are 2 FileGroups for the database.
The database is in VMware on a SAN, so all the drives on the server (D, E, F) are really just logical drives on the SAN. I don't think there's really a benefit to separating data and indexes the way there is on physical disks.
If my data is in the PRIMARY filegroup on E, and the 6 indexes are in a separate INDEXES filegroup on F, and I then move 3 of the indexes back to PRIMARY on E, would that likely have a performance impact? Does SQL Server care if related indexes are in different filegroups?
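In case it helps anyone following along, here's a minimal sketch of the move itself, with placeholder names (dbo.BigTable, Col1, and IX_BigTable_Col1 are assumptions, not the real objects). Recreating the index with DROP_EXISTING = ON and an ON [PRIMARY] clause rebuilds it straight into the target filegroup in one step:

-- Hypothetical table/index names; substitute your own.
CREATE NONCLUSTERED INDEX IX_BigTable_Col1
ON dbo.BigTable (Col1)
WITH (DROP_EXISTING = ON)
ON [PRIMARY];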
May 20, 2010 at 9:37 am
It depends.
Hate to say it, but it does. Are the SAN LUNs physically separate? You'll have to check with the SAN admins. Also, it's possible you might see some performance difference from the separate paths and connections you have. Not sure it would be a big difference, and it depends on your load as well.
Does the separation matter for DR? Is it easy to get one drive with that space if you need it? Or will you need separate ones? That's the big reason I've seen, outside of performance, for separating things out.
May 20, 2010 at 9:49 am
They were separated out initially for performance, when the database was on a standalone server with separate internal physical disks. After migrating to VMware, the setup was left unchanged and moved onto the logical drives, since there didn't seem to be any point in changing anything.
The SAN LUNs are not separated physically ... wish they were, but the architecture of this model doesn't work that way, or so I was told.
Recently, the drive that holds the INDEXES filegroup was getting low on space. It's also used by another database, so I was worried I might fill it up.
So I:
Moved some indexes back to PRIMARY on a different drive.
Shrunk the INDEXES filegroup to recover space.
Rebuilt the affected indexes.
Should I have updated statistics?
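For reference, here's a minimal sketch of those shrink / rebuild / statistics steps, with placeholder names (the logical file name MyDB_Indexes, table dbo.BigTable, and index IX_BigTable_Col1 are all assumptions). One thing worth noting: ALTER INDEX ... REBUILD already refreshes the statistics on the rebuilt index with a full scan, so only the column (non-index) statistics would still need a separate update.

-- Reclaim space from the file behind the INDEXES filegroup
-- (logical file name is a placeholder; target size is in MB).
DBCC SHRINKFILE (N'MyDB_Indexes', 10240);

-- Rebuild an affected index; this also refreshes that index's statistics.
ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable REBUILD;

-- Column statistics are not touched by the rebuild; refresh them separately if needed.
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN, COLUMNS;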
May 20, 2010 at 10:03 am
May 20, 2010 at 10:32 am
I think my performance issue is from another cause.
I'm doing an update to a master table with 80 million rows, based on a join to a work table with 300,000 rows. When the work table had 100,000 rows, the execution plan showed a nice index seek on the master table. After I added 200,000 rows to the work table, I'm now getting a table scan on the master table, which bumps the execution time from 2 seconds to 4 minutes.
This job then impacts other jobs.
Seems like a pretty small change to trigger a switch from index seek to table scan!?
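That looks like the classic tipping point: once the optimizer estimates enough lookups against the 80-million-row table, it decides a full scan is cheaper than a seek plus key lookups. One workaround I've seen is batching the update so each pass stays small. A rough sketch, with placeholder table and column names (dbo.MasterTable, dbo.WorkTable, MasterID, SomeCol are all assumptions, and it assumes SomeCol isn't nullable):

DECLARE @batchsize INT = 50000,
        @rows INT = 1;

WHILE @rows > 0
BEGIN
    -- Update only rows that still differ, one batch at a time,
    -- so each pass stays small enough to keep the seek.
    UPDATE TOP (@batchsize) m
       SET m.SomeCol = w.SomeCol
      FROM dbo.MasterTable AS m
      JOIN dbo.WorkTable  AS w
        ON w.MasterID = m.MasterID
     WHERE m.SomeCol <> w.SomeCol;

    SET @rows = @@ROWCOUNT;
END;

Whether that's worth the extra complexity depends on how this job fits with everything else, but it keeps each statement small and the log usage predictable.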