October 13, 2008 at 9:57 am
wjones21 (10/13/2008)
No, not really any I/O bottlenecks. So, it's perfectly OK to eventually have a 1 TB single data file?
Usually before that point people will start splitting up, for recovery purposes. So you can get filegroup backups, partial database availability and all those nice things.
That requires multiple filegroups though, not just multiple files.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 14, 2008 at 4:30 pm
GilaMonster (10/13/2008)
You can't do a shrink file with empty on the primary file.
Actually, you can do a shrinkfile with empty on the primary data file. You can shrink it down to about the size of the system tables. You just can't remove the primary data file.
You wouldn't use shrinkfile to move data.
If you needed to move a data file to a different LUN without wanting to move between filegroups:
Create a new data file on the different LUN, in the same filegroup. Then use shrinkfile with emptyfile to move the data to the new data file. This also allows the database to remain available throughout.
Creating new data files and using shrinkfile is feasible, but probably not the best way of doing things.
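The move described above could be sketched in T-SQL roughly as follows. This is only an illustration: the database name, logical file names, paths, and sizes are all hypothetical, and it would need a real SQL Server instance to run.

```sql
-- Hypothetical sketch: MyDB, file names, paths and sizes are illustrative.
-- 1) Add a new file on the target LUN, in the same filegroup:
ALTER DATABASE MyDB
ADD FILE (NAME = MyDB_Data2, FILENAME = 'E:\Data\MyDB_Data2.ndf', SIZE = 10GB);

-- 2) Move the data out of the old file (the database stays online):
USE MyDB;
DBCC SHRINKFILE (MyDB_Data1, EMPTYFILE);

-- 3) Remove the now-empty old file (not possible for the primary data file):
ALTER DATABASE MyDB REMOVE FILE MyDB_Data1;
```

Note that step 3 only works for secondary files; as mentioned above, the primary data file can be emptied down to roughly the size of the system tables, but never removed.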
June 2, 2009 at 8:06 am
I am working on the same issue right now. I want to test the impact on performance from striping the primary file group over more than one physical disk.
If I can't remove the primary file (mdf), I guess what I have to do is:
1) Create 1 temp ndf
2) Empty the mdf
3) Create the actual ndfs that I need
4) Empty and delete the temp ndf.
This should result in evenly distributed data. Is there no better way?
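The four steps above might look something like this in T-SQL. It is a sketch under assumptions: the database name, logical file names, paths, and sizes are invented for illustration.

```sql
-- Hypothetical sketch: MyDB, file names, paths and sizes are illustrative.
USE MyDB;

-- 1) Create a temporary ndf in the primary filegroup
ALTER DATABASE MyDB
ADD FILE (NAME = Temp_Data, FILENAME = 'D:\Data\Temp_Data.ndf', SIZE = 20GB);

-- 2) Empty the mdf (it can only shrink to roughly the size of the system tables)
DBCC SHRINKFILE (MyDB_Data, EMPTYFILE);

-- 3) Create the actual ndfs that are needed, on the target disks
ALTER DATABASE MyDB
ADD FILE (NAME = MyDB_Data2, FILENAME = 'E:\Data\MyDB_Data2.ndf', SIZE = 10GB),
         (NAME = MyDB_Data3, FILENAME = 'F:\Data\MyDB_Data3.ndf', SIZE = 10GB);

-- 4) Empty and remove the temp ndf; proportional fill should spread its data
--    across the remaining files in the filegroup
DBCC SHRINKFILE (Temp_Data, EMPTYFILE);
ALTER DATABASE MyDB REMOVE FILE Temp_Data;
```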
I am also testing to distribute the tables to separate filegroups.
//J
July 20, 2009 at 8:17 am
I was trying to do something very similar: split up the data file for a third-party app, without going to the table/index level. I added 3 new data files, each less than half as big as the single original .MDF; I then lowered the "Initial Size" of the .MDF to the same size as the .NDFs. By all appearances, SQL shrank the .MDF (now 20% free at the new lower size), and redistributed the data into the new .NDFs.
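Scripted out, that approach would look roughly like the sketch below. The names, paths, and sizes are hypothetical; when you lower "Initial Size" in Management Studio, it issues a shrink to the target size (given in MB) behind the scenes.

```sql
-- Hypothetical sketch: MyDB, file names, paths and sizes are illustrative.
-- Add three new data files to the same filegroup:
ALTER DATABASE MyDB
ADD FILE (NAME = MyDB_Data2, FILENAME = 'E:\Data\MyDB_Data2.ndf', SIZE = 40GB),
         (NAME = MyDB_Data3, FILENAME = 'F:\Data\MyDB_Data3.ndf', SIZE = 40GB),
         (NAME = MyDB_Data4, FILENAME = 'G:\Data\MyDB_Data4.ndf', SIZE = 40GB);

-- Shrink the mdf to the same size as the new ndfs (target in MB):
USE MyDB;
DBCC SHRINKFILE (MyDB_Data, 40960);  -- ~40 GB
```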
Hope this helps! (I understand the point made below about how if the data is all located on the same drive, it's probably the single spindle that's your bottleneck, not the single I/O thread. What the heck- it can't hurt, right?)