December 2, 2008 at 9:00 am
Hello,
I have a database whose data files are split across several drives: D:\, E:\, F:\, G:\ and H:\. All of the drives have plenty of capacity except the F:\ drive, which has only about 10 GB left and is likely to run out very soon. We have room to add another drive, but is there a way to stop the files on the F:\ drive from growing and divert new growth to the new drive?
If not, how could we move that data onto the new drive instead? A plain backup and restore would simply replicate the layout we have at the moment.
December 2, 2008 at 9:18 am
It sort of depends on your uptime requirements, I guess. You can use backup and restore with the WITH MOVE clause to relocate your files. So you could take a full backup, drop the database, then restore using WITH MOVE to place the files where you want them.
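For example, a minimal sketch of the WITH MOVE approach (the database name, logical file name and paths here are made-up placeholders, not from this thread):

```sql
-- Sketch only: MyDB, MyDB_F1 and all paths are hypothetical placeholders.
BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_full.bak' WITH INIT;

-- Check the logical file names recorded in the backup
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDB_full.bak';

-- Restore, moving the file that lived on F:\ to the new drive (say I:\)
RESTORE DATABASE MyDB
FROM DISK = N'D:\Backups\MyDB_full.bak'
WITH MOVE N'MyDB_F1' TO N'I:\Data\MyDB_F1.ndf',
     REPLACE;
```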
You could also create new filegroups/files and rebuild indexes or move tables without dropping the database. If you post your file layout and sizes, it may be easier to see what you should do.
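Creating a new filegroup and file on the new disk would look something like this (a sketch; the filegroup, file names and I:\ drive letter are assumptions):

```sql
-- Sketch, assuming the new drive is mounted as I:\; all names are hypothetical.
ALTER DATABASE MyDB ADD FILEGROUP NewFG;

ALTER DATABASE MyDB
ADD FILE (NAME = N'MyDB_New1',
          FILENAME = N'I:\Data\MyDB_New1.ndf',
          SIZE = 10GB, FILEGROWTH = 1GB)
TO FILEGROUP NewFG;
```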
December 2, 2008 at 9:37 am
These are the details of the data files:
Drive    File size    Space left
D:\      46 GB        130 GB
E:\      64 GB        40 GB
F:\      61 GB        11 GB
G:\      4 GB         40 GB
A high level of uptime is required with this application, so I would prefer not to do a backup and restore if there is another way.
December 2, 2008 at 9:52 am
Are the data files all in the same filegroup or are they in different filegroups?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 2, 2008 at 9:58 am
They are all in different filegroups
December 2, 2008 at 10:04 am
With strict uptime requirements, this may get a little touchy. Here's a blog post that you may find helpful, though. If it's one particular table or index that's bloating, you may be able to move just that with less pain.
http://decipherinfosys.wordpress.com/2007/08/14/moving-tables-to-a-different-filegroup-in-sql-2005/
December 2, 2008 at 10:09 am
Assuming I could convince the business that some downtime was acceptable, is there a better way to do this?
December 2, 2008 at 10:22 am
Again, it depends on the schema definition. Here's a simple example that would reduce downtime (with a lot of simplifying assumptions):
Say, for instance, that you have one particular table that is huge and filling that drive, and that it's referenced by 5 stored procedures. Assuming there are minimal foreign key or index complications, you could...
1 - create new filegroup/file structure utilizing your new disks
2 - create new empty table on that new filegroup
3 - bcp/ssis your data from table1 to table2
4 - modify the 5 stored procedures to point to new table
You would, of course, have to verify that permissions were scripted out and carried over, and somehow ensure no data updates occurred between when the data copy started and the procedures were rerouted.
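The four steps above might look roughly like this in T-SQL (a sketch only; table, column, filegroup and path names are all hypothetical, and bcp/SSIS would replace the plain INSERT...SELECT for a very large table):

```sql
-- 1: new filegroup/file on the new disk (names and paths are placeholders)
ALTER DATABASE MyDB ADD FILEGROUP NewFG;
ALTER DATABASE MyDB
ADD FILE (NAME = N'MyDB_New1', FILENAME = N'I:\Data\MyDB_New1.ndf')
TO FILEGROUP NewFG;

-- 2: empty copy of the table on the new filegroup
CREATE TABLE dbo.BigTable_New (
    Id   int          NOT NULL PRIMARY KEY,
    Data varchar(100) NULL
) ON NewFG;

-- 3: copy the data (bcp or SSIS would do this outside T-SQL)
INSERT INTO dbo.BigTable_New (Id, Data)
SELECT Id, Data FROM dbo.BigTable;

-- 4: alter each of the 5 procedures to reference dbo.BigTable_New,
--    or swap the names so the procedures need no changes:
EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
EXEC sp_rename 'dbo.BigTable_New', 'BigTable';
```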
Another option is to look into filegroup restores - that may be simpler.
Here's another link that could help.
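A filegroup restore used to relocate a file might be sketched like this (names, paths and the availability of log backups are all assumptions here; this requires the full recovery model):

```sql
-- Sketch of a filegroup-level restore that moves a file; all names are placeholders.
BACKUP DATABASE MyDB FILEGROUP = N'BigFG'
TO DISK = N'D:\Backups\MyDB_BigFG.bak';

RESTORE DATABASE MyDB FILEGROUP = N'BigFG'
FROM DISK = N'D:\Backups\MyDB_BigFG.bak'
WITH MOVE N'MyDB_Big1' TO N'I:\Data\MyDB_Big1.ndf',
     NORECOVERY;

-- roll forward with subsequent log backups, recovering on the last one
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_log1.trn' WITH RECOVERY;
```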
December 2, 2008 at 10:40 am
scb (12/2/2008)
Say for instance that you have one particular table that is huge and filling that drive. Say it's referenced by 5 stored procedures. Assuming there are minimal foreign key or index issues, you could...
1 - create new filegroup/file structure utilizing your new disks
2 - create new empty table on that new filegroup
3 - bcp/ssis your data from table1 to table2
4 - modify the 5 stored procedures to point to new table
Or you could replace steps 2-4 by just rebuilding the clustered index (assuming the table has a cluster) onto the new filegroup. The cluster is the table, so moving the clustered index moves the entire table.
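A sketch of that rebuild (the index, table and filegroup names are hypothetical):

```sql
-- Rebuilding the clustered index onto the new filegroup moves the
-- table's in-row data with it. Note: LOB columns stored off-row stay
-- on their original filegroup.
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
ON dbo.BigTable (Id)
WITH (DROP_EXISTING = ON)
ON NewFG;
```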
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 2, 2008 at 10:43 am
Good point Gail.
That's covered at this link if you need it...
http://decipherinfosys.wordpress.com/2007/08/14/moving-tables-to-a-different-filegroup-in-sql-2005/
December 2, 2008 at 10:45 am
A tad cunning, but it does work and allows for minimal downtime.
Defining database as DB_A
Enjoy the respect of your colleagues.
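Roughly, a log-shipping-style move of DB_A could be sketched as follows (the second database name DB_B, all logical file names and all paths are placeholders I've assumed, not from the post):

```sql
-- 1: restore a full backup of DB_A as DB_B, with its files on the new drive
RESTORE DATABASE DB_B
FROM DISK = N'D:\Backups\DB_A_full.bak'
WITH MOVE N'DB_A_Data' TO N'I:\Data\DB_B_Data.mdf',
     MOVE N'DB_A_Log'  TO N'I:\Log\DB_B_Log.ldf',
     NORECOVERY;

-- 2: keep applying log backups until the cutover window
RESTORE LOG DB_B FROM DISK = N'D:\Backups\DB_A_log1.trn' WITH NORECOVERY;

-- 3: at cutover, take a tail-log backup of DB_A and apply it with recovery
BACKUP LOG DB_A TO DISK = N'D:\Backups\DB_A_tail.trn' WITH NORECOVERY;
RESTORE LOG DB_B FROM DISK = N'D:\Backups\DB_A_tail.trn' WITH RECOVERY;

-- 4: drop the old copy and rename, so applications reconnect to the moved one
DROP DATABASE DB_A;
ALTER DATABASE DB_B MODIFY NAME = DB_A;
```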
December 2, 2008 at 11:03 am
Nice! Forgot about the ole log shipping. You can do a similar function with mirroring, but log shipping is probably a bit easier.
Actually, you wouldn't even need log shipping, right? Just a straight backup/restore if you wanted, with a well-timed transaction log backup. You could even set DB_A to read-only during the move, and select operations could continue.
December 2, 2008 at 11:45 am
scb (12/2/2008)
Nice! Forgot about the ole log shipping. You can do a similar function with mirroring, but log shipping is probably a bit easier.
Actually you wouldn't even need to do log shipping right? Just straight up backup/restore if you wanted, with a well timed transaction log backup. And you could even set DB_A to read only if you wanted, and select operations could continue during the move.
You sure could, if they'll allow read-only access for that period (in my environment they wouldn't, and it takes a long time to back up and restore a 1 TB database :w00t: )
December 3, 2008 at 3:48 am
Thank you all very much for your brilliant replies. One final question: I have had a look around but cannot find how to determine which objects (tables etc.) are in which filegroups?
December 3, 2008 at 4:45 am
Here's a query that should work.
It will work provided you have no table partitioning. If you do, it may give strange results.
select distinct object_name(p.object_id) as TableName, ds.name as FilegroupName
from sys.partitions p
inner join sys.allocation_units au on au.container_id = p.partition_id
    and au.type in (1, 3) -- in-row and row-overflow units, whose container_id is the partition_id
inner join sys.data_spaces ds on au.data_space_id = ds.data_space_id
where p.index_id in (0, 1)
This gives the location of the heap/clustered index. Other indexes may be on other partitions.
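To see where the other indexes live as well, a simpler variant (a sketch; the alias names are my own) can go straight through sys.indexes, since each index row carries its own data_space_id:

```sql
-- Lists every index (and heap) of every user table with its filegroup.
select object_name(i.object_id) as TableName,
       i.name                   as IndexName,   -- NULL for a heap
       ds.name                  as FilegroupName
from sys.indexes i
inner join sys.data_spaces ds on i.data_space_id = ds.data_space_id
where objectproperty(i.object_id, 'IsUserTable') = 1;
```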
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability