October 13, 2009 at 12:54 am
Dear Friends
I have a database which is 30GB. At the moment, the data and log files are on one drive. The customer is now providing us with two extra drives, D: and E:.
So I am planning to move the log file to the D: drive and the data file to the E: drive.
I just need more information on secondary data files. If I create a secondary data file, will I be able to point my less frequently used tables to it and leave the other tables in the .mdf? Can this be achieved? If yes, how can I achieve this?
Many thanks.
October 13, 2009 at 1:00 am
sarvesh singh-457805 (10/13/2009)
Dear Friends, I have a database which is 30GB. At the moment, the data and log files are on one drive. The customer is now providing us with two extra drives, D: and E:.
So I am planning to move the log file to the D: drive and the data file to the E: drive.
I just need more information on secondary data files. If I create a secondary data file, will I be able to point my less frequently used tables to it and leave the other tables in the .mdf? Can this be achieved? If yes, how can I achieve this?
Many thanks.
I would say create a new filegroup, put your secondary data file (.ndf) in it, and move the tables that way.
You can either re-create the tables, specifying the new filegroup, or re-create the clustered index on the table and specify the new filegroup; this will effectively move the table to the new filegroup.
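Something along these lines for the filegroup/file part, as a rough sketch only (MyDB, the logical file name, the path on the E: drive and the sizes are all placeholders, not your actual names):
-- Create the new filegroup (names, paths and sizes below are examples only).
ALTER DATABASE MyDB ADD FILEGROUP Filegroup_1
GO
-- Add a secondary data file (.ndf) to that filegroup on the new drive.
ALTER DATABASE MyDB
ADD FILE
(
    NAME = MyDB_Data2,                    -- logical file name
    FILENAME = 'E:\Data\MyDB_Data2.ndf',  -- physical path on the new drive
    SIZE = 5GB,
    FILEGROWTH = 512MB
)
TO FILEGROUP Filegroup_1
GO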
--------------------------------------------------------------------------------------
Recommended articles on how to help us help you and solve commonly asked questions:
Forum Etiquette: How to post data/code on a forum to get the best help by Jeff Moden
Managing Transaction Logs by Gail Shaw
How to post Performance problems by Gail Shaw
Help, my database is corrupt. Now what? by Gail Shaw
October 13, 2009 at 3:53 am
Hello Sarvesh,
If your database is not in production, or if you can manage a little downtime over a weekend, I suggest detach and attach would be the better option to move your data and log files to the different drives.
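Just a sketch of the steps (MyDB and the folder paths are placeholders; adjust to your real database and file names):
-- 1. Detach the database (make sure nothing is connected to it).
USE master
GO
EXEC sp_detach_db @dbname = N'MyDB'
GO
-- 2. Move MyDB.mdf to the E: drive and MyDB_log.ldf to the D: drive at the OS level.
-- 3. Re-attach the database, pointing at the new locations.
CREATE DATABASE MyDB
ON (FILENAME = N'E:\Data\MyDB.mdf'),
   (FILENAME = N'D:\Log\MyDB_log.ldf')
FOR ATTACH
GO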
As for moving objects to the secondary data file, check "ALTER TABLE" with the "MOVE TO" option. Another approach is to use SSIS (formerly DTS): create the new objects on the secondary filegroup and then transfer the data through SSIS.
HTH
---------------------------------------------------
"Thare are only 10 types of people in the world:
Those who understand binary, and those who don't."
October 13, 2009 at 4:11 am
free_mascot (10/13/2009)
Hello Sarvesh, if your database is not in production, or if you can manage a little downtime over a weekend, I suggest detach and attach would be the better option to move your data and log files to the different drives.
As for moving objects to the secondary data file, check "ALTER TABLE" with the "MOVE TO" option. Another approach is to use SSIS (formerly DTS): create the new objects on the secondary filegroup and then transfer the data through SSIS.
HTH
Good suggestion by free_mascot. The only thing, and I don't mind being corrected if I am wrong, is that with the "ALTER TABLE" ... "MOVE TO" option you have to specify a different filegroup; it will not work if your .mdf and .ndf files are both sitting in the same primary filegroup.
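If it helps, here is a quick check of which filegroup each table's data currently lives on (just a sketch using the catalog views; it only covers non-partitioned tables, since partitioned tables point at a partition scheme instead of a filegroup):
-- index_id 0 = heap, 1 = clustered index; either way, that row is where the data lives.
SELECT  t.name  AS table_name,
        fg.name AS filegroup_name
FROM    sys.tables t
JOIN    sys.indexes i
            ON i.object_id = t.object_id
           AND i.index_id IN (0, 1)
JOIN    sys.filegroups fg
            ON fg.data_space_id = i.data_space_id
ORDER BY fg.name, t.name
GO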
October 13, 2009 at 4:47 am
Thank you guys for your responses
I have only got one day of downtime, and some of the tables have more than a million rows. Transferring the data might take a lot of time, so if I follow the create-a-filegroup, create-the-objects, transfer-the-data-through-SSIS path, it may not finish within a day.
I think Silverfox's suggestion suits my scenario, which is:
"re-create the clustered index on the table and specify the new filegroup; this will effectively move the table to the new filegroup."
I am not sure how this works. Does it mean that the existing data will get transferred as well, or that only new data will go to the secondary data file/filegroup?
October 13, 2009 at 5:01 am
Yes, you are very much correct, Silverfox.
October 13, 2009 at 5:05 am
from BOL
Because the leaf level of a clustered index and the data pages are the same by definition, creating a clustered index and using the ON partition_scheme_name or ON filegroup_name clause effectively moves a table from the filegroup on which the table was created to the new partition scheme or filegroup. Before creating tables or indexes on specific filegroups, verify which filegroups are available and that they have enough empty space for the index.
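So for a table whose clustered index is the primary key, the rebuild would look something like this (dbo.Orders, OrderID and PK_Orders are made-up names; with DROP_EXISTING the key columns and uniqueness have to match the existing index exactly):
-- Rebuilds the existing clustered PK index on the new filegroup;
-- because the leaf level of a clustered index is the data, the whole table moves with it.
CREATE UNIQUE CLUSTERED INDEX PK_Orders
ON dbo.Orders (OrderID)
WITH (DROP_EXISTING = ON)
ON Filegroup_1
GO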
October 13, 2009 at 5:46 am
The database has one primary filegroup. I have created another filegroup called 'Filegroup_1' and put the .ndf on Filegroup_1.
Which do you think will be more efficient for moving the tables to the new filegroup?
ALTER TABLE Test DROP CONSTRAINT PK_Test1 WITH (MOVE TO Filegroup_1)
GO
or
Drop the constraint and then re-create it on the new filegroup:
ALTER TABLE Test DROP CONSTRAINT PK_Test1
GO
ALTER TABLE Test ADD CONSTRAINT PK_Test1 PRIMARY KEY CLUSTERED (Id) ON Filegroup_1 -- Id stands in for the actual key column(s) of the primary key
GO
October 15, 2009 at 3:44 am
Just to add to my previous point: could creating a partition work in this scenario? There are about 16 million rows in one table, and I am just worried that it won't finish on time with only one day of downtime.