April 20, 2011 at 12:25 pm
Hello everyone,
Background information:
We are using SQL Server 2008 R2 with transaction log backups occurring every 15 minutes and differential backups every eight hours. I have "grown" the database log file to 104 GB (will probably have to increase it). We have asynchronous database mirroring to a remote location.
I am thinking of a least painful way of moving a 300 million record table into a separate FileGroup/data file. The command I was thinking of using is:
ALTER TABLE [large_table] DROP CONSTRAINT [PK_large_table_prevName] WITH (ONLINE = ON);
ALTER TABLE [dbo].[large_table] ADD CONSTRAINT [PK_large_table_newName] PRIMARY KEY CLUSTERED
(
[Date] ASC,
[ID] ASC
)
WITH
(
DATA_COMPRESSION = PAGE,
SORT_IN_TEMPDB = ON,
ONLINE = ON,
MAXDOP = 6 -- Total cores: 12
)
ON [Different_FileGroup];
The reason I'm not using DROP_EXISTING is because I wanted to change the index name.
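For what it's worth, DROP_EXISTING doesn't necessarily rule out a rename: the index can be rebuilt onto the new filegroup under its old name in a single operation (no drop-to-heap, no second rebuild), then renamed afterwards. A sketch, using the names from this post (verify behavior on a test copy first):

```sql
-- Rebuild the PK's clustered index directly onto the new filegroup.
-- DROP_EXISTING = ON does this in one operation instead of a drop
-- followed by a re-create.
CREATE UNIQUE CLUSTERED INDEX [PK_large_table_prevName]
    ON [dbo].[large_table] ([Date] ASC, [ID] ASC)
    WITH (DROP_EXISTING = ON, DATA_COMPRESSION = PAGE,
          SORT_IN_TEMPDB = ON, ONLINE = ON, MAXDOP = 6)
    ON [Different_FileGroup];

-- Then rename the constraint; the backing index should pick up
-- the new name as well, since they share it.
EXEC sp_rename N'dbo.PK_large_table_prevName',
               N'PK_large_table_newName', N'OBJECT';
```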
The areas I'm worried about are the transaction log and differential backup sizes, the database mirror queue and the time it takes to synchronize, and finally, the actual time it takes to move the table onto the new FileGroup.
Can someone please advise whether there is a "less" painful way to move the table onto the new FileGroup?
Thank you in advance!
April 20, 2011 at 12:29 pm
dajonx (4/20/2011)
Can someone please advise whether there is a "less" painful way to move the table onto the new FileGroup?
You could create a new table, insert the rows over in batches, drop the old, rename the new, recreate all constraints, keys, indexes.
There's no painless way. It's going to hurt the log and mirroring either way. The above will take longer, but will spread out the impact on the log.
Your choice.
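A minimal sketch of that batched-copy approach, assuming the key columns ([Date], [ID]) from the question; table and batch-size values are illustrative only:

```sql
-- 1. Create the replacement table with its clustered PK already on the
--    target filegroup, so rows land there as they are inserted.
CREATE TABLE dbo.large_table_new
(
    [Date] DATETIME NOT NULL,
    [ID]   BIGINT   NOT NULL,
    -- ... remaining columns from dbo.large_table ...
    CONSTRAINT PK_large_table_newName
        PRIMARY KEY CLUSTERED ([Date] ASC, [ID] ASC)
        WITH (DATA_COMPRESSION = PAGE)
        ON [Different_FileGroup]
);

-- 2. Copy rows in small batches; each batch is its own transaction, so
--    the 15-minute log backups can truncate the log as you go and the
--    mirror send queue never gets one enormous transaction at once.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.large_table_new ([Date], [ID] /* , ... */)
    SELECT TOP (50000) s.[Date], s.[ID] /* , ... */
    FROM dbo.large_table AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.large_table_new AS t
                      WHERE t.[Date] = s.[Date] AND t.[ID] = s.[ID]);
    SET @rows = @@ROWCOUNT;
END;

-- 3. Swap: drop the old table, rename the new one into place, then
--    recreate any nonclustered indexes, constraints, and permissions.
DROP TABLE dbo.large_table;
EXEC sp_rename N'dbo.large_table_new', N'large_table';
```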
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability