May 22, 2018 at 3:40 pm
I have inherited an approximately 800GB SQL 2016 database (recently migrated) with 35 data files ranging in size from 3GB to 150GB, spread across 7 different filegroups. Because of this data file layout, there are hot spots on some of the files, and the hyper-converged storage infrastructure can't keep up with the I/O requests. The database is not using SQL compression, but TDE is enabled on it.
What I would like to do is refactor this database into 24 data files of the same size, all in a single filegroup, with SQL page-level compression enabled, to comply with the hyper-converged vendor's best practices. I know there are identity columns, foreign keys, and other interesting impacts to take into account when doing this.
My question is what is the best way to migrate the data from the existing database to a new database?
Import/Export?
Generate Scripts?
Select * from old DB to new one?
Something else I am not thinking of?
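For reference, the target layout described above (24 equally sized files in one filegroup) could be created along these lines. This is only a sketch; the database, filegroup, and file names, paths, and sizes are placeholders:

```sql
-- Sketch: one new filegroup with equally sized, equally grown files.
-- All names, paths, and sizes are placeholders; repeat the ADD FILE
-- entries until all 24 files exist.
ALTER DATABASE MyDB ADD FILEGROUP FG_Data;

ALTER DATABASE MyDB
ADD FILE
    (NAME = N'MyDB_Data01', FILENAME = N'E:\Data\MyDB_Data01.ndf',
     SIZE = 34GB, FILEGROWTH = 1GB),
    (NAME = N'MyDB_Data02', FILENAME = N'E:\Data\MyDB_Data02.ndf',
     SIZE = 34GB, FILEGROWTH = 1GB)
TO FILEGROUP FG_Data;
```

Keeping SIZE and FILEGROWTH identical across the files lets SQL Server's proportional-fill algorithm spread writes evenly, which is the point of the equal-size recommendation.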
Thanks in advance for any suggestions and comments.
-n
May 22, 2018 at 7:32 pm
Rebuild indexes on the new partition schema? https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-index-transact-sql?view=sql-server-2017
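The rebuild approach suggested here might look something like the following for a table with a clustered index. This is a sketch only; the table, index, column, and filegroup names are placeholders:

```sql
-- Sketch: rebuild the clustered index onto the new filegroup with
-- page compression. Moving the clustered index moves the table's
-- in-row data; names here are placeholders.
CREATE UNIQUE CLUSTERED INDEX PK_Orders
ON dbo.Orders (OrderID)
WITH (DROP_EXISTING = ON, DATA_COMPRESSION = PAGE, ONLINE = ON)
ON FG_Data;
```

Two caveats: heaps have no clustered index to rebuild and need a different approach, and off-row LOB data stays in its original filegroup on a DROP_EXISTING rebuild, so tables with large LOB columns may still need to be recreated and reloaded.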
May 22, 2018 at 9:42 pm
nsunderman - Tuesday, May 22, 2018 3:40 PM
I have inherited an approximately 800GB SQL 2016 database (recently migrated) with 35 data files ranging in size from 150GB to 3GB, all in 7 different file groups. Because of this layout of data files, there are hot spots on the data files and the hyper-converged storage infrastructure can't keep up with the I/O requests. This database is not using SQL compression but TDE is being used on this database.
What I would like to do is refactor this database to 24 data files of the same size, all in the same file group with SQL page level compression enabled, to comply with the hyper-converged vendor's best practices. I know there are identity columns, foreign keys to take into account and other interesting impacts of doing this.
My question is what is the best way to migrate the data from the existing database to a new database?
Import/Export?
Generate Scripts?
Select * from old DB to new one?
Something else I am not thinking of?
Thanks in advance for any suggestions and comments.
-n
What are the key columns involved on the hotspots?
--Jeff Moden
Change is inevitable... Change for the better is not.
May 23, 2018 at 9:43 am
Why only page compression, and where is that compression best practice documented?
May 23, 2018 at 10:12 am
What are the key columns involved on the hotspots?
--Jeff Moden
I am not sure which key columns are involved in the hotspots at this point, only which files are generating the bulk of the I/Os. I will search up some info on this and let you know what I find out.
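To narrow hot files down to specific objects, the cumulative per-file I/O stats are a reasonable first stop. A sketch against the current database (no placeholders beyond running it in the database of interest):

```sql
-- Sketch: per-file I/O since instance start, to confirm which files
-- are hot. Run in the context of the database being investigated.
SELECT mf.name AS file_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
JOIN sys.database_files AS mf
    ON vfs.file_id = mf.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```

Once the hot files are confirmed, sys.dm_db_index_operational_stats can help map the activity to the indexes (and hence key columns) living in those filegroups.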
May 23, 2018 at 10:14 am
Why only page compression, and where is that compression best practice documented?
In section 7.15 of the Nutanix Best Practices guide for Microsoft SQL Server 2014, it recommends:
* Carefully evaluate using SQL compression for all databases
* Look at using the page-level compression option
-n
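Before enabling page compression across the board, the expected savings can be estimated per object. A hedged sketch using the built-in estimator (the schema and table names are placeholders; @index_id = 1 targets the clustered index):

```sql
-- Sketch: estimate page-compression savings for one table's
-- clustered index. Object names are placeholders.
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'Orders',
     @index_id         = 1,
     @partition_number = NULL,
     @data_compression = N'PAGE';
```

The procedure samples the object into tempdb and reports current versus estimated compressed sizes, which makes it easy to skip tables where page compression buys little but still costs CPU.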