August 14, 2011 at 1:22 pm
Hello - I am migrating several SQL 2000 databases to a SQL 2008 Standard x64 platform. I am following the 'best practice' recommendation from the MCTS training kit to put the system tables for each db on the PRIMARY filegroup and the data on a secondary DATA filegroup.
At this point my method is to create a blank db with the mdf, ndf and ldf files and set the DATA filegroup as the default. I then migrate the SQL 2000 db to a staging area, set the compatibility level to SQL 2008, and run an import-export package to bring all the tables from the staging area into the new db, where they all land in the DATA filegroup and the ndf file.
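In case it helps, here is roughly what my setup script looks like (the database name, paths and sizes below are placeholders, not my actual values):

-- Blank db with system tables on PRIMARY (mdf) and user data on DATA (ndf)
CREATE DATABASE MyAppDB
ON PRIMARY
    (NAME = MyAppDB_sys,  FILENAME = 'D:\Data\MyAppDB_sys.mdf',  SIZE = 100MB),
FILEGROUP [DATA]
    (NAME = MyAppDB_data, FILENAME = 'D:\Data\MyAppDB_data.ndf', SIZE = 10GB)
LOG ON
    (NAME = MyAppDB_log,  FILENAME = 'L:\Logs\MyAppDB_log.ldf',  SIZE = 2GB);

-- Make DATA the default filegroup so imported tables land in the ndf, not PRIMARY
ALTER DATABASE MyAppDB MODIFY FILEGROUP [DATA] DEFAULT;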
Does this seem like the right way to go about this? It seems to work OK, although I get the odd warning 0x80049304. I have checked that shared memory is enabled in the protocols and I have admin privileges on the box, so I don't know if that means anything.
Also, by the way, I am doing this on the MCTS book's recommendation that it provides I/O isolation for the system tables in each db. If that's not really a good reason then I might just skip this, but if it makes sense I will continue.
Any feedback is appreciated - thanks!
August 14, 2011 at 2:49 pm
IO Isolation is a very poor reason for that. System tables don't get high IO load.
The more sensible reason for going that route is so that you can do file/filegroup backups and piecemeal restores to get a DB up and running fast after a disaster. Generally, if going that route, one would have a DB structure of the following form:
Primary filegroup - system tables only. This has to be restored first any time the DB is restored
Data filegroup 1 - Critical tables for your app. This would be restored second in order to get the app up and running as fast as possible
Data filegroup 2 - less critical tables that could be restored after the critical ones are up and running
Data filegroup 3 - historical data that could be restored sometime later.
For this though, you must understand how the app is used and what the most critical pieces are.
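As a rough sketch, the restore sequence for that layout would look something like this (the database, filegroup and backup names are illustrative only, and the log chain is simplified to a single backup per stage):

-- Step 1: partial restore of PRIMARY; the DB comes online with just the system tables
RESTORE DATABASE MyAppDB FILEGROUP = 'PRIMARY'
    FROM DISK = 'B:\Backups\MyAppDB_full.bak'
    WITH PARTIAL, NORECOVERY;
RESTORE LOG MyAppDB FROM DISK = 'B:\Backups\MyAppDB_log.trn' WITH RECOVERY;

-- Step 2: restore the critical filegroup next, then its log chain
RESTORE DATABASE MyAppDB FILEGROUP = 'Critical'
    FROM DISK = 'B:\Backups\MyAppDB_full.bak'
    WITH NORECOVERY;
RESTORE LOG MyAppDB FROM DISK = 'B:\Backups\MyAppDB_log.trn' WITH RECOVERY;

-- Steps 3 and 4: the less critical and historical filegroups follow the same pattern

Bear in mind that restoring the later filegroups while the DB stays online requires Enterprise edition; on Standard the piecemeal restore is offline.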
You can also split a database into multiple filegroups for IO performance, but for that to really be useful it requires splitting sets of tables from each other (not system from everything else), separate physical IO channels, and a DB that is IO-bottlenecked.
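For illustration, that would mean something like the following, with the new filegroup's file on a physically separate drive (the names and paths are placeholders):

ALTER DATABASE MyAppDB ADD FILEGROUP HotTables;
ALTER DATABASE MyAppDB ADD FILE
    (NAME = MyAppDB_hot, FILENAME = 'E:\Data\MyAppDB_hot.ndf', SIZE = 5GB)
    TO FILEGROUP HotTables;
-- The heavily-hit tables would then be created on, or moved to, HotTables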
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 14, 2011 at 2:50 pm
And if you are going to go that route, just restore the 2000 DB to 2008, then create the secondary files/filegroups and rebuild the indexes you want to move onto the new filegroup (moving the clustered index moves the table), then do a once-off shrink of the primary file once all the non-system tables have been moved elsewhere.
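For example (the table, index, column and filegroup names here are placeholders, and this assumes you know the existing clustered index definition):

-- Rebuilding the clustered index on the new filegroup moves the whole table with it
CREATE UNIQUE CLUSTERED INDEX CIX_Orders
    ON dbo.Orders (OrderID)
    WITH (DROP_EXISTING = ON)
    ON [DATA];

-- Once all the non-system tables have been moved, a once-off shrink of the primary file
DBCC SHRINKFILE (MyAppDB_sys, 200);  -- target size in MB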
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 15, 2011 at 8:05 am
Thanks for the input and advice. In my case, the application is a third-party tool, and I pretty much need all of it to be available at the same time, so the argument for using filegroups for piecemeal restore doesn't really apply here the way it might for other apps.
Also, I am keeping all the data files on a single RAID 5 array and the logs on a separate RAID 10, so I won't see the benefit of putting different filegroups on different disks.
With that in mind, given the extra steps involved and the lack of much IO benefit, my thinking is to just leave out the secondary filegroups, keep all the data on PRIMARY on one array, and put the logs on the other.
If you see something that argues for a different approach, let me know; otherwise that is how I will go.