February 13, 2008 at 10:37 am
We have a repository database using one file and it's about 400GB. We have added 800GB of free SAN space on another LUN. Can I use the full 800GB to create another database file, or should I keep file sizes to 400GB? Has anyone had issues with database files over 400GB in SQL 2000 SP4?
The CheckDB and other routine maintenance tasks take a long time on a 400GB database.
thanks,
Dana
February 13, 2008 at 10:57 am
We haven't had issues with databases of 100 GB, although we are considering a data-management strategy to migrate read-only historical data. One of the side effects will be keeping the live database smaller.
February 13, 2008 at 12:20 pm
400 GB definitely is a big size. We have 3 x 150 GB files with no issues, and we created a fourth last week. Having multiple files will help you with maintenance. Try to create multiple data files, keeping each as small as your disks can afford.
SQL DBA.
February 13, 2008 at 12:28 pm
So you are suggesting that we create 4x200GB files in the SAN LUN that the systems group created. How about 2x400GB?
The LUN is 800GB.
February 13, 2008 at 12:37 pm
Dana Bibeault (2/13/2008)
So you are suggesting that we create 4x200GB files in the SAN LUN that the systems group created. How about 2x400GB? The LUN is 800GB.
It depends. I would never have allowed the primary file to grow that big in the first place, but since yours already has, 2 x 400 would also work. Could you create the new files during a maintenance window and move data into them? Just a thought. With 400 GB files it's hard to keep track of how the data is filling each file and how much space is left.
SQL DBA.
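If it helps, creating a second filegroup spread across several smaller files is a one-time ALTER DATABASE operation. A sketch for SQL 2000, where the database name, logical file names, sizes, and F:\ paths are all placeholders, not details from this thread:

```sql
-- Add a secondary filegroup and spread it across smaller files on the
-- new LUN. All names and paths below are hypothetical examples.
ALTER DATABASE Repository
ADD FILEGROUP Secondary;

ALTER DATABASE Repository
ADD FILE
    (NAME = Repo_Data2, FILENAME = 'F:\Data\Repo_Data2.ndf',
     SIZE = 200GB, FILEGROWTH = 1GB),
    (NAME = Repo_Data3, FILENAME = 'F:\Data\Repo_Data3.ndf',
     SIZE = 200GB, FILEGROWTH = 1GB)
TO FILEGROUP Secondary;
```

Files in the same filegroup fill proportionally, so smaller files also make it easier to watch free space per file (e.g. via sysfiles or sp_spaceused).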
February 13, 2008 at 12:51 pm
OK, the current LUN is LUN D and has a primary database file of 400GB.
We created a new LUN, LUN F, with 800GB free space. (RAID 5 - not the best for queries and reporting)
The database is a Repository to hold data from 10 years back and forward. We intend to extract subsets onto another server for reporting.
Is it best practice to keep multiple files at, let's say, 200GB each? Will multiple smaller files speed up queries?
What do we do with the 400GB file after setting the default to the new file? Reindex all the tables and make it read-only?
I am looking for suggestions for maximum performance.
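For reference, the "set the default" and read-only steps would look roughly like this in SQL 2000 (filegroup names here are placeholders; note that SQL 2000 spells the option READONLY, while later versions use READ_ONLY):

```sql
-- Make the new filegroup the default so new tables and indexes are
-- created there unless a filegroup is specified explicitly.
ALTER DATABASE Repository MODIFY FILEGROUP Secondary DEFAULT;

-- Once the historical data is final, its filegroup can be marked
-- read-only. Filegroup name is hypothetical.
ALTER DATABASE Repository MODIFY FILEGROUP Historical READONLY;
```

One caveat: the PRIMARY filegroup cannot be marked read-only, so the data in the existing 400GB primary file would first have to be moved to a filegroup of its own.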
February 13, 2008 at 2:05 pm
Since the SAN storage is built on RAID 5, you may not gain much performance whether you use a single data file or multiple files. However, you may benefit at backup/restore time if you have multiple data files.
February 13, 2008 at 7:56 pm
With a database that big, you may also want to consider offloading CHECKDB from your online production database by restoring your backups on a separate machine and running CHECKDB against that copy.
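A sketch of that approach, assuming a recent full backup and a second server with enough disk (logical file names and paths below are placeholders):

```sql
-- On the secondary server: restore the latest full backup under a
-- different name and location, then run CHECKDB against the copy.
RESTORE DATABASE Repository_Check
FROM DISK = 'G:\Backups\Repository_Full.bak'
WITH MOVE 'Repo_Data' TO 'G:\Data\Repository_Check.mdf',
     MOVE 'Repo_Log'  TO 'G:\Data\Repository_Check.ldf',
     REPLACE;

DBCC CHECKDB ('Repository_Check') WITH NO_INFOMSGS;
```

As a bonus, a clean CHECKDB on the restored copy also proves that the backup itself is restorable.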
Also, pull your indexes out of the primary filegroup and move them to a new filegroup. If you have historical data, you can move it to a different filegroup as well.
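In SQL 2000, the usual way to relocate an index is to rebuild it on the target filegroup with DROP_EXISTING (the table, column, index, and filegroup names here are hypothetical):

```sql
-- Rebuild an existing index on the new filegroup. Rebuilding a
-- clustered index this way moves the table's data pages as well.
CREATE INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH DROP_EXISTING
ON Secondary;
```

Keep in mind that moving a clustered index moves the whole table, so on tables this size it needs a maintenance window.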
_____________
Donn Policarpio
February 14, 2008 at 5:06 am
I do not know how to pull out indexes and put them on a new filegroup.
Re: Backups. Once a file is no longer being written to, it can be reindexed one last time and backed up once. My backup process will need to change to back up filegroups rather than the entire database. That will complicate recovery, yes?
February 14, 2008 at 8:19 am
Right, filegroup backups are not as straightforward. 🙂
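For what it's worth, the backup side is simple enough (names and paths below are placeholders); the complication is on restore, where a recovered filegroup must be rolled forward with subsequent log backups to bring it in sync with the rest of the database:

```sql
-- Back up only the active read-write filegroup; a read-only
-- historical filegroup needs just one final backup after it
-- stops changing. Filegroup backups require log backups, so the
-- database cannot be in the simple recovery model.
BACKUP DATABASE Repository
FILEGROUP = 'Secondary'
TO DISK = 'G:\Backups\Repository_Secondary.bak';

BACKUP LOG Repository
TO DISK = 'G:\Backups\Repository_Log.trn';
```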
_____________
Donn Policarpio