June 3, 2021 at 8:02 am
Hi to all,
Dealing with an old reporting tool, I am being prompted with the advice to split large databases into 64 GB chunks.
I found multiple articles about splitting up large data files, but doing so does not always increase performance.
Found in the SQL Server documentation (maximum capacity specifications):
Files per database: 32,767
File size (data): 16 terabytes
File size (log): 2 terabytes
And found on
http://www.sqlskills.com/BLOGS/PAUL/post/Benchmarking-do-multiple-data-files-make-a-difference.aspx
an (old) benchmark testing whether multiple data files make a performance difference.
I just wonder whether there is a general recommendation on the size of MDF and NDF files (in relation to spinning disk/RAID and SSD)?
Curious about your replies...
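For context, a rough sketch of the kind of 64 GB file layout the tool suggests (database name, file name, and path are just placeholders):

ALTER DATABASE SalesDB
ADD FILE
(
    NAME = SalesDB_Data2,                       -- hypothetical logical name
    FILENAME = N'D:\SQLData\SalesDB_Data2.ndf', -- placeholder path
    SIZE = 8GB,
    FILEGROWTH = 8GB,
    MAXSIZE = 64GB                              -- cap at the suggested 64 GB chunk
);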
Guus Kramer
The Netherlands
June 3, 2021 at 10:03 am
I don't know of any official recommendation.
A practical reason could be that it is faster to relocate a 64 GB file than a 2 TB one.
There is a recommendation to have multiple tempdb files to reduce contention when creating/dropping temp objects: https://docs.microsoft.com/en-us/sql/relational-databases/databases/tempdb-database?view=sql-server-ver15
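A minimal sketch of adding an extra tempdb data file per that recommendation (drive letter and sizes are placeholders; all tempdb data files should be kept the same size):

ALTER DATABASE tempdb
ADD FILE
(
    NAME = tempdev2,                        -- hypothetical logical name
    FILENAME = N'T:\TempDB\tempdev2.ndf',   -- placeholder path
    SIZE = 8GB,
    FILEGROWTH = 1GB
);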
June 4, 2021 at 9:39 am
Hi,
I think it depends on your workload. If you can split your database into meaningful NDF files, you can gain some performance.
But in my opinion it would not be a best practice to split every database into 64 GB NDF files.
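For example, a sketch (all names invented) that puts one large table on its own filegroup instead of blindly cutting the primary file into 64 GB pieces:

ALTER DATABASE SalesDB ADD FILEGROUP FG_Archive;

ALTER DATABASE SalesDB
ADD FILE
(
    NAME = SalesDB_Archive1,                       -- hypothetical logical name
    FILENAME = N'E:\SQLData\SalesDB_Archive1.ndf', -- placeholder path
    SIZE = 32GB
)
TO FILEGROUP FG_Archive;

-- Move the table by rebuilding its (assumed existing) clustered index onto the new filegroup
CREATE UNIQUE CLUSTERED INDEX PK_SalesArchive
ON dbo.SalesArchive (SaleID)
WITH (DROP_EXISTING = ON)
ON FG_Archive;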
Kind regards,
Andreas
June 6, 2021 at 12:41 pm
My take is that I'd worry more about splitting tables in a logical fashion so that data that no longer changes requires no index or statistics maintenance.
You might also want to partition tables (and their nonclustered indexes) so that if one piece goes corrupt, you can do a piecemeal restore of a much smaller piece rather than the whole database, and you may even be able to do it online.
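A rough sketch of such a piecemeal restore of a single damaged filegroup; the filegroup and backup file names are made up, the database is assumed to be in the FULL recovery model, and doing it online requires Enterprise Edition:

-- Restore only the damaged filegroup; the rest of the database stays available
RESTORE DATABASE SalesDB
FILEGROUP = 'FG_Archive'
FROM DISK = N'B:\Backups\SalesDB_FG_Archive.bak'
WITH NORECOVERY;

-- Roll the filegroup forward with the log backups taken since, then recover it
RESTORE LOG SalesDB
FROM DISK = N'B:\Backups\SalesDB_Log.trn'
WITH RECOVERY;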
--Jeff Moden
Change is inevitable... Change for the better is not.