August 7, 2006 at 4:43 pm
Here's the scoop...
We have 5 identical clusters running Win2K SP4 and SQL2K SP3a; the cluster in question is on a Dell PowerEdge SAN. All of our clusters have around 20-30 databases which are identical images containing data for different customers, ranging in size from 1 GB up to 11 GB.
Only one cluster is running incredibly slowly during our nightly processes, even though it does not house the largest databases and performs the same nightly processes as all the other clusters. Optimizations and backups run terribly slowly, affecting our other imports and jobs.
Our network admin found that this server is extremely fragmented; however, a test defrag run on our log drive showed that it could not be completed during our maintenance window.
We've thought of adding another drive to each cluster, which would be used to migrate files off a fragmented drive while we defrag it. This process would be scheduled for each cluster either annually or bi-annually, depending on the performance benefits gained.
Of course, we would have to format the "spare" drive each time prior to moving files, and the preparation would take quite a bit of time. Has anyone out there running a similar environment found a good solution for defragging SAN servers?
Any suggestions would be greatly appreciated!
August 9, 2006 at 6:50 am
You may want to explore Diskeeper or a similar product. It will allow you to schedule defrags or run them when the CPU is idle.
Either of those will let you get that file system defragged within your maintenance window. You won't be able to do it all at once, but it is much less painful and intrusive than moving files or filegroups around and formatting SAN partitions (even only twice a year).
They also offer 30-day fully working trials, so no AP expense request or IT budget accommodation is needed!
August 10, 2006 at 2:41 am
Megan
It might be better to treat the cause rather than the symptoms... You can avoid fragmentation in the first place by setting your log files as large as they are ever going to need to grow and then leaving them. In other words, don't shrink or autoshrink any database or log files. This also has the advantage of not consuming server resources when the files need to grow again.
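For example (the database and logical file names below are just placeholders, and the 4 GB figure is purely illustrative, so substitute whatever is realistic for your workload):
-- Grow the log once, up front, to its expected working size
ALTER DATABASE CustomerDB
MODIFY FILE (NAME = CustomerDB_Log, SIZE = 4096MB)
-- Make sure auto-shrink is off so the file is not repeatedly shrunk and regrown
ALTER DATABASE CustomerDB SET AUTO_SHRINK OFF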
John
August 10, 2006 at 4:51 am
John has a very good point. You should look to avoid file system fragmentation by setting appropriate database file sizes and growth increments. Also, very important, avoid regularly shrinking your database files as this results in excessive file system fragmentation and poor performance.
Even in SAN kit that does not store your data contiguously, it is important to avoid file system fragmentation because there is a definite Windows overhead in dealing with badly fragmented files.
It is important to size data files, logs and tempdb to be realistic for your needs. If you know a database will grow from nothing to 20 GB in a year, give it an initial size of 20 GB, and growth of perhaps 1 GB. You need to design your database growth strategy to minimise fragmentation.
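As a rough sketch for that 20 GB example (the database name, logical file name and sizes are assumptions, so adjust them to your own estimates):
-- Pre-size the data file at its expected one-year size and use a fixed
-- growth increment instead of the default percentage growth
ALTER DATABASE CustomerDB
MODIFY FILE (NAME = CustomerDB_Data, SIZE = 20480MB, FILEGROWTH = 1024MB)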
Once you as a DBA have taken action to prevent future fragmentation, you can look at fixing what you have. The built-in Windows defrag tool can be used while SQL Server is running (I do this on our Dev boxes from time to time). For a large disk that is badly fragmented, it may take a number of defrag runs to get everything optimal. The post about Diskeeper is good advice if you want regular defragmentation.
Also, do not forget the C: drive. A standard Windows install on an empty C: drive leaves it badly fragmented. A lot of infrastructure people don't look at this, but improving C: drive performance will help everything that runs on the box.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
August 10, 2006 at 9:10 am
Since I've been here, I've made sure that all DBs are sized appropriately on creation. However, this particular server was brought online and filled to capacity with DBs long before I arrived. We're going to look into Diskeeper... I'll be sure to post my findings.
August 17, 2006 at 7:48 am
If possible, you can back up the databases on the drives that are fragmented, delete the databases from those drives, defrag (it'll take seconds if the drive is empty) and then restore from the backups. This should take less time than trying to defrag while the system is using the disks, and the restore should force Windows to write the database files out in contiguous chunks.
We've done this before on databases in the 50-250 GB+ range and it seems to work well; however, we don't have clusters, so I'm not sure how that would work for you.
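In rough T-SQL terms the cycle looks something like this (the database name, logical file names and paths are placeholders; the defrag step itself happens outside SQL Server with whatever tool you prefer):
-- 1. Back up the database to a drive that is NOT being defragged
BACKUP DATABASE CustomerDB
TO DISK = 'F:\Backups\CustomerDB.bak' WITH INIT
-- 2. Drop (or detach) the database so its files are released from the drive
DROP DATABASE CustomerDB
-- 3. Defrag the now nearly-empty drive, then restore; the restore writes the
--    files back out in large contiguous chunks
RESTORE DATABASE CustomerDB
FROM DISK = 'F:\Backups\CustomerDB.bak'
WITH MOVE 'CustomerDB_Data' TO 'E:\MSSQL\Data\CustomerDB.mdf',
     MOVE 'CustomerDB_Log' TO 'L:\MSSQL\Logs\CustomerDB.ldf'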
37SOLUTIONS
We'll find the solution that's right for you!
Website Hosting | Website Design | MSSQL/MySQL Hosting | Database Development | Research/Consulting