April 20, 2012 at 12:01 am
Comments posted to this topic are about the item The Real World: Fragmented Disk and High PAGEIOLATCH Waits
Br. Kenneth Igiri
https://kennethigiri.com
All nations come to my light, all kings to the brightness of my rising
April 20, 2012 at 2:30 am
Good Job. Thanks for sharing your experience on this performance issue.
M&M
April 20, 2012 at 2:49 am
Interesting article, though I would have appreciated a bit more information to see the whole picture, most importantly the external fragmentation of the indexes, but also the internal fragmentation. Information about the growth settings for the database files, and whether the database has been shrunk (or even auto-shrunk), would be nice too.
Now, the rest of this post is meant to give those interested a bit of extra resources, and possibly explain why the fragmentation (both of indexes and file system) could cause issues.
I may be wrong, but I would expect your issues to be caused by high external fragmentation causing a lot of seeking back and forth in the database file, possibly combined with your disk fragmentation causing a lot of entries in the NTFS MFT (Master File Table) and, most importantly, a lot of seeks on the disk system. Other factors possibly affecting your performance would be incorrect partition alignment and the default 4K allocation unit (instead of 64K), both of which could cause a single logical read to become multiple physical reads.
Well, I hope this information is useful for someone.
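For those who want to check the growth settings mentioned above, a minimal T-SQL sketch against the current database (`sys.database_files` reports size and growth in 8 KB pages when growth is not percent-based):

```sql
-- Run in the database you are investigating
SELECT name,
       physical_name,
       size * 8 / 1024 AS current_size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;
```

Small percentage growth or tiny fixed increments (e.g. 1 MB) on a busy database are classic causes of the file-level fragmentation being discussed here.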
April 20, 2012 at 7:25 am
and the default 4K allocation unit (instead of 64k), both of which could cause a single logical read to become multiple physical reads.
Is it possible to check the allocation unit size using SQL Server?
M&M
April 20, 2012 at 8:14 am
Thanks so much, Kristian. This is very useful.
I know one can adjust the fill factor to deal with internal fragmentation, what can one do about external fragmentation?
In addition, we use a shared storage area (Compellant) which is supposed to be self-tuning. Do we need to re-align? How can this be done in an already existing setup?
Br. Kenneth Igiri
April 20, 2012 at 8:36 am
Did you grow your database file before the physical defragmentation to prevent future fragmentation on the volume for the database files? What was the downtime? Was another defragmentation tool utilized that could do this while the database was up, with throttling control?
Also curious, just for verification, for those who know the answer: if you have two files and each one has one split, isn't the volume then at 100% file fragmentation? A database file would then show as completely fragmented? Just curious if anyone knows this one.
Example:
Two files take up an entire volume. Each file is split once.
On disk the files are written like this:
Half of file A | Half of file B | Second half of file A | Second half of file B
100% Disk fragmentation (assuming the files take up the entire disk) and 100% file fragmentation?
April 20, 2012 at 10:12 am
Hello Matt,
I am not sure I know what you mean by growing the database, but I did not perform any activity to deliberately grow the database.
Amazingly, no downtime was recorded. The defrag took about three hours. I do not know whether it's important, but the server is a virtual machine.
In my case, the DB file had over 400 fragments as reported by Disk Defragmenter. I used Windows Defragmenter for this task.
Br. Kenneth Igiri
April 20, 2012 at 10:38 am
I was referring to setting the "initial size" on the database to a larger number than it currently is, from database properties in SSMS. Since the database had roughly 50% free space but 400 fragments, I was curious what the auto-growth history was like, and whether any padding was added manually to prevent the growth which led to the problems. Possibly someone grew the file and that started the decline in performance, or an auto-grow occurred and that started the performance problem.
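Pre-growing the data file in one step, as described above, can also be done from T-SQL rather than the SSMS properties dialog; a sketch, with hypothetical database and logical file names:

```sql
-- Grow the data file to its expected long-term size in a single allocation,
-- so the file system can (ideally) lay it out in few contiguous extents.
ALTER DATABASE YourDb                          -- hypothetical database name
MODIFY FILE (NAME = YourDb_Data, SIZE = 50GB); -- hypothetical logical name and size
```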
April 20, 2012 at 10:53 am
Did you consider a third-party disk defragmenter like PerfectDisk? It does not have the free space requirements of the Windows defragmenter.
I have a worse horror story: one day our performance went through the floor, and it was all due to super-long disk I/O waits. It turned out the server administration people had installed a new service monitoring package on all the servers that wrote to an MS Access file 60 times per second: inserts, updates, and deletes. It was only a few megabytes but had scattered itself into over 100,000 fragments. So the poor disk was having to dig through 100K fragments 60 times per second nonstop AND do the regular SQL Server disk I/O as well.
April 20, 2012 at 11:24 am
kennethigiri (4/20/2012)
I know one can adjust the fill factor to deal with internal fragmentation, what can one do about external fragmentation?
Defragment or even rebuild the index to reduce external fragmentation.
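A sketch for measuring, and then reducing, external (logical) index fragmentation with `sys.dm_db_index_physical_stats`; the table name in the ALTER INDEX line is hypothetical:

```sql
-- Logical fragmentation per index in the current database (LIMITED = fastest scan mode)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.index_id > 0                         -- skip heaps
  AND ips.avg_fragmentation_in_percent > 10;

-- Common rule of thumb: REORGANIZE between roughly 10% and 30%, REBUILD above that
ALTER INDEX ALL ON dbo.YourTable REORGANIZE;   -- hypothetical table name
```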
kennethigiri (4/20/2012)
In addition, we use a shared storage area (Compellant) which is supposed to be self tuning. Do we need to re-align? How can this be done in an already existing setup?
The SAN may be self-tuning, but it will not affect offset and allocation unit issues. Now, there are more factors to take into consideration as well; the most important is the block (aka stripe) size of your SAN. For large blocks, for instance 4MB, the performance issues related to partition offset are barely noticeable, but for a small block size, for instance 128kB, it is way worse.
But to the point: what can you do about it? To adjust the allocation unit size you'll have to reformat the partition; to adjust partition alignment you'll have to delete and recreate the partition. Thus, my general recommendation in a SAN environment would be to present a new LUN to the server, partition and format it correctly, then move the data files. Yes, it will include downtime, but at least you have to move the data only once.
By the way: partitions created by Windows Server 2008 and later have correct partition alignment (1MB), whereas Windows Server 2003 and earlier creates them with 31.5kB alignment, which is not optimal.
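To actually check the allocation unit size and partition offset discussed here (which also covers the earlier question about checking it from SQL Server), one sketch is to shell out via `xp_cmdshell`; this assumes sysadmin rights, that `xp_cmdshell` is enabled, and a hypothetical drive letter:

```sql
-- "Bytes Per Cluster" in the output is the allocation unit size (65536 = 64 KB)
EXEC xp_cmdshell 'fsutil fsinfo ntfsinfo D:';

-- StartingOffset should be divisible by 1048576 (1 MB) on a well-aligned partition
EXEC xp_cmdshell 'wmic partition get Name, StartingOffset';
```

The same two commands can of course be run directly in a command prompt on the server if enabling `xp_cmdshell` is not an option.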
April 20, 2012 at 11:29 am
We definitely did not do that. The performance problems grew over time.
Br. Kenneth Igiri
April 20, 2012 at 11:31 am
magarity kerns (4/20/2012)
Did you consider a third-party disk defragmenter like PerfectDisk? It does not have the free space requirements of the Windows defragmenter.
Will check that out, as I may have to do another defrag soon.
Br. Kenneth Igiri
April 20, 2012 at 11:44 am
kennethigiri (4/20/2012)
We definitely did not do that. The performance problems grew over time.
What settings have you got for the growth of your database files? It sounds to me like they grow by too little each time. Gain control of the growth of your databases, and you will have far fewer problems with fragmentation.
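Gaining control of growth, as suggested, usually means a fixed and reasonably large increment instead of a small or percentage-based one; a sketch with hypothetical database and logical file names:

```sql
-- Replace the default small/percent growth with a fixed, generous increment
ALTER DATABASE YourDb                                 -- hypothetical database name
MODIFY FILE (NAME = YourDb_Data, FILEGROWTH = 512MB); -- hypothetical logical file name
```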
April 20, 2012 at 11:46 am
Is any of the defragmentation you are doing going to a volume that contains the TempDB database, and if so, what is the initial size of the TempDB database set to in its properties?
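The configured (startup) sizes of the TempDB files can be checked with a quick sketch; `sys.master_files` shows the size TempDB is recreated with at each restart, while `tempdb.sys.database_files` shows the current, possibly auto-grown, size:

```sql
-- Configured startup size of each TempDB file, in MB (size is in 8 KB pages)
SELECT name, size * 8 / 1024 AS configured_size_mb, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
```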
Viewing 15 posts - 1 through 15 (of 53 total)