Can huge free space in a data file make queries slower?

  • Jeff Moden - Saturday, November 3, 2018 11:41 AM

    Cool2018 - Saturday, November 3, 2018 10:22 AM

    Jeff Moden - Friday, November 2, 2018 9:32 PM

    I'm not sure why you have so much free space for the NDF files.  Something is wrong there or you've measured something wrong or it's used space and not free space.

    You're also going to have to rebuild your log file.  Setting it to a % for growth is a bad thing to do.  If the default size of 1MB was left and you grow by 10%, it will take 73 fragments to grow to just 1GB.  The first 68 growths will each be less than 64MB, so that's 68*4 VLFs, and the last 5 growths will be >64MB but less than 1GB, so that's 5*8 VLFs, for a total of 312 VLFs just for the first GB of log file space.  That's way too many for this database.  When you get a chance, change the growth size of the log file to at least 1GB, then shrink your log file to as close to 0 as possible, then grow it to 1GB and let nature take its course from there.
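    The shrink-and-regrow above can be sketched in T-SQL. This is a sketch only: the database name YourDB and the logical log file name YourDB_log are hypothetical placeholders, and sys.dm_db_log_info requires SQL Server 2016 SP2 or later (on older versions, DBCC LOGINFO returns one row per VLF instead).

    ```sql
    -- Count the current VLFs (SQL Server 2016 SP2+; older: DBCC LOGINFO).
    SELECT COUNT(*) AS vlf_count
    FROM sys.dm_db_log_info(DB_ID(N'YourDB'));

    -- Shrink the log as close to 0 as possible, then regrow it in one 1GB
    -- step with a fixed 1GB growth increment (no % growth).
    USE YourDB;
    DBCC SHRINKFILE (YourDB_log, 1);
    ALTER DATABASE YourDB
        MODIFY FILE (NAME = YourDB_log, SIZE = 1GB, FILEGROWTH = 1GB);
    ```

    Shrinking first matters because DBCC SHRINKFILE can only remove whole VLFs from the tail; regrowing in large fixed steps is what keeps the VLF count low.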

    As for the size of your log file being at 33GB, that's also a bit nuts, just like all the free space in your NDF files is a bit nuts.  It probably occurs when you do index rebuilds.  In the FULL Recovery Model, Rebuilds are fully logged.  If you're not doing replication or log shipping, you can switch to the BULK LOGGED recovery model and your index Rebuilds will not only be minimally logged, they'll also run faster.  Don't forget to switch back to the FULL Recovery Model if that's where you started out at.  Reorgs will always be fully logged regardless of the Recovery Model being used, and they use a whole lot more resources than advertised.  I steer clear of Reorgs.

    Dear Jeff,

    Highly appreciated, and thank you so much for your valuable advice. It is really helpful for us.
    For the free space %, I used the query below.

    ;WITH f AS
    (
        -- size is reported in 8KB pages; /128.0 converts pages to MB
        SELECT name, size = size/128.0 FROM sys.database_files
    ),
    s AS
    (
        SELECT name, size,
               free = size - CONVERT(INT, FILEPROPERTY(name, 'SpaceUsed'))/128.0
        FROM f
    )
    SELECT name, size, free, percent_free = free * 100.0 / size
    FROM s;


    I will take your advice for the log file too. I really appreciate your advice and time. I have one more point of confusion: to resolve our MDF fragmentation, will index rebuilds and reorgs alone be sufficient, or would it also help to do a physical disk defragmentation?

    We highly appreciate all your advice. It is very valuable to us.

    Thank you so much and Best Regards,
    Cool

    On the physical disk defragmentation... if you have a SAN, most people will insist that the SAN takes care of disk fragmentation auto-magically.  I don't know if that's actually true or not because I've not been able to play with a SAN to find out.  I do tend to be skeptical but they might be right.

    As for defragging your indexes... if you don't have any dependencies on the log file for such things as log shipping, replication, etc, then, again, my recommendation for index maintenance for this database is...
    1. If your Recovery Model is FULL or Bulk_logged, take a log file backup.
    2. Change the Recovery Model to Bulk_Logged if it's not already there to allow for minimal logging during rebuilds.
    3. Rebuild the indexes that need it.  Since this appears to be a "WORM" table (Write Once, Read Many... in other words, you truncate it, load it, and use it without modifying the data), you should probably rebuild an index when its avg_page_space_used_in_percent is < 92 or its avg_fragmentation_in_percent is > 10 (even at just 10% fragmentation, a Reorg can take quite the toll on the log file for large tables).
    4. Change the Recovery Model back to FULL if that's where you started.
    5. Take another log file backup to minimize the amount of log file span that had minimal logging in it (important for Point-in-Time restores).
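    The five steps above can be sketched in T-SQL. Everything named here is a hypothetical placeholder (database YourDB, table dbo.YourTable, backup paths), and the fragmentation check is a generic sys.dm_db_index_physical_stats query, not a full maintenance routine; note that avg_page_space_used_in_percent is only populated in SAMPLED or DETAILED mode.

    ```sql
    -- Step 1: log backup before switching (hypothetical path).
    BACKUP LOG YourDB TO DISK = N'D:\Backups\YourDB_log_before.trn';

    -- Step 2: allow minimal logging for the rebuilds.
    ALTER DATABASE YourDB SET RECOVERY BULK_LOGGED;

    -- Step 3: find indexes that meet the rebuild thresholds...
    SELECT OBJECT_NAME(ips.object_id) AS table_name, i.name AS index_name,
           ips.avg_fragmentation_in_percent, ips.avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(N'YourDB'), NULL, NULL, NULL,
                                        'SAMPLED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_page_space_used_in_percent < 92
       OR ips.avg_fragmentation_in_percent > 10;

    -- ...then rebuild them (REBUILD, not REORGANIZE, to get minimal logging).
    ALTER INDEX ALL ON dbo.YourTable REBUILD;

    -- Step 4: back to the original recovery model.
    ALTER DATABASE YourDB SET RECOVERY FULL;

    -- Step 5: log backup to close the minimally logged interval.
    BACKUP LOG YourDB TO DISK = N'D:\Backups\YourDB_log_after.trn';
    ```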

    Whether the table is partitioned or not and based on the freespace in your MDF and NDF files, your used data size should drop to something less than 12GB.  If you do the minimal logging thing when you rebuild your indexes, you probably won't need more than about 2-4GB except for when you do your data loads.  If you learn how to do minimal logging for those as well as for index maintenance, you might be able to get away with even less.
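    The minimally logged load mentioned above can be sketched like this, again with hypothetical table names (dbo.YourTable, dbo.StagingTable): under the BULK_LOGGED (or SIMPLE) recovery model, an INSERT...SELECT with a TABLOCK hint into an empty (e.g. just-truncated) target qualifies for minimal logging.

    ```sql
    ALTER DATABASE YourDB SET RECOVERY BULK_LOGGED;

    -- WORM pattern: truncate, then reload with TABLOCK so the insert can be
    -- minimally logged (the target must be empty, or a heap, for this to apply).
    TRUNCATE TABLE dbo.YourTable;
    INSERT INTO dbo.YourTable WITH (TABLOCK)
    SELECT col1, col2
    FROM dbo.StagingTable;

    ALTER DATABASE YourDB SET RECOVERY FULL;
    ```

    As with the rebuilds, take a log backup before and after the BULK_LOGGED window to keep the minimally logged span out of your point-in-time restore chain.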

    And, if the actual usage is going to drop to as little as your chart suggests, I'd definitely NOT partition this table.

    Dear Jeff,

    Thank you so much for your time and advice; we highly appreciate it. Your advice is really precious to us.
    We will review our log file size and partitioning.

    Thank you so much & Best Regards,
