All disk space is used; after deleting rows the table shows 6 GB of "unused" space, but I cannot release it or shrink the MDF.

  • 1. In what order are you doing the DELETEs?
    2. How many rows do you intend to delete?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • As a bit of a sidebar and personal observation, I guess disk space isn't cheap after all. 😉

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • "Your questions certainly do sound like a beginner issue because you are looking for some magic to fix your issue.  There is no magic. "

    If jcourt's answer does indeed work, then that is indeed a magic trick.

    The question is WHY these deletions don't free up space.
    Your "expert" answer is just a workaround, and I agree it is a good one, but that does not make you the expert and me a beginner.
    It is not preferable because of the slow network, and because we have no idea how much free storage space is available on the network...
    It is not our network; we only did the initial installation of SQL Server and are remotely troubleshooting an issue caused by a bug in a SQL client application. But if we really can't find out why this delete doesn't free up space, it will be the only alternative left.

    We are deleting in batches of about 5000 rows a minute, using a DELETE TOP (5000) ... WHERE RIGHT(varchar_column, 20) ... clause. About 75% of the rows should be deleted.
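
    Roughly, the loop looks like this (the table and column names here are placeholders; the real statement is different):

        DECLARE @Pattern varchar(20) = '<value to match>';   -- placeholder

        WHILE 1 = 1
        BEGIN
            -- Remove the next batch of matching rows.  Because RIGHT() has to be
            -- evaluated for every row, no index on that column can be used for a seek.
            DELETE TOP (5000)
            FROM dbo.Events
            WHERE RIGHT(SomeVarcharColumn, 20) = @Pattern;

            IF @@ROWCOUNT = 0 BREAK;

            WAITFOR DELAY '00:01:00';   -- roughly 5000 rows per minute
        END;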

  • Jeff Moden asked a question. Do you understand why he asked this question?
    Multiple people suggested that you re-index, which you can't do. Do you understand why this was suggested?

    Even though you are deleting 5000 rows at a time, the rows are likely not being deleted from contiguous data pages. As an example, there may be 100 rows in each page. The delete may be taking a row from page 1, page 1000, page 5000, and so forth.
    The data pages are still allocated, even though there is technically free space in each page.
    Re-indexing will reallocate the rows, which will reduce the number of pages, and you will finally see the free space.
    So, I just told you why what you are doing is probably not working.
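
    If you want to see that for yourself, something like this will show it (I'm guessing at the table name, since you haven't posted the DDL):

        -- How full is each leaf-level page of the clustered index?  A large
        -- page_count combined with a low avg_page_space_used_in_percent means the
        -- rows are gone but the pages, and therefore the space, are still allocated.
        SELECT index_id,
               page_count,
               avg_page_space_used_in_percent
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(),                       -- current database
                 OBJECT_ID(N'dbo.Events'),      -- guessed table name
                 NULL, NULL, 'SAMPLED')
        WHERE index_level = 0;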

    This seems like a system down, emergency situation. Especially from your attitude. 
    The suggestions so far are: re-index, which you cannot do; and move the data out and recreate the table, which you also cannot or will not do.
    Now, for the third idea, expanding upon Jeff's question.
    What is the clustered index on this table? Delete the rows in clustered index order. I can't tell what the clustered index is from your picture, so I can't give you any direction there. If you post the actual CREATE TABLE statement, and include the indexes, we may be able to give you some better help.
    Fourth idea.
    Drop all of the indexes, and then run your delete in the same manner. Shrink the database, and re-create the indexes.

    Your delete statement (again, we can't be sure because you won't post real code) looks like it's non-sargable.  Please post the actual DDL of the table, the indexes, and your delete statement.
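
    To sketch the third idea (all names here are guesses until you post the DDL), a batched delete that works through the clustered index in key order looks something like this:

        DECLARE @Pattern varchar(20) = '<value to match>';   -- placeholder

        WHILE 1 = 1
        BEGIN
            -- Delete the next 5000 qualifying rows in clustered-key order, so each
            -- batch empties a contiguous range of pages instead of touching pages
            -- all over the table.  The RIGHT() filter still forces a scan to find
            -- the candidates; that is the separate sargability problem.
            DELETE e
            FROM dbo.Events AS e
            WHERE e.EventID IN (SELECT TOP (5000) EventID
                                FROM dbo.Events
                                WHERE RIGHT(SomeVarcharColumn, 20) = @Pattern
                                ORDER BY EventID);           -- assumed clustered key

            IF @@ROWCOUNT = 0 BREAK;
        END;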

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Michael L John - Friday, February 10, 2017 4:51 AM

    Jeff Moden asked a question. Do you understand why he asked this question?
    Multiple people suggested that you re-index, which you can't do. Do you understand why this was suggested?

    Even though you are deleting 5000 rows at a time, the rows are likely not being deleted from contiguous data pages. As an example, there may be 100 rows in each page. The delete may be taking a row from page 1, page 1000, page 5000, and so forth.
    The data pages are still allocated, even though there is technically free space in each page.
    Re-indexing will reallocate the rows, which will reduce the number of pages, and you will finally see the free space.
    So, I just told you why what you are doing is probably not working.

    This seems like a system down, emergency situation. Especially from your attitude.
    The suggestions so far are: re-index, which you cannot do; and move the data out and recreate the table, which you also cannot or will not do.
    Now, for the third idea, expanding upon Jeff's question.
    What is the clustered index on this table? Delete the rows in clustered index order. I can't tell what the clustered index is from your picture, so I can't give you any direction there. If you post the actual CREATE TABLE statement, and include the indexes, we may be able to give you some better help.
    Fourth idea.
    Drop all of the indexes, and then run your delete in the same manner. Shrink the database, and re-create the indexes.

    Your delete statement (again, we can't be sure because you won't post real code) looks like it's non-sargable.  Please post the actual DDL of the table, the indexes, and your delete statement.

    Thanks.

    I think we have enough information now. The hard disk has been extended virtually, and we will try to rebuild the index using the additional space.

    I didn't mean to have an attitude, but it is just weird to get answers popping up in your mailbox about shrinking and re-indexing when I already mentioned those methods in my first post. Indeed, the explanation about the pages makes sense, and using the network to move the data out was also a good alternative.

    Thank you all and have a nice day.

    P.S. sp_spaceused 'tablename', 'true' did not fix it.

  • syen.j - Thursday, February 9, 2017 12:46 PM

    Perry Whittle - Thursday, February 9, 2017 10:03 AM

    rebuild the indexes on the events table and then check the free space in the mdf

    We cannot rebuild because we are out of space. Rebuilding takes space. Reorganise is possible but it did not fix anything.

    Please, before posting answers, read the original post; it mentions that we already tried the basic stuff like deleting rows, shrinking, reorganising, and rebuilding (impossible), with no result. The database is fully unused and static. We can see the 'table rows' decreasing and the 'unused space' increasing; the size stays the same.

    In this case a rebuild is required. If you don't have the space for basic housekeeping, then you're in trouble from the start.
    No need for the attitude :Whistling:
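
    Once the space is there, something along these lines (table name assumed) keeps the sort work out of the data file, although the rebuilt index itself still needs room in its own filegroup:

        -- Rebuild all indexes on the table, doing the intermediate sort in tempdb.
        ALTER INDEX ALL ON dbo.Events
        REBUILD WITH (SORT_IN_TEMPDB = ON);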

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • syen.j - Friday, February 10, 2017 5:20 AM

    Michael L John - Friday, February 10, 2017 4:51 AM

    Jeff Moden asked a question. Do you understand why he asked this question?
    Multiple people suggested that you re-index, which you can't do. Do you understand why this was suggested?

    Even though you are deleting 5000 rows at a time, the rows are likely not being deleted from contiguous data pages. As an example, there may be 100 rows in each page. The delete may be taking a row from page 1, page 1000, page 5000, and so forth.
    The data pages are still allocated, even though there is technically free space in each page.
    Re-indexing will reallocate the rows, which will reduce the number of pages, and you will finally see the free space.
    So, I just told you why what you are doing is probably not working.

    This seems like a system down, emergency situation. Especially from your attitude.
    The suggestions so far are: re-index, which you cannot do; and move the data out and recreate the table, which you also cannot or will not do.
    Now, for the third idea, expanding upon Jeff's question.
    What is the clustered index on this table? Delete the rows in clustered index order. I can't tell what the clustered index is from your picture, so I can't give you any direction there. If you post the actual CREATE TABLE statement, and include the indexes, we may be able to give you some better help.
    Fourth idea.
    Drop all of the indexes, and then run your delete in the same manner. Shrink the database, and re-create the indexes.

    Your delete statement (again, we can't be sure because you won't post real code) looks like it's non-sargable.  Please post the actual DDL of the table, the indexes, and your delete statement.

    Thanks.

    I think we have enough information now. The hard disk has been extended virtually, and we will try to rebuild the index using the additional space.

    I didn't mean to have an attitude, but it is just weird to get answers popping up in your mailbox about shrinking and re-indexing when I already mentioned those methods in my first post. Indeed, the explanation about the pages makes sense, and using the network to move the data out was also a good alternative.

    Thank you all and have a nice day.

    P.S. sp_spaceused 'tablename', 'true' did not fix it.

    Answers were "popping up" in your inbox because that is what works.  

    "We tried to shrink, it didn't work."
    Well, the obvious response, which is basic troubleshooting, is "Exactly how did you try to shrink?"  You never gave us an answer, just an attitude.
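
    "Trying to shrink" can mean very different things in T-SQL, which is exactly why we asked.  For example (the logical file name here is assumed):

        -- Only gives back free space at the *end* of the file; it moves nothing,
        -- so after scattered deletes it usually returns almost nothing.
        DBCC SHRINKFILE (N'MyDb_Data', TRUNCATEONLY);

        -- Moves pages toward the front of the file down to a target size in MB.
        -- This does reclaim space, but it heavily fragments the indexes.
        DBCC SHRINKFILE (N'MyDb_Data', 65000);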

    sp_spaceused doesn't fix anything. It only reports the space being used by the various parts of the table.  We asked for that so we could actually see what was going on in the table and, armed with that information, we could have helped you better.
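
    For the record, this is all that call does (table name assumed); the second argument just forces the usage counters to be recalculated:

        -- Reports reserved, data, index_size and unused for the table after
        -- refreshing the counters from the allocation pages.  Nothing is released.
        EXEC sys.sp_spaceused @objname = N'dbo.Events', @updateusage = N'TRUE';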

    But please continue doing what you are doing, which is stumbling along, finding one magic bullet after another on forums, and never really planning or learning about the platform you have to support.  Because the more people like you, the more money I make fixing systems that were doomed from the start.

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Yeah, we fixed it. We understood the underlying reason from the beginning, we found our own solution, no magic bullet here, and we're moving on now. Thanks again to all of you.

    "Because the more people like you, the more money I make fixing systems that were doomed from the start."

    I'm sure all the people who installed SQL Server without reserving 25 GB of space for re-indexing tasks are complete idiots. Let's see what happens if I push 150 GB of table records into every small-scale SQL Server you have ever installed.

  • syen.j - Friday, February 10, 2017 6:31 AM

    Yeah, we fixed it. We understood the underlying reason from the beginning, we found our own solution, no magic bullet here, and we're moving on now.

    Great.  Would you tell us what you did, please?  Was it the extension of the hard drive space that did it for you?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • p.s.  Heh... after you get done rebuilding a clustered index of that size, answer my two questions and I'll tell you how to recover all the disk space this caused you to allocate to your MDF file, how to do it in about a quarter of the time, and how to do it without having to worry about the log file at all.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • it's not rocket science :hehe::Whistling:

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • syen.j - Thursday, February 9, 2017 7:15 AM

    John Mitchell-245523 - Thursday, February 9, 2017 7:05 AM

    And the shrink command?

    OK, so your database is full up.  Since your table results look very similar to your database results, I'd say that's the only table in the database.  Is that right?  How many rows have you deleted from the table, and how many have been inserted during the same time?  Are there any triggers on the table?

    John

    Like I mentioned multiple times, we tried all the shrink commands, we tried all the GUI shrink options, and we are not inserting anything. There are no triggers on the table. Everything is static.
    This is the only big table; the other four tables are small.
    We have a lot of experience in shrinking and maintaining SQL Server, so this is not a beginner issue.

    Is the database using the full recovery model?  If so, what is the status of the transaction log, and is it on a different set of spindles, please?
    What was the original storage allocation for each of the data files used?
    Has file growth been "auto" or managed manually?
    Is this local hard-disk RAID storage, DAS, NAS, or SAN?
    If you are dead in the water, and you know you have a viable backup set, consider: drop the index, switch the recovery model to simple, perform a shrink file, check the file sizing at that point, then switch back into full recovery and take new full and log backups to start a new log chain, etc. Depending on the overall status at that point, create the new index once all of the unneeded rows have been purged.
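
    Roughly, that sequence would look like this (database name, file name, target size, and backup paths here are placeholders):

        -- Only do this with a confirmed backup set and the applications offline.
        ALTER DATABASE MyDb SET RECOVERY SIMPLE;

        DBCC SHRINKFILE (N'MyDb_Data', 65000);   -- placeholder target size in MB

        ALTER DATABASE MyDb SET RECOVERY FULL;

        -- Start a fresh log chain before letting anything back in.
        BACKUP DATABASE MyDb TO DISK = N'X:\Backups\MyDb_full.bak';
        BACKUP LOG MyDb TO DISK = N'X:\Backups\MyDb_log.trn';
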
    If your file sizes are still in line with the original size allocations, then you may need to realign those minimum-size watermark levels.
    On busy boxes, as you may already know, the OS can take its good old time to actually report the newly freed space, so you might open a new RDP session six hours later and see a miraculous discovery of new free disk space.
    Sorry, not sure if it was already stated, but what is the OS?
    If you have other volumes available on the same box and you have multiple data files for the filegroup(s) being used, try adding a new file on another drive volume, then use SHRINKFILE to transfer the data to it, and then do the drop-empty-file routine for the file on the original volume.
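
    That routine, with placeholder names, would be along these lines; note it only works for a secondary file, since the primary data file can never be emptied and removed:

        -- 1. Add a new data file on a volume that still has free space.
        ALTER DATABASE MyDb
        ADD FILE (NAME = N'MyDb_Data2',
                  FILENAME = N'E:\SQLData\MyDb_Data2.ndf',
                  SIZE = 50GB)
        TO FILEGROUP [PRIMARY];

        -- 2. Migrate every page out of the old secondary file into the other files.
        DBCC SHRINKFILE (N'MyDb_Data_Old', EMPTYFILE);

        -- 3. Once it is empty, drop it and give the space back to its volume.
        ALTER DATABASE MyDb REMOVE FILE MyDb_Data_Old;
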
    As a long shot, if all else fails to this point and you have storage available on a different volume: with the index dropped, write all of the remaining rows from the problem table out to a newly created heap (a "saviour" table) on the other volume.  Once you can confirm that you have a valid copy of the original table, drop the original table.
    Then consider possible new filegroup/file configurations to use for reconstructing the original table.
    With a new table build, there would be questions around when to apply the index (before reloading the heap, or after), and, importantly, whether this reloading of the newly created table should be done in the simple recovery model.
    If your application(s) are down, work through it in simple recovery, then bring the database back into full recovery with new backups before letting the applications back in.
    Hoping things improve with your situation soon.

  • Something baffles me. You're using 70GB of data between data and indexes.
    Why is it so important to recover 6GB when the logic indicates that you'll use it again? If you don't leave any unused space, you'll end up having to grow your files and that is an expensive operation that will cause performance problems.
    When you mention that you have a lot of experience shrinking the database, my intuition says that you do this too often without planning for growth.
    This seems to be one of those problems that requires a bigger budget to solve (getting additional storage).
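
    A quick way to review that (sizes come back in 8 KB pages, hence the arithmetic):

        -- Current size and autogrowth settings for every file in the current database.
        SELECT name,
               size * 8 / 1024 AS size_mb,
               CASE WHEN is_percent_growth = 1
                    THEN CONCAT(growth, ' %')
                    ELSE CONCAT(growth * 8 / 1024, ' MB')
               END AS growth_setting,
               max_size
        FROM sys.database_files;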

    Luis C.
    General Disclaimer:
    Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2

  • Luis Cazares - Friday, February 10, 2017 12:27 PM

    Something baffles me. You're using 70GB of data between data and indexes.
    Why is it so important to recover 6GB when the logic indicates that you'll use it again? If you don't leave any unused space, you'll end up having to grow your files and that is an expensive operation that will cause performance problems.
    When you mention that you have a lot of experience shrinking the database, my intuition says that you do this too often without planning for growth.
    This seems to be one of those problems that requires a bigger budget to solve (getting additional storage).

    Heh... not to mention what rebuilding the Clustered Index does to the MDF file and what reorganizing the Clustered Index does to the log file. 😉
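
    For anyone following along, that trade-off in a nutshell (the table name is just an example):

        -- Builds a complete new copy of the index, so it needs roughly the size of
        -- the index in free space inside the data file (which grows the MDF).
        ALTER INDEX ALL ON dbo.Events REBUILD;

        -- Shuffles pages within the space the index already has, but every page
        -- move is fully logged, so the transaction log takes the hit instead.
        ALTER INDEX ALL ON dbo.Events REORGANIZE;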

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
