Dropping a column doesn't create space at all?

  • There is a database with a table of more than 200 GB. I was told one column on that table takes more than 100 GB. I need to free up space for a test environment. I dropped and recreated the column, but no space was freed. Then I shrank the mdf and ldf files; still the same. Even shrinking the database didn't help. I'm not sure what I'm missing.

    i) Is there a way to find out the size of a column inside a table?

    ii) How can I free up space on that table?

    Thanks

  • Try

    DBCC CLEANTABLE

    Hope this helps you.

  • Try rebuilding the clustered index.

    By dropping the column you've created some free space on each page. Shrink only deallocates completely free pages.
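
    A minimal sketch of what that rebuild could look like (the index and table names here are placeholders, not taken from the thread):

    -- Rebuilding the clustered index rewrites the pages and compacts the space the dropped column left behind.
    -- Substitute your own table and index names.
    ALTER INDEX [CIX_Mytable] ON dbo.Mytable REBUILD;

    -- Or rebuild every index on the table in one go:
    ALTER INDEX ALL ON dbo.Mytable REBUILD;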

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (12/2/2010)


    Try rebuilding the clustered index.

    By dropping the column you've created some free space on each page. Shrink only deallocates completely free pages.

    Forgot to mention, I tried that one too. :(

  • How do you know that the individual column used 100GB?

    It could be that deleting the column freed a certain amount of space on each data page, but that the table definition is such that no other data could occupy the space that was freed. Please will you post a CREATE TABLE statement for your table? How many rows does it contain?
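
    If it helps, a quick way to check how the space actually breaks down, before and after the drop, is plain sp_spaceused (the table name below is a placeholder):

    -- Reserved vs. data vs. index vs. unused space for the table
    EXEC sp_spaceused 'dbo.Mytable';

    -- And for the database as a whole
    EXEC sp_spaceused;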

    John

  • John Mitchell-245523 (12/2/2010)


    How do you know that the individual column used 100GB?

    It could be that deleting the column freed a certain amount of space on each data page, but that the table definition is such that no other data could occupy the space that was freed. Please will you post a CREATE TABLE statement for your table? How many rows does it contain?

    John

    That was my other question. How can I find out how much space is used by a specific column in a table?

  • Suresh B. (12/1/2010)


    Try

    DBCC CLEANTABLE

    Hope this helps you.

    Thanks Suresh. Theoretically it seems like it should work. I ran DBCC CLEANTABLE (Test_db, "dbo.Mytable"); it has been running for 7 hours and is still going. Mytable is 200 GB. I do see a change in I/O from sp_who2, but it seems very slow. Does this work? Thanks

  • sqldba_icon (12/2/2010)


    Suresh B. (12/1/2010)


    Try

    DBCC CLEANTABLE

    Hope this helps you.

    Thanks Suresh. Theoretically it seems like it should work. I ran DBCC CLEANTABLE (Test_db, "dbo.Mytable"); it has been running for 7 hours and is still going. Mytable is 200 GB. I do see a change in I/O from sp_who2, but it seems very slow. Does this work? Thanks

    As a sidebar and especially if you have the Enterprise Edition, you should probably consider partitioning that table.
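
    For what it's worth, a very rough sketch of what partitioning could look like, with made-up names and boundary values purely for illustration (the real partitioning column and ranges depend on your data):

    -- Partition function: split rows into yearly ranges (illustrative boundaries only)
    CREATE PARTITION FUNCTION pf_MytableByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2009-01-01', '2010-01-01', '2011-01-01');

    -- Partition scheme: map every partition to PRIMARY to keep the example simple
    CREATE PARTITION SCHEME ps_MytableByYear
    AS PARTITION pf_MytableByYear ALL TO ([PRIMARY]);

    -- The table (or its clustered index) would then be created or rebuilt on the scheme, e.g.
    -- CREATE CLUSTERED INDEX CIX_Mytable ON dbo.Mytable (SomeDateColumn) ON ps_MytableByYear (SomeDateColumn);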

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • sqldba_icon (12/2/2010)


    John Mitchell-245523 (12/2/2010)


    How do you know that the individual column used 100GB?

    It could be that deleting the column freed a certain amount of space on each data page, but that the table definition is such that no other data could occupy the space that was freed. Please will you post a CREATE TABLE statement for your table? How many rows does it contain?

    That was my other question. How can I find out how much space is used by a specific column in a table?

    Two methods for finding table size. Column size can only be estimated by the second method (the math version).

    First: Go to SSMS. In the Summary page, go to Databases, then open a database. Click Report and choose "Disk Usage". It takes several minutes for this to generate. Once it's done, there will be an option under the graphs called "Disk Space Used by Tables." It will tell you the size of each table, the number of records, etc. I don't know whether this report includes Large Objects or Large Values in its estimates, BTW.

    Second: By doing the math. Searching "data types [SQL Server]" (without the quotes) in Books Online takes you to a page listing every data type. By following the page's links to those data types, you can find the number of bytes each one takes. Count the number of columns of each data type; for the exact numerics, you can just multiply the number of columns by the number of bytes. For strings and approximate numerics, you have to add up the declared sizes to figure out the space each column takes. Char values (non-variable strings) are a solid 1-to-1 match. Variable-length values (anything with "var" in the name) add 2 bytes of overhead per column. That overhead is what SQL Server needs for tracking the size of the field.

    Add those totals together, multiply by the number of rows in the table, and you have your approximate table size. Understand that if you have a lot of variable-length columns, your number may come out higher than the actual table size, but in SQL it's better to overestimate than underestimate.

    And remember, Large Object and Large Value data types (text, ntext, varbinary(max), and the other (max) types) are stored separately from the main data pages. Books Online has the details. This means the table's data pages hold, for lack of a better term, pointers to this data, and it isn't really counted in the row size for the table.
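
    If it's useful, the per-column byte counts can also be pulled straight from the metadata instead of looking each type up by hand. A sketch only: for variable-length columns max_length is the declared maximum (or -1 for the (max)/LOB types), not the actual space used, and the DATALENGTH query can be slow on a table this size. Column and table names are placeholders.

    -- Declared size in bytes of every column in the table
    SELECT c.name       AS column_name,
           t.name       AS data_type,
           c.max_length AS declared_bytes   -- -1 means (max)/LOB, stored off the data page
    FROM sys.columns AS c
    JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
    WHERE c.object_id = OBJECT_ID('dbo.Mytable')
    ORDER BY c.column_id;

    -- Actual bytes used by one specific column
    SELECT SUM(CAST(DATALENGTH(SomeColumn) AS bigint)) AS actual_bytes
    FROM dbo.Mytable;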

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Brandie Tarvin (12/3/2010)


    sqldba_icon (12/2/2010)


    John Mitchell-245523 (12/2/2010)


    How do you know that the individual column used 100GB?

    It could be that deleting the column freed a certain amount of space on each data page, but that the table definition is such that no other data could occupy the space that was freed. Please will you post a CREATE TABLE statement for your table? How many rows does it contain?

    That was my other question. How can I find out how much space is used by a specific column in a table?

    Two methods for finding table size. Column size can only be estimated by the second method (the math version).

    First: Go to SSMS. In the Summary page, go to Databases, then open a database. Click Report and choose "Disk Usage". It takes several minutes for this to generate. Once it's done, there will be an option under the graphs called "Disk Space Used by Tables." It will tell you the size of each table, the number of records, etc. I don't know whether this report includes Large Objects or Large Values in its estimates, BTW.

    Second: By doing the math. Searching "data types [SQL Server]" (without the quotes) in Books Online takes you to a page listing every data type. By following the page's links to those data types, you can find the number of bytes each one takes. Count the number of columns of each data type; for the exact numerics, you can just multiply the number of columns by the number of bytes. For strings and approximate numerics, you have to add up the declared sizes to figure out the space each column takes. Char values (non-variable strings) are a solid 1-to-1 match. Variable-length values (anything with "var" in the name) add 2 bytes of overhead per column. That overhead is what SQL Server needs for tracking the size of the field.

    Add those totals together, multiply by the number of rows in the table, and you have your approximate table size. Understand that if you have a lot of variable-length columns, your number may come out higher than the actual table size, but in SQL it's better to overestimate than underestimate.

    And remember, Large Object and Large Value data types (text, ntext, varbinary(max), and the other (max) types) are stored separately from the main data pages. Books Online has the details. This means the table's data pages hold, for lack of a better term, pointers to this data, and it isn't really counted in the row size for the table.

    Brandie, the column I have deleted is an ntext data type. Would DBCC CLEANTABLE still help? If not, how do I clear space? Please advise.

  • sqldba_icon (12/3/2010)


    Brandie, the column I have deleted is an ntext data type.

    Would DBCC CLEANTABLE still help? If not, how do I clear space? Please advise.

    Do you have a copy of SQL Server Books Online on your PC? If not, I advise downloading a copy from Microsoft. It's a wonderful resource.

    Books Online (DBCC CLEANTABLE)


    DBCC CLEANTABLE reclaims space after a variable-length column is dropped. A variable-length column can be one of the following data types: varchar, nvarchar, varchar(max), nvarchar(max), varbinary, varbinary(max), text, ntext, image, sql_variant, and xml. The command does not reclaim space after a fixed-length column is dropped.

    If the dropped columns were stored in-row, DBCC CLEANTABLE reclaims space from the IN_ROW_DATA allocation unit of the table. If the columns were stored off-row, space is reclaimed from either the ROW_OVERFLOW_DATA or the LOB_DATA allocation unit depending on the data type of the dropped column. If reclaiming space from a ROW_OVERFLOW_DATA or LOB_DATA page results in an empty page, DBCC CLEANTABLE removes the page. For more information about allocation units and data types, see Table and Index Architecture.

    How many LOB columns does this table have? If more than just that one, chances are your data pages still have information on them and you can't clean them up.
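
    To see whether the LOB_DATA space actually came back after the drop and the CLEANTABLE, you can look at the page counts per allocation type directly (a sketch; the table name is a placeholder):

    -- In-row, LOB, and row-overflow page counts for the table (pages are 8 KB)
    SELECT SUM(in_row_used_page_count)       AS in_row_pages,
           SUM(lob_used_page_count)          AS lob_pages,
           SUM(row_overflow_used_page_count) AS row_overflow_pages,
           SUM(used_page_count) * 8 / 1024   AS used_mb
    FROM sys.dm_db_partition_stats
    WHERE object_id = OBJECT_ID('dbo.Mytable');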

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • I think this is a job for "Captain Obvious"! 😀

    How about creating a new table, populating the new table and dropping the old one?
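
    A bare-bones sketch of that approach, with made-up names, ignoring the indexes, constraints, and permissions that would have to be re-created on the new table:

    -- Copy everything except the dropped column into a fresh table
    SELECT col1, col2, col3          -- list only the columns you want to keep
    INTO   dbo.Mytable_new
    FROM   dbo.Mytable;

    -- Swap the tables once the copy has been verified
    DROP TABLE dbo.Mytable;
    EXEC sp_rename 'dbo.Mytable_new', 'Mytable';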

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a test, non-production environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
  • Paul,

    Repopulating a table isn't necessarily that easy, especially if it has foreign keys, indexes, constraints, etc. If it's a heap, then it's not so bad. But you're also forgetting the question of whether this table is in Production and whether the company in question is subject to SOX (or other laws that make it illegal to mess with the database using such a quick fix and no documentation).

    Yes, it's obvious. No, it's not a good idea.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • sqldba_icon (12/3/2010)


    Brandie, the column I have deleted is an ntext data type. Would DBCC CLEANTABLE still help? If not, how do I clear space? Please advise.

    Yes. DBCC CLEANTABLE works for the ntext data type.

    By the way, did the DBCC CLEANTABLE complete?

    If the table has a large number of rows, there is an option to specify the number of rows processed per transaction.

    For example:

    DBCC CLEANTABLE (Test_db, "dbo.Mytable", 1000)

    Please read the BOL for more information.

  • Brandie Tarvin (12/3/2010)


    Repopulating a table isn't necessarily that easy, especially if it has foreign keys, indexes, constraints, etc. If it's a heap, then it's not so bad. But you're also forgetting the question of whether this table is in Production and whether the company in question is subject to SOX (or other laws that make it illegal to mess with the database using such a quick fix and no documentation).

    Brandie,

    We do reorg very large tables with FKs, Indexes, Constraints, Triggers, etc., all the time - that's what we call "maintenance".

    Last but not least, please point me to the exact SOX paragraph where it says you cannot reorg a table - that's rubbish.

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a test, non-production environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
