September 20, 2018 at 9:35 am
1. We have a table with about 12,000,000 records. When I ran a Disk Usage by Top Tables report yesterday, this table came back with the following figures:
Records: 12,953,449
Reserved (KB): 21,227,904
Data (KB): 19,471,664
Indexes (KB): 1,752,744
Unused (KB): 3,496
2. There are 3 non-clustered indexes on this table.
3. I accidentally deleted a column from this table. No problem, I did the following:
- recreated the NVARCHAR(MAX) column that I deleted
- restored a recent backup of this database to another temporary database
- ran an update query to repopulate the now-empty column from the restored database.
- All worked fine, column with data is there, just like it was before.
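In case it helps, the update was along these lines (the table, column, and database names here are placeholders, not the real schema):

```sql
-- Placeholder names: MyTable, BigColumn, Id, RestoredDB stand in for the real objects.
UPDATE p
SET p.BigColumn = r.BigColumn
FROM dbo.MyTable AS p
INNER JOIN RestoredDB.dbo.MyTable AS r
    ON p.Id = r.Id;  -- join on the table's key
```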
4. Problem - the database got much larger. When I run the Disk Usage by Top Tables report, this same table now shows:
Records: 12,953,449
Reserved (KB): 37,771,552
Data (KB): 36,011,968
Indexes (KB): 1,754,152
Unused (KB): 5,432
5. So my questions are:
- How did the Data (KB) figure go up by about 16 GB on this table?
- Is there anything I can do about it?
- Would it have been better to recover the entire table from the restored database into my 'production' database? If yes, what's the best method?
Thanks.
September 20, 2018 at 12:03 pm
When you deleted the column, the contents already on disk were not deleted. New rows will go in without the column, but the old data stays on disk until you rebuild the index or run DBCC CLEANTABLE (https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-cleantable-transact-sql?view=sql-server-2016).
When you added the column and restored the data, you didn't replace what you had deleted - you added an additional column to the table and filled it with data. It's loosely the same effect as if you had added the new column and populated it before you dropped the original one. You can get your space back with an index rebuild or CLEANTABLE, with the caveat that you will need free space for the rebuilt table during processing.
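A minimal sketch of the two options (the index, table, and database names are illustrative; substitute your own):

```sql
-- Option 1: rebuild the clustered index; the dropped column's space is reclaimed
-- as part of the rebuild (needs free space for the rebuilt copy)
ALTER INDEX PK_MyTable ON dbo.MyTable REBUILD;

-- Option 2: reclaim space from dropped variable-length columns without a full rebuild,
-- processing 100,000 rows per transaction
DBCC CLEANTABLE ('MyDatabase', 'dbo.MyTable', 100000);
```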
-Eddie
Eddie Wuerch
MCM: SQL
September 20, 2018 at 12:23 pm
Note that while DBCC SHRINKDATABASE is typically not recommended, it is a good idea to take a look at index fragmentation after the fact and rebuild/reorganize as necessary. I would probably just turn Ola Hallengren's index maintenance scripts loose on the database after a shrink. If your prod database is in the simple recovery model, I would recommend going to bulk-logged or full recovery and then implementing regular log backups. This will keep log file growth down (assuming your schedule is pretty tight, like log backups every 1 minute).
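A rough sketch of that recovery-model change (the database name and backup paths are placeholders):

```sql
-- Switch to full recovery; the log backup chain only starts after the next full backup
ALTER DATABASE MyDatabase SET RECOVERY FULL;
BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase.bak';

-- Then schedule log backups on a tight interval to keep the log file small
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';
```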
September 20, 2018 at 12:25 pm
thalsell - Thursday, September 20, 2018 12:23 PM
Note while shrinkdb is typically not recommended, it is recommended to take a look at index fragmentation after the fact and rebuild / reorg as necessary. I would probably just turn ola's index scripts loose on the db after a shrink. If your prod db is in simple recovery i would recommend going bulk logged or full recovery model and then implement regular log backups. This will keep log file growth down. (assuming your schedule is pretty tight, like log backups every 1 minute)
The OP shrank the log file, which will not affect the index fragmentation.
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
September 20, 2018 at 12:41 pm
Michael L John - Thursday, September 20, 2018 12:25 PM
thalsell - Thursday, September 20, 2018 12:23 PM
Note while shrinkdb is typically not recommended, it is recommended to take a look at index fragmentation after the fact and rebuild / reorg as necessary. I would probably just turn ola's index scripts loose on the db after a shrink. If your prod db is in simple recovery i would recommend going bulk logged or full recovery model and then implement regular log backups. This will keep log file growth down. (assuming your schedule is pretty tight, like log backups every 1 minute)
The OP shrunk the log file, which will not affect the index fragmentation
True.
Allow me to clarify, and preface my original reply with "You could shrink the data file to claim back some disk space, but I wouldn't recommend it. But if you are considering it...".
🙂
September 20, 2018 at 12:50 pm
thalsell - Thursday, September 20, 2018 12:41 PM
Michael L John - Thursday, September 20, 2018 12:25 PM
thalsell - Thursday, September 20, 2018 12:23 PM
Note while shrinkdb is typically not recommended, it is recommended to take a look at index fragmentation after the fact and rebuild / reorg as necessary. I would probably just turn ola's index scripts loose on the db after a shrink. If your prod db is in simple recovery i would recommend going bulk logged or full recovery model and then implement regular log backups. This will keep log file growth down. (assuming your schedule is pretty tight, like log backups every 1 minute)
The OP shrunk the log file, which will not affect the index fragmentation
True.
Allow me to clarify, and preface my original reply with "You could shrink the data file to claim back some disk space, but I wouldn't recommend it. But if you are considering it...".
🙂
Take the help of default trace
September 20, 2018 at 12:55 pm
thalsell - Thursday, September 20, 2018 12:41 PM
Michael L John - Thursday, September 20, 2018 12:25 PM
thalsell - Thursday, September 20, 2018 12:23 PM
Note while shrinkdb is typically not recommended, it is recommended to take a look at index fragmentation after the fact and rebuild / reorg as necessary. I would probably just turn ola's index scripts loose on the db after a shrink. If your prod db is in simple recovery i would recommend going bulk logged or full recovery model and then implement regular log backups. This will keep log file growth down. (assuming your schedule is pretty tight, like log backups every 1 minute)
The OP shrunk the log file, which will not affect the index fragmentation
True.
Allow me to clarify, and preface my original reply with "You could shrink the data file to claim back some disk space, but I wouldn't recommend it. But if you are considering it...".
🙂
Your original post, while technically correct, is a blanket recommendation that may or may not apply to the situation.
As an example, if the database is a data warehouse with a daily ETL process to populate it, simple recovery is perfectly acceptable. There is no need for point-in-time recovery. It's far simpler to rebuild the database.
"Turning Ola's scripts loose" may also break things. I have not reindexed for over a year. Performance is actually better.
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
September 20, 2018 at 2:35 pm
Eddie Wuerch - Thursday, September 20, 2018 12:03 PMWhen you deleted the column, the contents already on disk were not deleted. New rows will go in without the column, but the old data stays on disk until you rebuild the index or run DBCC CLEANTABLE (https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-cleantable-transact-sql?view=sql-server-2016).When you added the column and restored the data, you didn't replace what you had deleted - you added an additional column to the table and filled the column with data. It's loosely the same effect as if you added the column and filled it with data before you dropped the original column. You can get your space back with an index rebuild or CLEANTABLE, with the same caveats that you will need free space for the rebuilt table during processing.
-Eddie
Thanks, that makes sense. Now I'll go try to make sense of the other replies. Fortunately, I can restore backups of the production database into test databases and try some things out.