October 1, 2020 at 10:26 am
Hello,
I have a question about some maintenance.
I need to shrink a database that has 7 TB allocated by about 2 TB. We deleted a lot of data and set up automatic purging, so it should not grow back drastically now, and we want to have more free space on the disks.
My idea was to compress the data first and then do the shrink.
Or could the shrink be slower because of decompression?
Has anybody tested this, or does anyone have another idea?
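To make it concrete, here is a rough sketch of what I have in mind; the database, file, and table names (MyDb, MyDb_Data, dbo.BigTable) are placeholders for our real ones:

-- Step 1: compress the largest tables (page compression usually frees the most space).
ALTER TABLE dbo.BigTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Step 2: once the big tables are compressed, release the freed space back to the OS.
USE MyDb;
DBCC SHRINKFILE (MyDb_Data, 5242880);  -- target size in MB (~5 TB)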
Thanks
October 1, 2020 at 4:23 pm
Do you use secondary filegroups? If you do, I'd create a new filegroup and files and compress the data into that new filegroup. Then you can shrink the original filegroups and reorg / rebuild the tables there as needed without affecting the huge, newly compressed tables, which will save enormous amounts of time and effort.
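A rough sketch of that approach; MyDb, FG_COMPRESSED, the file path, and the table/index names are all placeholders:

-- New filegroup and file to hold the compressed copies of the big tables.
ALTER DATABASE MyDb ADD FILEGROUP FG_COMPRESSED;

ALTER DATABASE MyDb ADD FILE
(
    NAME = MyDb_Compressed1,
    FILENAME = 'E:\SQLData\MyDb_Compressed1.ndf',
    SIZE = 500GB,
    FILEGROWTH = 10GB
) TO FILEGROUP FG_COMPRESSED;

-- Rebuilding the clustered index with DROP_EXISTING moves the table into the
-- new filegroup and applies page compression in a single operation.
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
    ON dbo.BigTable (Id)
    WITH (DATA_COMPRESSION = PAGE, DROP_EXISTING = ON)
    ON FG_COMPRESSED;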
October 1, 2020 at 6:44 pm
Scott has a good idea. Compression reads the data and then rewrites it, so that's a load on the server. Shrink just moves pages from the end of the file to the middle and then reduces the allocation from the OS. I'd compress and then shrink, since you'd have less data to actually move.
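If you want to gauge the payoff per table before committing to the rebuilds, sp_estimate_data_compression_savings samples the data and reports current vs. estimated compressed sizes; the schema and table names below are placeholders:

-- Estimate how much space PAGE compression would save for one table.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'BigTable',
    @index_id         = NULL,    -- NULL = all indexes on the table
    @partition_number = NULL,    -- NULL = all partitions
    @data_compression = 'PAGE';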