January 1, 2019 at 4:20 pm
Eirikur Eiriksson - Tuesday, January 1, 2019 10:25 AM
Jeff Moden - Tuesday, January 1, 2019 9:26 AM
As I remind people, "Before you can think outside the box, you must first realize... you're in a box". 😀
I know how to spell the name of that box, starts with a SH and ends in IT
😎
BWAAA-HAAA!!!! You said a mouthful there! 😀
--Jeff Moden
Change is inevitable... Change for the better is not.
January 4, 2019 at 9:14 am
Great details. I got back 1 TB of space by changing to CCI indexes; eventually I will use it up with database growth.
Shrinking is not a good idea, as it may actually increase the size and cause more fragmentation.
I am going to leave the databases as they are; with data growth, the empty space within the database will eventually fill up.
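For reference, a conversion like the one described might look like this (the table and index names below are hypothetical; with DROP_EXISTING, the new columnstore index must use the existing clustered index's name):
-- Convert a rowstore clustered index to a clustered columnstore index (CCI).
-- DROP_EXISTING = ON swaps the new index in place of the old one, so the
-- name below must match the existing clustered index's name.
CREATE CLUSTERED COLUMNSTORE INDEX CX_FactSales
    ON dbo.FactSales
    WITH (DROP_EXISTING = ON);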
Thanks for the input so far.
January 4, 2019 at 9:32 am
TRACEY-320982 - Friday, January 4, 2019 9:14 AM
Great details. I got back 1 TB of space by changing to CCI indexes; eventually I will use it up with database growth.
Shrinking is not a good idea, as it may actually increase the size and cause more fragmentation.
I am going to leave the databases as they are; with data growth, the empty space within the database will eventually fill up.
Thanks for the input so far.
Heh... until someone tells you they need a copy of the database somewhere and they take exception to the 1TB of fluff that just prevented them from restoring your DB to a lesser environment. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
January 7, 2019 at 4:10 am
You may use this query to shrink a data file in small increments:
DECLARE @src BIGINT, @sql VARCHAR(250), @filename VARCHAR(25), @shrink BIGINT

SET @filename = 'BEL_CDR' -- change to your data file's logical name

-- Current file size in MB (size is stored in 8 KB pages, so divide by 128)
SELECT @shrink = size / 128.0 FROM sysfiles WHERE name = @filename

-- Free space in MB = allocated size minus space actually used
SELECT @src = size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT) / 128.0
FROM sysfiles WHERE name = @filename

SELECT @src

-- Shrink the file in 100 MB steps until less than 100 MB of free space remains
WHILE (@src >= 100 AND @shrink >= 0)
BEGIN
    SET @sql = 'DBCC SHRINKFILE (' + @filename + ', ' + CONVERT(VARCHAR(10), @shrink) + ')'
    PRINT @sql
    EXEC (@sql)

    -- Recalculate the current size and remaining free space
    SELECT @shrink = size / 128.0 FROM sysfiles WHERE name = @filename
    SELECT @src = size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT) / 128.0
    FROM sysfiles WHERE name = @filename

    SET @shrink = @shrink - 100
END
January 7, 2019 at 5:36 am
sundar329 - Monday, January 7, 2019 4:10 AM
You may use this query to shrink a data file in small increments: [script quoted in the previous post]
Just a recommendation...
sysfiles has been deprecated since 2005. Although that doesn't mean it's going away tomorrow, why build known technical debt into current scripts? The MS mapping indicates that sys.database_files should be used for this instead.
https://docs.microsoft.com/en-us/sql/relational-databases/system-tables/mapping-system-tables-to-system-views-transact-sql?view=sql-server-2017
In fact, if you look up the FILEPROPERTY function, the documentation explicitly mentions sys.database_files as one of the tables (a system view, really) that can be used with the function.
https://docs.microsoft.com/en-us/sql/t-sql/functions/fileproperty-transact-sql?view=sql-server-2017
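For example, the size and free-space lookups from the posted script could be rewritten like this (using the same 'BEL_CDR' file name as in the script):
-- Same size/free-space math as the script above, but against the
-- current catalog view instead of the deprecated sysfiles view
SELECT size / 128.0 AS SizeMB,
       size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT) / 128.0 AS FreeMB
FROM sys.database_files
WHERE name = 'BEL_CDR';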
This still doesn't solve the problem of "Index Inversion" and the resulting massive fragmentation, which produces fragments as small as 1 page (the worst type of fragmentation possible). Of course, the indexes will then need to be rebuilt, and that can leave unwanted "free space" that is frequently about 20% larger than the largest index that was rebuilt.
If that extra free space is undesirable, the post-shrink index rebuilds will need some extra (but temporary) free space elsewhere, so that the extra free space doesn't end up back in the MDF/NDF file (whichever it lives in). You can pull off such a thing using the following steps (a T-SQL sketch follows the list). It DOES take some extra time because the largest index actually needs to be rebuilt twice.
1. Identify a drive with enough free space to handle the largest index plus about 30% to be on the safe side.
2. Create a filegroup and single file on that drive.
3. Do a CREATE INDEX using the WITH DROP_EXISTING option to move the index to that new filegroup/file.
4. Rebuild all of the other indexes that need rebuilding in the original file starting with the smallest and continuing to the largest to take advantage of the extra free space that may be incrementally generated along the way. This will sometimes also make the larger indexes more contiguous with larger fragment sizes.
5. Repeat step 3 to move the index from the temporary filegroup back to the original one. This, of course, rebuilds that large index again, just as it did in step 3.
6. Drop the temporary file and then drop the temporary filegroup to release that space.
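Here's a minimal sketch of steps 2, 3, 5, and 6, assuming a hypothetical database MyDB, a hypothetical table dbo.BigTable whose clustered index CIX_BigTable is the largest index, and a T: drive with enough free space:
-- Step 2: create a temporary filegroup and a single file on the spare drive
ALTER DATABASE MyDB ADD FILEGROUP TempRebuildFG;
ALTER DATABASE MyDB ADD FILE
    (NAME = TempRebuildFile, FILENAME = 'T:\TempRebuild.ndf', SIZE = 50GB)
    TO FILEGROUP TempRebuildFG;

-- Step 3: rebuild the largest index onto the temporary filegroup;
-- DROP_EXISTING moves it there in a single operation
CREATE CLUSTERED INDEX CIX_BigTable
    ON dbo.BigTable (SomeKeyColumn)
    WITH (DROP_EXISTING = ON)
    ON TempRebuildFG;

-- (Step 4: rebuild the remaining indexes in the original file,
--  smallest to largest)

-- Step 5: move the index back, which rebuilds it again in the original filegroup
CREATE CLUSTERED INDEX CIX_BigTable
    ON dbo.BigTable (SomeKeyColumn)
    WITH (DROP_EXISTING = ON)
    ON [PRIMARY];

-- Step 6: drop the temporary file, then the temporary filegroup
ALTER DATABASE MyDB REMOVE FILE TempRebuildFile;
ALTER DATABASE MyDB REMOVE FILEGROUP TempRebuildFG;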
One option that I use to keep from blowing out the transaction log file and the related backup sizes during the above evolution is as follows (a sketch follows the list)...
1. Take a differential backup prior to starting, just to be safe and to minimize exposure to any problems.
2. Set the database to the BULK_LOGGED recovery model to allow "Minimal Logging" for all index creation/rebuilds, with the understanding that any log file backup taken during this evolution can't be restored to a point in time in the middle of that backup if it contains a "Minimally Logged" operation. You either restore to the beginning of one of those backups or you use the whole backup. You can further mitigate the risk by temporarily decreasing the time between log file backups to something less than 15 minutes (for example).
3. Once you've done that, you can go through the rebuild process outlined above.
4. Once that's complete, change the Recovery Model back to FULL and take a log file backup.
5. Don't forget to change your log file backup frequency back to whatever you consider to be "normal" (I keep mine at 30 minutes or less and some folks keep them at 15 minutes anyway).
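And a minimal sketch of those recovery model steps, again assuming a hypothetical database MyDB and hypothetical backup paths:
-- Step 1: differential backup before starting, just to be safe
BACKUP DATABASE MyDB
    TO DISK = 'X:\Backups\MyDB_PreRebuild.dif'
    WITH DIFFERENTIAL;

-- Step 2: switch to BULK_LOGGED so the index rebuilds can be minimally logged
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- (Step 3: run the rebuild process outlined above)

-- Step 4: switch back to FULL and take a log file backup to get the
-- point-in-time recovery chain fully back to normal
ALTER DATABASE MyDB SET RECOVERY FULL;
BACKUP LOG MyDB TO DISK = 'X:\Backups\MyDB_PostRebuild.trn';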
--Jeff Moden
Change is inevitable... Change for the better is not.