October 23, 2011 at 11:56 am
Are there any alternatives to shrinkfile? We have a database that was initially sized too big, and we need the disk space back. I have been running shrinkfile all weekend with no results.
I was thinking I could back up the database, drop the original, create a new db with the same name and new size, and then restore from the original backup.
I plan on researching this some today, but I'm busy right now and thought I would throw out a quick line in case someone could help me until I get time to focus on this.
So if anyone knows any shrinkfile alternatives, or why a shrinkfile would take so long to run, it would help save me time later today.
Thanks in advance.
October 23, 2011 at 12:29 pm
nawillia (10/23/2011)
I was thinking I could back up the database, drop the original, create a new db with the same name and new size, and then restore from the original backup.
When you restore a backup over an existing database, the existing database is discarded and the database from the backup is put in its place, exactly, 100% as it was at the time of backup.
So if anyone knows any shrinkfile alts, or why a shrinkfile would take so long to run, it would help save me time later today.
Alternatives to shrinkfile? Not really. Why it takes so long: lots of possible reasons, LOB columns being among the most likely.
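For anyone hitting the same symptom, a quick sketch of how to check whether LOB columns are in play, run against the current database (the type list here is an assumption covering the common LOB types):

```sql
-- Sketch: list LOB columns in the current database (common LOB types assumed)
SELECT OBJECT_SCHEMA_NAME(c.object_id) AS schema_name,
       OBJECT_NAME(c.object_id)        AS table_name,
       c.name                          AS column_name,
       t.name                          AS type_name
FROM sys.columns c
JOIN sys.types t
  ON c.system_type_id = t.system_type_id
 AND t.user_type_id = t.system_type_id  -- base types only
WHERE t.name IN ('text', 'ntext', 'image', 'xml')
   OR (t.name IN ('varchar', 'nvarchar', 'varbinary') AND c.max_length = -1);
```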
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 23, 2011 at 4:26 pm
Thanks Gail.
That's what I thought about restoring a backup -- that it would replace the new db with all its settings -- but I wasn't 100% sure, and that's what I just got around to researching.
thanks again.
October 23, 2011 at 8:59 pm
You can disable non-clustered indexes, which frees up space, reducing the amount of data that needs to be moved by the shrinkfile.
You'll almost certainly need to rebuild your indexes after a shrinkfile anyway.
If you wanted to get really fancy, you could use DBCC IND to identify the indexes that are toward the end of the file and drop them. (Hmm, I may put together a script to do this tomorrow.)
I've gotten a tenfold increase in shrinkfile performance by dropping indexes first.
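As a rough sketch of that disable/shrink/rebuild workflow, with hypothetical index, table, and file names:

```sql
-- Hypothetical names throughout; adjust to your own indexes and file id
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders DISABLE;  -- frees the index's pages

DBCC SHRINKFILE (3, 7000);  -- shrink file id 3 down to 7000 MB

-- REBUILD brings the disabled index back (required before it can be used again)
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;
```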
Good luck!
October 24, 2011 at 3:18 am
SpringTownDBA (10/23/2011)
You can disable non-clustered indexes, which frees up space, reducing the amount of data that needs to be moved by the shrinkfile. You'll almost certainly need to rebuild your indexes after a shrinkfile anyway.
If you do that, just make sure that you don't shrink below the space the file was using before the indexes were dropped, or rebuilding them will require an autogrow.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 24, 2011 at 4:31 am
Hi,
Which file is too big, the data file or the log file?
If you take a backup of the db, it will contain only the used extents. But when you restore that backup, it will require as much space as the actual database size.
If the log file is too big and you don't need the log, you can use any one of the following methods:
1. Change the recovery model to Simple, or
2. BACKUP LOG dbname WITH TRUNCATE_ONLY [this is not supported in a SQL Server 2008 environment], or
3. Change the db into single_user then back into multi-user, detach the database, remove the log file, then attach the database without the log file. It will create a new log file automatically with minimum size.
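A sketch of method 1 only, with a hypothetical database and log file name (see the replies below before considering methods 2 or 3):

```sql
-- Hypothetical names; SIMPLE recovery lets log space be reused, after which
-- the log file can be shrunk
USE SalesDB;
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;
DBCC SHRINKFILE (SalesDB_log, 1024);  -- shrink the log file to 1 GB
```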
Hope this helps.
Regards
Balaji G
October 24, 2011 at 4:43 am
balaji.ganga 68339 (10/24/2011)
3. Change the db into single_user then back into multi-user, detach the database, remove the log file, then attach the database without the log file. It will create a new log file automatically with minimum size.
No, no, no!!!! Never, never, never delete the log. It's not an optional piece of the database, and SQL cannot always simply recreate it. I've seen several cases of major data loss, even complete loss of the entire database, after a log file was deleted.
That's about the worst log mismanagement imaginable and almost the most dangerous 'advice' you could possibly give.
P.S. Truncating the log is also a very bad idea because of what it does to the log chain.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 24, 2011 at 7:21 am
balaji.ganga 68339 (10/24/2011)
Hi, which file is too big, the data file or the log file?
Regards
Balaji G
It's the data file. All of the DBs I'm working with are in simple recovery mode.
I found out that none of the tables have a clustered index (103 GB of data, 65 GB of non-clustered indexes), so I think that is probably why this db will not shrink fast.
In any case, the database was supposed to be temporary, per the developer using it, but the client keeps drawing out the usage.
So my recommendation will be to move it off the server ASAP (we did not plan for its usage on the server and it's taking space from the other databases). When it's moved, the tables need a clustered index.
If we could shrink it down to 5% free space, we would save about 70 GB of disk space.
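For what it's worth, a sketch of how to spot heaps (tables with no clustered index) and their approximate sizes; the aliases and the simplified allocation-unit join are my own:

```sql
-- Sketch: heaps in the current database with approximate row counts and size
SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(p.rows) AS row_count,
       SUM(au.total_pages) / 128 AS size_mb  -- 128 pages = 1 MB
FROM sys.tables t
JOIN sys.schemas s ON t.schema_id = s.schema_id
JOIN sys.partitions p ON p.object_id = t.object_id
                     AND p.index_id = 0       -- index_id 0 = heap
JOIN sys.allocation_units au ON au.container_id = p.partition_id
GROUP BY s.name, t.name
ORDER BY size_mb DESC;
```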
October 24, 2011 at 9:07 am
Aha moment -- so I didn't realize that:
"TRUNCATEONLY
Releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent.
target_size is ignored if specified with TRUNCATEONLY.
TRUNCATEONLY is applicable only to data files.
"
OR
"5. Optionally, select the Release unused space check box.
Selecting this option causes any unused space in the file to be released to the operating system and shrinks the file to the last allocated extent. This reduces the file size without moving any data.
"
So even though my shrinkfile command never completed, I was able to get some disk space back from the work that had been completed.
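For reference, the T-SQL equivalent of that checkbox is a one-liner (the file id here is hypothetical):

```sql
-- Releases only the trailing free space; no pages are moved, so it returns quickly
DBCC SHRINKFILE (3, TRUNCATEONLY);  -- file id 3 is hypothetical
```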
October 24, 2011 at 4:15 pm
Here's a script to identify the affected indexes and estimate how many MB would be moved by a shrinkfile operation. It can help you identify non-clustered indexes that could be dropped to speed up the shrinkfile.
-----------------------------------------------------------------------------------
-- Script to identify tables/indexes that will be affected by a shrinkfile.
--
-- This script calls "DBCC IND" against all indexes/heaps in the same filegroup as the target file,
-- identifying how many pages/mb from each would be moved by a shrinkfile.
--
--
-- Parameters:
-- @target_file_id -- file_id from sys.database_files corresponding to the file to be shrunk.
-- @target_file_size_in_MB -- the target size in MB of the file
--
-- Returns 2 Resultsets:
--1. Summary information including the amount of data that would be moved
--2. Detail information per affected index including pages/mb total, and
-- pages/mb of non-clustered-indexes
--
------------------------------------------------------------------------------------
SET NOCOUNT ON
DECLARE @target_file_id INT
DECLARE @target_file_size_in_MB INT
--Parameters, shrink file 3 to 7 GB.
SET @target_file_id = 3
SET @target_file_size_in_MB = 7000
--Temp table to hold output of DBCC IND
IF OBJECT_ID('tempdb..#IndexPages', 'table') IS NOT NULL
DROP TABLE #IndexPages
CREATE TABLE #IndexPages
(
[PageFID] [tinyint] NOT NULL
, [PagePID] [int] NOT NULL
, [IAMFID] [tinyint] NULL
, [IAMPID] [int] NULL
, [ObjectID] [int] NULL
, [IndexID] [int] NULL
, [PartitionNumber] [tinyint] NULL
, [PartitionID] [bigint] NULL
, [iam_chain_type] [varchar](30) NULL
, [PageType] [tinyint] NULL
, [IndexLevel] [tinyint] NULL
, [NextPageFID] [tinyint] NULL
, [NextPagePID] [int] NULL
, [PrevPageFID] [tinyint] NULL
, [PrevPagePID] [int] NULL
, CONSTRAINT [IndexPages_PK2] PRIMARY KEY CLUSTERED ( [PageFID] ASC, [PagePID] ASC )
)
--Use cursor to iterate over all indexes, populating table.
DECLARE curDbcc CURSOR STATIC FORWARD_ONLY
FOR
SELECT DISTINCT -- distinct required for partitioned data
'INSERT INTO #IndexPages
EXEC ( ' + QUOTENAME(+'DBCC IND ([' + DB_NAME() + '], ''[' + s.NAME + '].['
+ t.name + ']'', ' + CAST(index_id AS VARCHAR(10)) + ')', '''') + ')' dbcc_ind_sql
FROM
sys.schemas s
INNER JOIN sys.tables t ON s.schema_id = t.schema_id
INNER JOIN sys.indexes i ON t.object_id = i.object_id
INNER JOIN --Only scan indexes on the same filegroup as the @target_file_id
( SELECT
partition_scheme_id source_data_space_id
, data_space_id destination_data_space_id
FROM
sys.destination_data_spaces
UNION
SELECT
data_space_id
, data_space_id
FROM
sys.data_spaces
WHERE
type = 'FG'
) ds ON i.data_space_id = ds.source_data_space_id
INNER JOIN sys.database_files df ON df.file_id = @target_file_id
AND df.data_space_id = ds.destination_data_space_id
DECLARE @dbcc_sql NVARCHAR(MAX)
OPEN curDbcc
FETCH NEXT FROM curDbcc INTO @dbcc_sql
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC sp_executesql @dbcc_sql
FETCH NEXT FROM curDbcc INTO @dbcc_sql
END
CLOSE curDbcc
DEALLOCATE curDbcc
DECLARE @starting_page_to_clear INT
SET @starting_page_to_clear = @target_file_size_in_MB * 1024 / 8
SELECT
size file_size_in_pages
, size / 128 file_size_in_mb
, @starting_page_to_clear target_file_size_in_pages
, @target_file_size_in_MB target_file_size_in_mb
, ( size - @starting_page_to_clear ) pages_to_shrink
, ( size - @starting_page_to_clear ) / 128 mb_to_shrink
, ( SELECT
COUNT(*)
FROM
#IndexPages
WHERE
PageFID = @target_file_id
AND PagePID >= @starting_page_to_clear
) pages_to_move
, ( SELECT
COUNT(*)
FROM
#IndexPages
WHERE
PageFID = @target_file_id
AND PagePID >= @starting_page_to_clear
) / 128 mb_to_move
FROM
sys.database_files
WHERE
file_id = @target_file_id
SELECT
s.NAME schema_name
, t.NAME table_name
, i.NAME index_name
, COUNT(*) total_pages_to_move
, SUM(CASE WHEN index_id > 1 THEN 1
ELSE 0
END) non_clus_index_pages_to_move
, COUNT(*) / 128.0 total_mb_to_move
, SUM(CASE WHEN index_id > 1 THEN 1
ELSE 0
END) / 128.0 non_clus_index_mb_to_move
FROM
sys.schemas s
INNER JOIN sys.tables t ON s.schema_id = t.schema_id
INNER JOIN sys.indexes i ON t.object_id = i.object_id
INNER JOIN #IndexPages ip ON ip.ObjectID = t.object_id
AND ip.IndexID = i.index_id
WHERE
PageFID = @target_file_id
AND PagePID >= @starting_page_to_clear
GROUP BY
s.NAME
, t.NAME
, i.name
HAVING
COUNT(*) > 0
ORDER BY
SUM(CASE WHEN index_id > 1 THEN 1
ELSE 0
END) DESC