October 16, 2012 at 5:32 pm
We recently upgraded from SQL Standard to SQL Enterprise. We run a maintenance job that defragments, reindexes, and runs CLEANTABLE against our largest tables, then runs a shrinkfile on the logs to clean up after each one. Our database grew from 74 GB to 95 GB. We ran the task to shrink the data file and release the unused space, and this freed up only 2 GB. Would upgrading from Standard to Enterprise cause the increase in data size, or do we need to tweak the settings in the maintenance job? DBREINDEX is set to 100, CLEANTABLE is set to 0, and shrinkfile is set to 0 with TRUNCATEONLY. Any assistance is greatly appreciated.
October 16, 2012 at 6:39 pm
I'm not sure what caused your data file to grow, or why you were unable to shrink it by more than 2 GB. I wouldn't worry about it unless I really REALLY needed that disk space for something else and was absolutely sure that I'd never need it for this database.
Now, there is a time and a place for everything, but shrinking can be bad for your SQL Server's health and is usually better left undone, except on an ad hoc basis where it's either absolutely necessary or you know for sure you have a lot more file space than you'll need in the foreseeable future. Shrinking as part of a regularly scheduled maintenance job is usually a good way to cause more problems than you fix: it adds extra overhead and consumes system resources, and your file system will gradually get more and more fragmented when you're constantly shrinking a little here and growing a little there.
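If you do decide a one-off shrink is warranted, TRUNCATEONLY is the least harmful option, because it only releases free space at the end of the file and never moves pages, so it cannot fragment your indexes. A minimal sketch, using placeholder database and file names:

```sql
-- One-off shrink: release only the free space at the end of the file.
-- TRUNCATEONLY does not move pages, so it cannot fragment indexes.
USE MyDatabase;  -- placeholder database name
GO
DBCC SHRINKFILE (MyDatabase_Data, TRUNCATEONLY);  -- placeholder logical file name
GO

-- Check the result: size and free space per file (values are in 8 KB pages,
-- so divide by 128.0 to get MB).
SELECT name,
       size / 128.0 AS SizeMB,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS FreeSpaceMB
FROM sys.database_files;
```

If TRUNCATEONLY frees nothing, the free space is scattered inside the file rather than sitting at the end, which is exactly the case where a full shrink would have to move pages and fragment your indexes.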
Vegard Hagen
Norwegian DBA, occasional blogger and generally a nice guy who believes the world is big enough for all of us.
@vegard_hagen on Twitter
Blog: Vegards corner (No actual SQL stuff here - haven't found my niche yet. Maybe some day...)
November 9, 2012 at 10:30 am
Is there a way, without shrinking the database, to get back the space when we delete big files in our database?
November 9, 2012 at 11:38 am
It's not wise to run shrink operations on production DBs.
I have a script (not on this laptop) that shows the size of all the tables in the DB.
If you have a backup of the previous DB, restore it elsewhere, then run the script on both DBs; it will tell you which tables have grown. You should be able to find such scripts on the net.
SQL Compare (Redgate) will show the row mismatch. You may be able to get an eval version for free.
November 9, 2012 at 2:29 pm
Bobby Glover (11/9/2012)
...SQL COMPARE(REDGATE) will show the row mismatch.You maybe able to get an EVAL version for free.
If you need to compare the data, SQL Compare is a good choice (note: this site is run by Redgate, but I don't work for them, they just make good stuff), and it has a 14-day trial.
For performance issues, see how we like them posted here: How to Post Performance Problems - Gail Shaw
Need to split some strings? Jeff Moden's DelimitedSplit8K
Jeff Moden's Cross Tabs and Pivots Part 1
Jeff Moden's Cross Tabs and Pivots Part 2
November 9, 2012 at 4:32 pm
What are the results when you run the script on the link below in your database?
Script to analyze table space usage:
November 9, 2012 at 4:33 pm
let me check
November 9, 2012 at 4:51 pm
I get this:
FileSizeMB    UsedSpaceMB    UnusedSpaceMB
0.75          0.29           0.46
83198.00      74874.75       8323.25
83198.75      74875.04       8323.71
November 9, 2012 at 7:24 pm
dsanchez-630020 (11/9/2012)
I get this:
FileSizeMB    UsedSpaceMB    UnusedSpaceMB
0.75          0.29           0.46
83198.00      74874.75       8323.25
83198.75      74875.04       8323.71
Why did you remove the Type and DBFileName columns from the query?
Why did you leave out the table size query result set?
November 10, 2012 at 2:12 am
Try this
-- Table and row count information
SELECT OBJECT_NAME(ps.[object_id]) AS [TableName],
i.name AS [IndexName], SUM(ps.row_count) AS [RowCount]
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.indexes AS i
ON i.[object_id] = ps.[object_id]
AND i.index_id = ps.index_id
WHERE i.type_desc IN ('CLUSTERED','HEAP')
AND i.[object_id] > 100
AND OBJECT_SCHEMA_NAME(ps.[object_id]) <> 'sys'
GROUP BY ps.[object_id], i.name
ORDER BY SUM(ps.row_count) DESC;
November 10, 2012 at 4:14 am
If I were you, I would first look at the Standard Reports > Disk Usage report.
If index usage is high, I would look at the index fill factor: if it is low and the table does not see many inserts, updates, or deletes, I would increase that table's fill factor.
On the other hand, I would suggest table compression: if you have enough CPU resources, you can compress tables that have mostly read activity.
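A rough sketch of that compression suggestion, assuming a hypothetical dbo.BigTable (page compression requires Enterprise Edition on SQL Server of this era, which the original poster now has):

```sql
-- Estimate the savings first -- compression isn't worth the CPU cost
-- on every table.
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'BigTable',      -- placeholder table name
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- If the estimate looks good, rebuild the table with page compression.
ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Note that the rebuild itself needs working space, so run the estimate and pick your quiet window first.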
November 10, 2012 at 6:45 am
Try this for table and index usage info. If updates have been high since the last SQL Server restart, then the fill factor should be set low.
USE databasename;
GO
SELECT
    OBJECT_NAME(s.[object_id]) AS [object name],
    i.[name] AS [index name],
    s.user_seeks,
    s.user_scans,
    s.user_lookups,
    s.user_updates
FROM sys.dm_db_index_usage_stats AS s
INNER JOIN sys.indexes AS i
    ON i.[object_id] = s.[object_id]
    AND i.index_id = s.index_id
WHERE OBJECTPROPERTY(s.[object_id], 'IsUserTable') = 1;
Not my script; I found it on the net. I'm not on my laptop.
November 12, 2012 at 2:52 am
dsanchez-630020 (11/9/2012)
Is there a way, without shrinking the database, to get back the space when we delete big files in our database?
Did you try rebuilding your indexes?
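A minimal sketch of such a rebuild, with a placeholder table name (note that a rebuild needs free space to build the new copy of the index before dropping the old one):

```sql
-- Rebuild every index on one table. This compacts partially-filled pages
-- and returns the reclaimed space to the data file's internal free pool;
-- it does not, by itself, shrink the physical file on disk.
ALTER INDEX ALL ON dbo.BigTable    -- placeholder table name
REBUILD WITH (FILLFACTOR = 100);
```

After a large delete this often consolidates the leftover half-empty pages, so the free space at least becomes reusable by new data even if the file size stays the same.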
-----------------------------------------------------------------------------
संकेत कोकणे
November 12, 2012 at 4:53 am
Hi,
If space is an issue, you can archive data from your biggest tables and make it accessible via a view.
When you archive data by year into separate DBs, you can take the older DBs offline.
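A minimal sketch of that archive-plus-view idea, assuming hypothetical database, table, and view names:

```sql
-- Current data stays in MyDB; each archived year lives in its own database.
-- The view presents both under a single name.
CREATE VIEW dbo.OrdersAll
AS
SELECT * FROM MyDB.dbo.Orders            -- current data
UNION ALL
SELECT * FROM ArchiveDB2011.dbo.Orders;  -- archived year
```

One caveat: if an archive database is offline, any query touching the view will fail, so this only suits archives you genuinely never query day to day, or where you can tolerate bringing a DB back online on demand.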
Here's the code I used to check the size of my tables and DBs.
DECLARE cTbl CURSOR FOR
    SELECT Name FROM sys.tables WHERE is_ms_shipped = 0 AND [schema_id] = 1
DECLARE
    @sql Varchar(MAX),
    @Tbl NVarchar(128)
CREATE TABLE #Space
(
    Name Varchar(128),
    Rows BigInt,
    Res Varchar(20),
    Data Varchar(20),
    Idx Varchar(20),
    Unused Varchar(20)
)
OPEN cTbl
FETCH NEXT FROM cTbl INTO @Tbl
WHILE @@FETCH_STATUS = 0
BEGIN
    -- sp_spaceused returns: name, rows, reserved, data, index_size, unused
    SET @sql = 'INSERT INTO #Space EXEC sp_SpaceUsed [' + @Tbl + ']'
    EXEC (@sql)
    FETCH NEXT FROM cTbl INTO @Tbl
END
CLOSE cTbl
DEALLOCATE cTbl
-- sp_spaceused reports sizes as strings like '123 KB'; strip the ' KB'
-- suffix, then divide by 1024 to convert KB to MB.
SELECT Name [Table Name],
    Rows,
    CAST(LEFT(Res, LEN(Res) - 3) AS Int) / 1024 ResMB,
    CAST(LEFT(Data, LEN(Data) - 3) AS Int) / 1024 DataMB,
    CAST(LEFT(Idx, LEN(Idx) - 3) AS Int) / 1024 IdxMB,
    CAST(LEFT(Unused, LEN(Unused) - 3) AS Int) / 1024 UnusedMB
INTO #SpaceMB
FROM #Space
SELECT * FROM #SpaceMB WHERE Rows > 0 ORDER BY Rows DESC
SELECT 'Total MB',
    SUM(Rows) Rows,
    SUM(ResMB) ResMB,
    SUM(DataMB) DataMB,
    SUM(IdxMB) IdxMB,
    SUM(UnusedMB) UnusedMB
FROM #SpaceMB
SELECT 'Total GB',
    SUM(Rows) Rows,
    SUM(ResMB) / 1024 ResGB,
    SUM(DataMB) / 1024 DataGB,
    SUM(IdxMB) / 1024 IdxGB,
    SUM(UnusedMB) / 1024 UnusedGB
FROM #SpaceMB
DROP TABLE #Space
DROP TABLE #SpaceMB
Hope this is useful.
November 13, 2012 at 8:09 am
dsanchez-630020 (11/9/2012)
Is there a way, without shrinking the database, to get back the space when we delete big files in our database?
Paul Randal's Shrink Alternative:
So what if you *do* need to run a shrink? For instance, if you've deleted a large proportion of a very large database and the database isn't likely to grow, or you need to empty a file before removing it?
The method I like to recommend is as follows:
• Create a new filegroup
• Move all affected tables and indexes into the new filegroup using the CREATE INDEX ... WITH (DROP_EXISTING) ON <filegroup> syntax, to move the tables and remove fragmentation from them at the same time
• Drop the old filegroup that you were going to shrink anyway (or shrink it way down if it's the primary filegroup)
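A minimal sketch of those steps, with placeholder database, filegroup, table, and path names:

```sql
-- 1. Create the new filegroup and give it a file.
ALTER DATABASE MyDB ADD FILEGROUP FG_New;
ALTER DATABASE MyDB ADD FILE
    (NAME = MyDB_New, FILENAME = 'D:\Data\MyDB_New.ndf')  -- placeholder path
    TO FILEGROUP FG_New;

-- 2. Rebuild the clustered index onto the new filegroup. DROP_EXISTING
--    moves the table's data and removes fragmentation in one pass.
CREATE UNIQUE CLUSTERED INDEX PK_BigTable   -- must match the existing index definition
ON dbo.BigTable (Id)
WITH (DROP_EXISTING = ON)
ON FG_New;

-- 3. Once every table and index is off the old filegroup, empty and drop it.
```

Repeat step 2 for each table's clustered index (and nonclustered indexes) on the old filegroup; heaps without a clustered index need a different approach, such as creating and then dropping one.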