October 15, 2009 at 6:53 am
Hi there, I have been having 'fun' with my msdb database. It was over 1Gb. After some digging the problem boiled down to the sysmaintplan_logdetail table. I deleted a load of useless log info but it still stayed at 1Gb. So, I ran a script to determine fragmentation and it says:
totalPages    120019
usedPages     120009
dataPages     1
totalSpaceMB  937
usedSpaceMB   937
dataSpaceMB   0
So, am I right in thinking there is no data left, but the table is somehow fragmented beyond belief and is taking up a lot of empty disk space?
I have run out of Google power, can anyone help? 😮
Thanks
Will
October 15, 2009 at 7:17 am
Is this similar to what you ran?
SELECT OBJECT_NAME(object_id),* FROM sys.dm_db_index_physical_stats
(db_id('msdb'),null,null,null,'detailed')
You say it is 1 GB in size; what size is the mdf file and what size is the ldf file?
Try selecting the msdb database, run DBCC LOGINFO, and post what it returns.
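Something like this, for example:

USE msdb;
GO
-- Lists the virtual log files (VLFs) that make up the transaction log
DBCC LOGINFO;
GO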
--------------------------------------------------------------------------------------
Recommended articles on how to help us help you and solve commonly asked questions:
Forum Etiquette: How to post data/code on a forum to get the best help by Jeff Moden
Managing Transaction Logs by Gail Shaw
How to post Performance problems by Gail Shaw
Help, my database is corrupt. Now what? by Gail Shaw
October 15, 2009 at 7:26 am
Sorry, msdbdata.mdf = 1,012,608 KB and msdblog.ldf = 10,752 KB.
Results from dbcc loginfo:
FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
2       253952    8192         68702   0       128     0
2       262144    262144       68703   0       128     0
2       2621440   524288       68704   0       64      68703000000026400001
2       2621440   3145728      68705   2       64      68703000000026400001
2       2621440   5767168      0       0       0       68703000000026400001
2       2621440   8388608      0       0       0       68703000000026400001
Hope this makes sense to you...!
October 15, 2009 at 7:27 am
My original diagnostic query:
SELECT object_name(i.object_id) as objectName,
i.[name] as indexName,
sum(a.total_pages) as totalPages,
sum(a.used_pages) as usedPages,
sum(a.data_pages) as dataPages,
(sum(a.total_pages) * 8) / 1024 as totalSpaceMB,
(sum(a.used_pages) * 8) / 1024 as usedSpaceMB,
(sum(a.data_pages) * 8) / 1024 as dataSpaceMB
FROM sys.indexes i
INNER JOIN sys.partitions p
ON i.object_id = p.object_id
AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a
ON p.partition_id = a.container_id
GROUP BY i.object_id, i.index_id, i.[name]
ORDER BY sum(a.total_pages) DESC, object_name(i.object_id)
October 15, 2009 at 8:54 am
If you are concerned about the physical file size of the msdb database, the last resort is to shrink it and then rebuild the indexes afterwards.
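A rough sketch of that (not something to run blindly on a busy server; dbo.backupset is just an example table here, you would rebuild whichever indexes the shrink fragmented):

USE msdb;
GO
-- Shrink the whole database back down around the remaining data
DBCC SHRINKDATABASE (msdb);
GO
-- Shrinking moves pages around and fragments indexes, so rebuild them afterwards, e.g.:
ALTER INDEX ALL ON dbo.backupset REBUILD;
GO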
October 15, 2009 at 8:58 am
My concern is the huge disparity between the actual data in the database (i.e. almost none) and the space it is taking up (1 GB)! Shrinking doesn't seem to have any effect, which I thought meant there was no wasted space. The query I ran seems to suggest the (almost no) data is scattered all over the place in a highly fragmented way.
I may have interpreted this all wrong, but I cannot see why I need 1 GB of disk for a few KB of data!
October 15, 2009 at 9:00 am
Did you actually shrink it using the reorganise option?
October 15, 2009 at 9:03 am
No, I just used the shrink option from SSMS (2005)...
October 15, 2009 at 9:09 am
When you shrink it using SSMS 2005, do you shrink the database or do you shrink files?
If you select to shrink files, make sure that the file type is set to Data, post what it shows as the available free space, then try the shrink and see if it makes a difference.
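If you would rather script it than click through the GUI, something along these lines shows the free space in the data file and then shrinks it (a sketch; MSDBData is the usual logical name of the msdb data file, check sys.database_files on your instance):

USE msdb;
GO
-- size and SpaceUsed are reported in 8 KB pages, so divide by 128 to get MB
SELECT name,
       size / 128                                      AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128           AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128  AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';   -- the data file(s)
GO
-- Shrink the data file as far as the free space allows
DBCC SHRINKFILE (MSDBData);
GO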
October 15, 2009 at 2:55 pm
WillC9999 (10/15/2009)
My concern is the huge disparity between the actual data in the database (i.e. almost none) and the space it is taking up (1 GB)! Shrinking doesn't seem to have any effect, which I thought meant there was no wasted space. The query I ran seems to suggest the (almost no) data is scattered all over the place in a highly fragmented way. I may have interpreted this all wrong, but I cannot see why I need 1 GB of disk for a few KB of data!
Database files do not automatically grow and shrink based upon the amount of data they contain. They grow as data is added, and if data is later removed they stay the same size. If you know for sure that the database is never going to have data added, or that the additional space is never going to be used, you can perform a one-time shrink.
However, since almost all databases have data added to them over time, I would warn against shrinking the file. Once you do that, it is just going to grow again as new data is added, which causes file-level fragmentation; how bad that gets depends upon the autogrowth setting for the database.
Specifically for the msdb database, you need to schedule a job to run daily/weekly that cleans up the history. Create a new maintenance plan and add the History Cleanup task. Set up how much history you want to keep and schedule the plan. If the database has to grow to accommodate the amount of history you want to keep, then it has to grow.
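If you prefer T-SQL to a maintenance plan, the History Cleanup task boils down (as far as I know) to calling the msdb cleanup procedures; a sketch with an example 30-day retention would be:

USE msdb;
GO
DECLARE @oldest datetime;
SET @oldest = DATEADD(day, -30, GETDATE());   -- keep 30 days of history (example value)

EXEC dbo.sp_delete_backuphistory @oldest_date = @oldest;   -- backup/restore history
EXEC dbo.sp_purge_jobhistory     @oldest_date = @oldest;   -- SQL Agent job history
EXEC dbo.sp_maintplan_delete_log @oldest_time = @oldest;   -- maintenance plan log and log detail
GO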
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
October 15, 2009 at 3:58 pm
From the results of your query it looks like there's no data in that table. To confirm that, do a select from it.
If there really is no data you could truncate the table, but that's something I would avoid for a system table.
The table is a heap. A great way to reclaim the space from a heap table is to add a clustered index to it (for that table, task_detail_id would be a great candidate, but it really doesn't matter), then immediately drop that clustered index. This method works regardless of how much data is in the table (although if there is a lot of data it can be expensive in terms of time and log file space).
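For example (a sketch; the index name IX_logdetail_reclaim is just a placeholder):

USE msdb;
GO
-- Building a clustered index rewrites the heap and releases the empty pages
CREATE CLUSTERED INDEX IX_logdetail_reclaim
    ON dbo.sysmaintplan_logdetail (task_detail_id);
GO
-- The table has been rebuilt, so the index can be dropped straight away
DROP INDEX IX_logdetail_reclaim ON dbo.sysmaintplan_logdetail;
GO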