December 27, 2007 at 5:55 am
hi all,
I am using SQL Server 2000. My database file MyDB_DATA.MDF has grown to 60161 MB.
Please suggest whether I should reduce the size of the file, and how that can be done.
I have about 20,000,000 records in a table which is being queried, and I have 2 more tables of the same specs.
Queries are taking a lot of time, usually more than 30 minutes to execute.
Please suggest what should be considered for an efficient query.
I am not joining the tables; rather, I am querying them separately.
Should I keep only one table? Does the above situation create a lot of burden on SQL Server? Does the efficiency increase by removing the extra tables?
Any help??
December 27, 2007 at 8:07 am
Run this script to find out how much free space you have in your database.
-----------------------------------------------------------------
Set Nocount On

-- Collect file statistics for every database into a temp table
Create Table ##filestats (
    DbName          varchar(100) NULL,
    FileID          int          NULL,
    FileGroupID     int          NULL,
    FileTotalSizeMB dec(19,4)    NULL,
    FileUsedSpaceMB dec(19,4)    NULL,
    FileFreeSpaceMB dec(19,4)    NULL,
    FileLogicalName varchar(100) NULL,
    FilePath        varchar(100) NULL
)

Exec sp_MSForeachDB
    --@command1 = 'Use ?; DBCC UPDATEUSAGE(0)',
    @command1 = 'Use ?; Insert ##filestats (FileID, FileGroupID, FileTotalSizeMB, FileUsedSpaceMB, FileLogicalName, FilePath) exec (''DBCC SHOWFILESTATS WITH NO_INFOMSGS ''); update ##filestats set dbname = ''?'' where dbname is null'

-- Remove any db's that we don't care about monitoring
Delete From ##filestats
where charindex(dbname, 'master-model-pubs-northwind-distribution-msdb') > 0

-- DBCC SHOWFILESTATS reports sizes in extents (64 KB each); convert to MB
Update ##filestats
set FileTotalSizeMB = Round(FileTotalSizeMB*64/1024, 2),
    FileUsedSpaceMB = Round(FileUsedSpaceMB*64/1024, 2)
where FileFreeSpaceMB is null

Update ##filestats
set FileFreeSpaceMB = FileTotalSizeMB - FileUsedSpaceMB
where FileFreeSpaceMB is null

Select dbname, fileusedspacemb, filefreespacemb from ##filestats

Drop table ##filestats
Set Nocount Off
-----------------------------------------------------------------
FileUsedSpaceMB and FileFreeSpaceMB will show you how much space your database is actually using and how much is free.
You can then run
DBCC SHRINKFILE (N'MyDB_Data', 0, TRUNCATEONLY) --this releases the free space at the end of the file without moving any pages
or
DBCC SHRINKFILE (N'MyDB_Data', target_size_in_MB) --target size is an integer number of MB, not a string
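Note that DBCC SHRINKFILE takes the logical file name, not the physical .MDF file name, so N'MyDB_Data' above is an assumption about what your logical name is. If you're not sure of it, sp_helpfile will list it (it's also the FileLogicalName column in the script above):
-----------------------------------------------------------------
Use MyDB -- assuming your database is named MyDB
Go
-- lists each file's logical name, physical path, current size and growth
Exec sp_helpfile
-----------------------------------------------------------------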
You should create indexes on the columns that you use in your WHERE clauses.
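For example, if your 30-minute queries filter on a date column, something like this would help (the table and column names here are made up; substitute your own):
-----------------------------------------------------------------
-- Hypothetical table/column: index the column the WHERE clause filters on
Create Index IX_Orders_OrderDate On dbo.Orders (OrderDate)
-----------------------------------------------------------------
A query such as Select ... From dbo.Orders Where OrderDate >= '20071201' can then seek on the index instead of scanning all 20 million rows.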
December 27, 2007 at 9:13 am
Extra data can have an impact, but only in the data cache. SQL Server can easily handle 60 GB of data in a few tables.
Are your tables indexed? They should be indexed on the columns used in WHERE clauses as well as in join conditions.
If you need the data, then it should stay. If you're keeping extra copies of it just to speed things up, that doesn't help.
December 28, 2007 at 3:09 am
Shrinking an MDF file can have serious detrimental effects on your database. If you must shrink the file, you should rebuild all your indexes afterwards to get the best chance of contiguous data storage and to minimise mixed extents.
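On SQL Server 2000 that means DBCC DBREINDEX, run table by table; a sketch, using a made-up table name:
-----------------------------------------------------------------
-- Rebuilds all indexes on one table (SQL Server 2000 syntax);
-- dbo.Orders is hypothetical - repeat for each of your large tables
DBCC DBREINDEX ('dbo.Orders', '', 90) -- '' = all indexes, 90 = fill factor
-----------------------------------------------------------------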
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
December 28, 2007 at 9:32 am
I agree with SSCommitted. I wouldn't shrink your DB just because it is large. I also agree with Steve: 60 GB isn't beyond SQL Server's capabilities by any means.
If there is a large amount of unused space in your DB, you should probably look at why. If it grew to 60 GB but now has a large amount of free space, chances are good that if you shrink it, it will simply grow again. That cycle of shrinking and growing will bog down your server if it happens continuously.
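If the file genuinely needs that space back, at least make sure the growth increment is a sensible fixed size so each regrowth is predictable; a sketch with made-up database and file names:
-----------------------------------------------------------------
-- Hypothetical names; sets the data file's growth increment to a
-- fixed 500 MB instead of an ever-larger percentage
Alter Database MyDB
Modify File (Name = 'MyDB_Data', Filegrowth = 500MB)
-----------------------------------------------------------------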
You mentioned a couple of things that made me think you might have design issues. First, you mentioned that you have one really large table and two more tables with the same specs. Second, you asked if you should "keep only one table".
It sounds like your database design might not be normalized very well. If you post your table structures, your indexes, a brief description of what you are doing with them, and a sample query that is taking 30 minutes, you might get some good suggestions here.
December 28, 2007 at 12:58 pm
I have about 20,000,000 records in a table which is being queried, and I have 2 more tables of the same specs.
Queries are taking a lot of time, usually more than 30 minutes to execute.
Please suggest what should be considered for an efficient query.
I am not joining the tables; rather, I am querying them separately.
Without knowing the table schema, what the indexes are, how often you maintain them, what the code is, and what the actual data looks like, the simple answer is that you need to get better at all of that... particularly where the code and the indexes come into play.
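As a first check on the maintenance side, DBCC SHOWCONTIG will tell you how fragmented things are (dbo.Orders is a made-up name):
-----------------------------------------------------------------
-- SQL Server 2000 fragmentation report; high Logical Scan Fragmentation
-- is a sign the indexes need rebuilding
DBCC SHOWCONTIG ('dbo.Orders') WITH ALL_INDEXES
-----------------------------------------------------------------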
If you want help, please help us help you... do read the following if you actually want an answer that helps...
http://www.sqlservercentral.com/articles/Best+Practices/61537/
--Jeff Moden
Change is inevitable... Change for the better is not.
December 28, 2007 at 1:45 pm
Hello,
Depending on the data kept, you may want to divide it up amongst other tables if possible.
Your queries may be iterating through chunks of data you don't really care about.
Have you looked at the execution plans of your scripts to see what they are doing? This should really help you out.
Try using Profiler and the Index Tuning Wizard to give you some help as well.
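A quick way to see a text execution plan in SQL Server 2000 (the Select below is only a placeholder for your own slow query):
-----------------------------------------------------------------
Set Showplan_Text On
Go
-- your slow query goes here; it is not actually executed while
-- Showplan_Text is on, SQL Server just returns the estimated plan
Select * From dbo.Orders Where OrderDate >= '20071201'
Go
Set Showplan_Text Off
Go
-----------------------------------------------------------------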
Thanks
Ian