October 12, 2011 at 4:10 pm
Hello,
I'm working with a database that has started to grow a lot and is now 140GB, with a 27GB log file on the same 170GB disk.
I'm new to the company and warned my boss this was going to happen over 3 months ago, but, hey, I'm not bitter.
So, the short-term solution has been to shrink the log file each day; it gets down to about 7GB. So much for transaction log backups.
I am also deleting all log table data more than 2 days old.
The short-term solution I have put together with our provider is to decommission another unused DB VM and move its 200GB drive over. I will probably move the .mdf there, leave the .ldf where it is, and reattach the database with this setup. Any suggestions on how I can do this with minimal downtime?
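The move itself could probably be done without a full detach/attach; something like this is what I'm picturing (the database name, logical file name, and paths below are placeholders):
-- point the data file at its new location in the catalog
ALTER DATABASE MyDB MODIFY FILE
    (NAME = MyDB_Data, FILENAME = 'F:\SQLData\MyDB.mdf')
GO
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
-- copy the .mdf to the new drive at the OS level, then:
ALTER DATABASE MyDB SET ONLINE
GO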
What are good long-term solutions? I have thought of designing a horizontal partitioning strategy for our Orders table, hosting data more than a year old on another server, or at least another drive. Any other suggestions on maintaining a high-transaction online DB like this? Any good articles on doing this with SQL Server 2005/2008?
Thanks in advance!
-Colin
October 12, 2011 at 4:40 pm
Consider just adding another file on a new drive and letting the data flow over into that. Then restrict growth on the original .mdf to its current size.
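Something like this (logical names, paths, and sizes are only examples):
ALTER DATABASE MyDB
ADD FILE (NAME = MyDB_Data2, FILENAME = 'F:\SQLData\MyDB_2.ndf',
          SIZE = 10GB, FILEGROWTH = 1GB)
GO
-- cap the original file at its current size so new data flows to the new file
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, MAXSIZE = 145GB)
GO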
Another option is to rebuild the clustered index on another filegroup. This will allow you to move individual tables out of the primary filegroup onto a filegroup on another drive, without taking everything offline. If you move archive tables (if they exist) you could possibly do this without affecting users. Or use this option to move lots of smaller tables during your maintenance period.
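A sketch of that approach (filegroup, table, and index names are placeholders; the new filegroup needs a file before anything can move to it):
ALTER DATABASE MyDB ADD FILEGROUP [SECONDARY]
GO
ALTER DATABASE MyDB
ADD FILE (NAME = MyDB_Sec1, FILENAME = 'F:\SQLData\MyDB_Sec1.ndf', SIZE = 10GB)
TO FILEGROUP [SECONDARY]
GO
-- rebuilding the clustered index on the new filegroup moves the table's data with it
CREATE UNIQUE CLUSTERED INDEX [IX_MyTable_Key]
ON dbo.MyTable (KeyCol)
WITH (DROP_EXISTING = ON)
ON [SECONDARY]
GO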
You can also move the database by doing a backup and restore under a different database name and folder location, then applying transaction log backups to get the two databases in sync. You then only need to take the app offline for long enough to do the last tran log backup and restore, then rename the databases.
This also gives you a good rollback: if it doesn't work, you undo the renaming, and you only drop the old database once you are 100% happy everything is working.
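Roughly (database names and paths are placeholders):
BACKUP DATABASE MyDB TO DISK = 'E:\Backup\MyDB_full.bak'
GO
RESTORE DATABASE MyDB_New FROM DISK = 'E:\Backup\MyDB_full.bak'
WITH NORECOVERY,
     MOVE 'MyDB_Data' TO 'F:\SQLData\MyDB.mdf',
     MOVE 'MyDB_Log'  TO 'D:\SQLLog\MyDB.ldf'
GO
-- repeat as the log accumulates:
BACKUP LOG MyDB TO DISK = 'E:\Backup\MyDB_log1.trn'
GO
RESTORE LOG MyDB_New FROM DISK = 'E:\Backup\MyDB_log1.trn' WITH NORECOVERY
GO
-- final cut-over: take the app offline, do the last log backup/restore, then
RESTORE DATABASE MyDB_New WITH RECOVERY
GO
-- ...and swap the names with ALTER DATABASE ... MODIFY NAME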
It would concern me that you have a high-transaction online DB that's been managed this way. Sooner or later the drive is going to fill up and you won't even have space to do a checkpoint and back up the log. I'd cover my back and place a dummy 100 MB file on the drive that you can delete if you need emergency space.
Cheers
Leo
Leo
Nothing in life is ever so complicated that with a little work it can't be made more complicated.
October 17, 2011 at 10:17 pm
Great advice. I have taken some steps towards what you have suggested.
It seems that when I recreate the index, it just grows... it does not actually move the table:
-- drop the clustered primary key and move the underlying data to SECONDARY
ALTER TABLE Carts
DROP CONSTRAINT [PK_RowID_Carts] WITH (MOVE TO [SECONDARY])
GO
-- recreate the primary key as a clustered index on the SECONDARY filegroup
ALTER TABLE Carts
ADD CONSTRAINT [PK_RowID_Carts] PRIMARY KEY CLUSTERED
(
    id ASC
) WITH (IGNORE_DUP_KEY = OFF) ON [SECONDARY]
GO
I assume that this will cause the table to continue growing on the new filegroup?
Is there a way to actually move a table to the secondary filegroup? Would moving a table from where it grew on the disk to another disk cause issues?
Thanks,
-C
October 18, 2011 at 1:40 am
with a 27GB log file
It's too much for log files. Are you not taking log backups frequently? What's the database recovery model?
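You can check the recovery model from sys.databases (SQL 2005 and up):
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDB'   -- substitute your database name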
short term solution has been to shrink the logfile each day
It will not help you, because as soon as the space is required again, SQL Server will simply regrow the file and add VLFs to your log (assuming autogrow is on). That will kill performance.
Typically, truncation occurs automatically under the simple recovery model when database is backed up and under the full recovery model when the transaction log is backed up. However, truncation can be delayed by a number of factors.
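Under the full recovery model it is the log backup itself that allows truncation; for example (path is a placeholder, and this would normally be on a schedule):
BACKUP LOG MyDB TO DISK = 'D:\Backup\MyDB_log.trn'
GO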
See the MSDN topic Factors That Can Delay Log Truncation.
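If the log still won't truncate after a backup, sys.databases shows what is currently holding it (SQL 2005 and up):
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDB'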
October 18, 2011 at 2:02 am
colin_sloss (10/17/2011)
Is there a way to actually move a table to the secondary filegroup?
Dropping a clustered index and recreating it on a different filegroup will move the table to that filegroup. What makes you think that the table has not moved? Was the primary key constraint clustered before you dropped it?
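You can check both from the catalog views; this shows each index on the table, whether it is clustered (or a heap), and which filegroup it sits on:
SELECT o.name AS table_name,
       i.name AS index_name,
       i.type_desc,
       f.name AS filegroup_name
FROM sys.indexes i
JOIN sys.objects o ON o.object_id = i.object_id
JOIN sys.filegroups f ON f.data_space_id = i.data_space_id
WHERE o.name = 'Carts'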
Dev @ +91 973 913 6683 (10/18/2011)
It's too much for log files.
Not necessarily. A log file needs to be as big as it needs to be. If you have a large table, then just rebuilding indexes on that table can cause enormous log growth, for example.
Dev @ +91 973 913 6683 (10/18/2011)
Typically, truncation occurs automatically under the simple recovery model when database is backed up
No, truncation occurs at a checkpoint in simple recovery mode. One of the links in the page you linked to explains that.
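Easy to see for yourself on a database in simple recovery:
DBCC SQLPERF(LOGSPACE)   -- note the Log Space Used (%)
GO
CHECKPOINT               -- in simple recovery this marks inactive VLFs reusable
GO
DBCC SQLPERF(LOGSPACE)   -- the used percentage should drop, unless something is holding the log
GO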
John
October 18, 2011 at 7:33 am
Dev @ +91 973 913 6683 (10/18/2011)
It's too much for log files.
Not necessarily. A log file needs to be as big as it needs to be. If you have a large table, then just rebuilding indexes on that table can cause enormous log growth, for example.
It was a general statement. As a best practice, we try to keep the log file as small as possible.
Dev @ +91 973 913 6683 (10/18/2011)
Typically, truncation occurs automatically under the simple recovery model when database is backed up
No, truncation occurs at a checkpoint in simple recovery mode. One of the links in the page you linked to explains that.
The above statement is a snippet from the MSDN article Shrinking the Transaction Log:
http://msdn.microsoft.com/en-us/library/ms178037.aspx
It contradicts another MSDN article, Transaction Log Truncation:
http://msdn.microsoft.com/en-us/library/ms189085.aspx
But I accept your point. I didn’t verify that statement because it was copied from MSDN.