January 15, 2009 at 4:35 am
Please read through this - Managing Transaction Logs
I just noticed that you're backing up the log with NO_TRUNCATE. Why?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 15, 2009 at 5:01 am
GilaMonster (1/15/2009)
I just noticed that you're backing up the log with NO_TRUNCATE. Why?
Nice catch. Didn't even see that before. Ack...:sick:
David
@SQLTentmaker
"He is no fool who gives what he cannot keep to gain that which he cannot lose" - Jim Elliot
January 15, 2009 at 5:29 am
GilaMonster (1/15/2009)
Please read through this - Managing Transaction Logs
I just noticed that you're backing up the log with NO_TRUNCATE. Why?
Oops, I missed that too :blush: Marker, that would explain why your logs keep growing.
---------------------------------------------------------------------
January 15, 2009 at 5:51 am
Although note that the OP has since stated he set up a maintenance plan to do his transaction log backups, so the script posted earlier is no longer relevant.
Marker, did you get errors from the tran log backups, or did they just not run? Is there an alert set up to kick it off as well?
---------------------------------------------------------------------
January 15, 2009 at 12:06 pm
Ten Centuries: No, there is no full backup running at the time of the index rebuild.
GilaMonster: I am NOT backing up the log with NO_TRUNCATE. The script in the earlier post was never implemented, due to later posts recommending against it.
All that is happening is that my maintenance plan does an index rebuild at 1am on Sunday night. This is the index rebuild directly from the maintenance plan wizard, which I assume rebuilds the indexes on all the tables. The index rebuild grows my TRN log to 30GB, while during heavy use in business hours the TRN log doesn't need to be larger than maybe 1GB.
From the suggestions in this post I created a maintenance plan that backs up the TRN log every 15 minutes during the index rebuild on Sunday night, starting 15 minutes before the rebuild. I noticed that the 15-minute TRN backup starts 15 minutes before the index rebuild, but does not back up the TRN log during the rebuild itself. In the meantime, the TRN log grows to 30GB over the 2-hour index rebuild. I should also mention that I back up the TRN log every hour daily.
I hope this clears things up.
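The 15-minute log backup routine described above can be sketched as a single T-SQL job step (a minimal sketch; the database name MyDB, the backup folder, and the file-naming scheme are assumptions, not details from this thread):

```sql
-- Assumed database name and backup folder; adjust for your environment.
DECLARE @file nvarchar(260);

-- Timestamp each file so every 15-minute backup lands in its own file.
SELECT @file = N'D:\Backups\MyDB_log_'
             + REPLACE(REPLACE(CONVERT(nvarchar(19), GETDATE(), 120), N':', N''), N' ', N'_')
             + N'.trn';

-- A plain BACKUP LOG truncates the inactive portion of the log when it
-- completes; NO_TRUNCATE is only for tail-log backups of a damaged database.
BACKUP LOG MyDB TO DISK = @file;
```

Note that the log can only be truncated back to the start of the oldest active transaction, so a single long-running index rebuild will hold its log space no matter how often this step runs.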
January 15, 2009 at 12:25 pm
Marker (1/15/2009)
I noticed that the 15min TRN backup starts 15min before the index rebuild but does not backup the TRN log during the index rebuild.
Doesn't back it up, or doesn't truncate it? The latter could be due to an open transaction or a couple of other things. What's running before the index rebuilds start?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
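A quick way to check for the open transaction Gail mentions is DBCC OPENTRAN, together with the log_reuse_wait_desc column added in SQL Server 2005 (the database name MyDB is a placeholder):

```sql
-- Reports the oldest active transaction in the database, if there is one.
DBCC OPENTRAN (MyDB);

-- On SQL Server 2005 and later, this shows why the log cannot currently be
-- truncated (e.g. ACTIVE_TRANSACTION, LOG_BACKUP, NOTHING).
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';
```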
January 15, 2009 at 12:58 pm
GilaMonster: it does not back up the TRN log during the index rebuild. I'll check the open transactions on the server this weekend.
January 15, 2009 at 1:09 pm
Is it that the tran log backup is taking the full duration of the reindex job to run? What does the tran log job history record for the run times of the tran log backups?
---------------------------------------------------------------------
January 15, 2009 at 1:27 pm
Ten Centuries: That's it! The TRN backup takes the duration of the index rebuild. I know that because I end up with a 30GB TRN file at the end of the index rebuild. What does that mean?
January 15, 2009 at 3:13 pm
At a guess, the database consists mainly of one large table, so you have one big transaction, and hence one big log backup.
If so, two options: use DBCC INDEXDEFRAG instead, which works in small transactions, so you will be able to back the log up while it runs. See DBCC SHOWCONTIG in BOL for a good script that defrags only the fragmented indexes in a database.
OR
live with it and accept that you need a big log for this job. The script above could be altered to reindex a table only if it is in fact fragmented.
INDEXDEFRAG may not be quite as efficient as a reindex at reorganising the index, but I have found it does a pretty good job. Use SHOWCONTIG to check how good the results are.
By the way, "Ten Centuries" is a description of my level of involvement on the forum, shall we say. You may call me George 🙂
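George's two-command workflow might look like this (the table, index, and database names are placeholders; DBCC SHOWCONTIG and INDEXDEFRAG are the SQL 2000-era commands, superseded in SQL Server 2005 by sys.dm_db_index_physical_stats and ALTER INDEX):

```sql
-- Check fragmentation first; logical scan fragmentation is the figure
-- to watch when deciding whether to defrag.
DBCC SHOWCONTIG ('dbo.BigTable') WITH FAST;

-- Defragment one index in many small, individually committed transactions,
-- so regular log backups can truncate the log while it runs.
DBCC INDEXDEFRAG (MyDB, 'dbo.BigTable', 'IX_BigTable_Example');

-- SQL Server 2005 equivalent:
-- ALTER INDEX IX_BigTable_Example ON dbo.BigTable REORGANIZE;
```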
---------------------------------------------------------------------
January 16, 2009 at 12:32 am
Marker (1/15/2009)
GilaMonster: it does not back up the TRN log during the index rebuild.
As in it runs for the duration of the index rebuild (and if that's the case, is there blocking and what's the waittype?) Check also the job (if this is a backup job) start and end times for those runs.
The TRN backup takes the duration of the index rebuild. I know that because I end up with a 30GB TRN file at the end of the index rebuild. What does that mean?
Doesn't necessarily mean it ran for the entire duration of the index rebuild. If it ran just after the rebuild finished, it would be that size.
What are you using to do the index rebuilds? Normal maint plan?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
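To check the actual start and finish times Gail asks about, the backup history in msdb can be queried directly (the database name is a placeholder):

```sql
-- Start/finish times and sizes of the most recent log backups.
-- type = 'L' restricts the list to transaction log backups.
SELECT TOP (20)
       backup_start_date,
       backup_finish_date,
       backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE database_name = N'MyDB'
  AND type = 'L'
ORDER BY backup_start_date DESC;
```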
January 16, 2009 at 9:23 am
GilaMonster: Yes, I am using the normal maintenance plan.
January 23, 2009 at 8:01 am
Hi everyone. I am having the same issue that Marker is having. I have recently been tasked with taking over DBA responsibilities in my department. I'm excited about the opportunity, but I am not as knowledgeable as I should be. I am taking measures to improve my knowledge base though, and have always been a frequent visitor of this site so I know where to look when I need an answer.
That being said, I am struggling with maintaining our log files. I should also mention that we are on SQL Server 2005. The real problem is the database schema, but I am not allowed to make any major changes to that right now, so I have to work with what I have. The database only has 7 tables, but the problem table has 1,267,630,717 rows and is completely denormalized. The other six tables range in size from a few records to 570,000. A few of them are set up as look up tables, but are not used in that way (i.e. no foreign keys) so data integrity is more or less a bust right now....
When I first took these responsibilities on, none of the tables had indexes or primary keys. I added primary keys to all of the tables and indexes based on the queries users are running to modify the data in the database. I could not add foreign keys because of the way they insert data into the database. They use Perl to process data and then call stored procedures to insert/delete/update data. The order of operations though is not conducive to using foreign keys unfortunately.
I apologize for the long-windedness; I just wanted to provide a full background on the situation. After I added the indexes and PKs, I knew I had to rebuild indexes and then update statistics nightly. I found an excellent post on this site regarding both of these tasks, so I did not use the Maintenance wizard, but instead use scripts that rebuild and update only those indexes and statistics that require it. The reason I mention this is that I would prefer not to use DBCC INDEXDEFRAG, as George suggests, but would rather stick with the rebuild script I found, if it makes sense to do so.
After reading the transaction log articles linked from here and BOL, I still don't feel like I have a terrific grasp on maintaining the log file. If I back up the log file after I rebuild statistics, I will have a backup of the enlarged log file, right? Should I insert the BACKUP LOG statement into the rebuild-index script? The rebuilding is done within a cursor though, so how would that work? Wouldn't I need to wait until the cursor is closed, which would leave me in the same situation I am in now with a very big log file (after 3 days, it grew to 500GB)?
What I've done so far (and I know this is not recommended) is I have truncated the log file and had to shrink the file. We didn't have any space left on the hard drive and I didn't know of any other recourse. The log file is now at 102 MB.
One more thing I should note is that the database is set to the full recovery model. The users seemed to think that a weekly backup of the database would be sufficient and didn't see a need for point-in-time recovery, but I have worked with databases for too long not to know that there will always be a time when a developer drops a table or does something catastrophic and needs to undo it. I thought setting the recovery model to full would be the solution, but didn't really understand that I needed to perform separate, more frequent log backups to have the ability to go back in that way. I think I have a better grasp on this piece now and have scheduled the frequent log backups.
Is this a good idea, or should I just go with what the developers have told me and not worry about the TRN log at all? Should I really just set the recovery model to simple? The problem is that the application that uses this database is in production now, and I'm not sure everyone has fully thought out how much this application will be used or how reliable the data will need to be. Maybe I have overstepped here though.
Please let me know your thoughts, and let me know if it would be helpful to post the rebuild indexes sp that I use. Thank you all in advance!
Cathy
January 23, 2009 at 9:26 am
Cathy, I think you need to tailor your maintenance to your circumstances. It's always a problem when you have a large database consisting almost entirely of a single table. You don't mention the size of the database or of the large table, by the way.
Simple mode will help you, but will not solve all your problems now that you have indexes to rebuild. Best to go back to the users and explain to them exactly what simple mode and only a weekly full backup mean in terms of recovery options. If they are still happy, you can go for simple mode; it will at least save the space required for log backups. You still have the option of differential backups.
The biggest single cause of tran log growth is going to be the rebuild of your big table, and the tran log needs to be sized to handle it. You say you do not want to reorganise; why is this? Rebuild is intrusive. Do you have a big enough maintenance window to rebuild a table this size without causing blocking?
If you can use reorganise for the large table at least, you can solve your log size problems just by doing frequent log backups and adding a performance alert to kick off the log backup job if the log fills to a certain level. Remember that reorganise does not update stats, so add a step to do that, and ensure auto update stats and auto create stats are turned on for the DB.
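The performance alert mentioned here can be created with SQL Agent's sp_add_alert, keyed to the Percent Log Used counter (a sketch; the alert name, database name, 70% threshold, and job name are all assumptions, and the backup job itself must already exist):

```sql
-- Fire an existing SQL Agent job whenever MyDB's log passes 70% used.
EXEC msdb.dbo.sp_add_alert
    @name = N'MyDB log nearly full',
    @performance_condition = N'SQLServer:Databases|Percent Log Used|MyDB|>|70',
    @job_name = N'Backup MyDB Log';
```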
If you want to use rebuild, I would suggest nightly is too frequent. Split up the rebuilds to run on different nights; it won't help too much in your case, but do the large table on its own and the other six on another night. If some are read-only lookup tables that don't change, set the index fill factor to 100%, rebuild them once, then leave them.
As I said, if you rebuild the large table, size the log to handle it. Back up the log before the rebuild, kick off the rebuild of that table, and see how much log it uses; DBCC SQLPERF(LOGSPACE) will tell you. So that the log file does not get fragmented, grow it manually to a sensible size first (at a quiet time). If the disk is not big enough, you will have to use reorganise or get some more disk!
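The sizing steps above might look like the following (the database name, the logical log file name, and the 35GB target are illustrative assumptions):

```sql
-- How full is each database's log right now? Run before and after the
-- rebuild to see how much log the big table actually uses.
DBCC SQLPERF (LOGSPACE);

-- Pre-grow the log in one step at a quiet time, rather than letting it
-- autogrow in fragments during the rebuild. The logical file name can be
-- found in sys.database_files; MyDB_log is assumed here.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 35GB);
```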
Is the proc you are using one that only rebuilds an index if its fragmentation is above a certain percentage? If so, good; if not, change to one that works like that.
By good coincidence, SSC posted an excellent TechNet article today on understanding logging:
http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx
Follow the link through to Kimberly Tripp's article, "8 Steps to Better Transaction Log Throughput".
---------------------------------------------------------------------
January 23, 2009 at 9:44 am
Thank you George. I will definitely read the article.
The size of this particular database is 126 GB, with the largest table being a little under 69 GB.
The sp description that I use for indexing is:
ap_RebuildIndexes is a process which will assess
the level of fragmentation of all indexes in a
database and reorganize those indexes which fall
outside the specified parameters.
I found a link to this from an article written on this site regarding the steps that should be taken for DB maintenance. It was recommended to turn off the automatic updating of statistics and run a separate update, so that is what I am doing as well. Do you disagree with this approach?
The sp that reindexes does the following:
IF @fullprocess = 'ON'
BEGIN
    -- If the index is heavily fragmented (30% or more), issue a REBUILD;
    -- otherwise a REORGANIZE is sufficient.
    IF @currentfrag < 30
    BEGIN
        SELECT @command = @command + ' REORGANIZE';
        IF @partitioncount > 1
            -- CAST assumes @partitionnum is an int, as in the BOL sample this
            -- resembles; concatenating an int to a string directly would fail.
            SELECT @command = @command + ' PARTITION=' + CAST(@partitionnum AS nvarchar(10));
    END
    ELSE
    BEGIN
        SELECT @command = @command + ' REBUILD';
        IF @partitioncount > 1
            SELECT @command = @command + ' PARTITION=' + CAST(@partitionnum AS nvarchar(10));
    END;

    EXEC (@command);
    PRINT 'Executed ' + @command;
END;
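On the earlier question of issuing log backups from inside the rebuild script: a BACKUP LOG statement can go inside the cursor loop, between indexes, rather than waiting for the cursor to close. It cannot release space held by the single rebuild currently in flight, but it does let space freed by completed rebuilds be reused before the next index starts. A minimal sketch (the database name and path are assumptions):

```sql
-- Inside the cursor loop, immediately after EXEC (@command):
-- back up (and thereby truncate) the log, so space released by the rebuild
-- that just committed is reusable before the next index is processed.
-- Appending to one file keeps the sketch short; timestamped file names
-- per backup would be better in practice.
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log_rebuild.trn';
```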
The maintenance window is difficult to define. The process has been under development for a long time and no one was ever watching the database, so we don't know when the database gets called into action and for how long the processes last. I'm in the process of gathering all of that information. For now, the jobs that I have running are late at night and they seem to be going along ok...except for this log growth issue of course. I suppose the job could be getting blocked at run time though.
Basically, the users do not want to make a decision. They want me to make it for them and that makes me nervous. I want to err on the side of conservative wherever possible so I can cover as many bases as I can. However, I will bring what you said to me back to the users and try to push for a decision on their end.
If anything I added here changes your opinion or if you have additional insight to provide, I would greatly appreciate it. Thanks again!
Cathy
Viewing 15 posts - 16 through 30 (of 34 total)