January 28, 2005 at 9:12 am
As for transaction log backups, if you have more than one tlog and you want to be sure one tlog in particular is backed up, is there a way to do so?
For example, is there something akin to "backup log db_log_2 To backupdevice"?
I have a db with 2 tlogs, 1 was backed up last night but the other doesn't seem to have been.
January 28, 2005 at 9:58 am
No, you cannot do that. SQL Server writes to the transaction log sequentially. It starts writing to the first log file and, after that fills up, moves on to the second. Once the second is full, it comes back to the first and checks whether it is empty. If not, the size is increased (depending on the growth increment), and the same happens with the second.
Your log backup should have all the entries.
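To illustrate: a single BACKUP LOG statement covers the whole log, not an individual file. A minimal sketch, where the database name and backup path are placeholders:

```sql
-- Backs up the active log records from ALL log files belonging to the
-- database in one operation; there is no syntax to target one log file.
BACKUP LOG MyDb
TO DISK = 'D:\Backups\MyDb_log.trn'
```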
January 28, 2005 at 10:14 am
Gotcha.
If both are set to auto grow and they are on the same disk, why would someone have set up 2 logs?
January 28, 2005 at 10:25 am
Are you sure that the 2nd file is associated with the database and that it is a transaction log? The name, integration_log, would seem to indicate that it is a transaction log, but you can name data files and transaction log files any way you want.
To verify, in Enterprise Manager, right click on the database name and select properties. Click on Data files and see what is listed there, and click on Transaction log and see what's listed there.
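If you prefer a query to clicking through Enterprise Manager, a sketch against the SQL Server 2000 system table sysfiles (database name is a placeholder; the 0x40 status bit marks log files):

```sql
-- Lists every file for the current database and whether it is a
-- data file or a log file (status bit 0x40 = log file, SQL 2000).
USE MyDb
SELECT name, filename,
       CASE WHEN status & 0x40 <> 0 THEN 'log' ELSE 'data' END AS file_type
FROM sysfiles
```

Running `EXEC sp_helpfile` in the database gives similar information.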
A little info about transaction logs - when you have multiple transaction log files, the transactions are generally evenly spread across the files. Nothing in "file 1" will exist in "file 2". When you take a transaction log backup, all transaction logs should be backed up and truncated in the single backup operation. No special procedures required. The fact that your "file 2" did not get backed up or truncated is what suggested to me that the file is not an active transaction log for the database.
If the file is a 2nd transaction log for the database in question, the person who set this up apparently was not an experienced SQL DBA. They probably set up the "new" log because the "old" one filled up and they didn't know how to handle it.
Can you verify that the integration_log is specified in the database as a transaction log file, and then post back with your findings?
Steve
January 28, 2005 at 10:32 am
In a heavy load, having two log files, perhaps on two disks, can help performance.
January 28, 2005 at 10:34 am
Yes, I double checked and the integration_log is specified in the db as a tlog.
January 28, 2005 at 10:39 am
And your transaction log backup completed successfully?
If you view the database (in Enterprise Manager) using taskpad view, what information does it give you about the transaction log? Total size for each file, and amount used for each file?
Steve
January 28, 2005 at 10:58 am
It has:
Transaction log space: 49381.05MB, broken down into used: 653.85MB and free: 48727.2MB.
It doesn't break them down by tlog (where I'm looking, anyway), and the "free" part interests me. I tried a DBCC SHRINKFILE command on the integration log this morning and it only shrank by a few hundred MB.
Is that what info you wanted?
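Taskpad only shows the total, but you can get a per-file breakdown with FILEPROPERTY. A sketch for SQL Server 2000 (database name is a placeholder; sizes in sysfiles are in 8KB pages, so dividing by 128 gives MB):

```sql
-- Per-file log size and usage for the current database.
USE MyDb
SELECT name,
       size / 128.0                            AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb
FROM sysfiles
WHERE status & 0x40 <> 0   -- log files only
```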
January 28, 2005 at 11:13 am
Yes, that helps. You have a total of about 48GB of transaction log, with 653.85MB currently in use according to taskpad. You shrank the log by a few hundred MB, so that confirms you are indeed working with a log on the current db.

Now, a transaction log is made up of many "virtual logs". As the transaction log is backed up and truncated, the inactive transactions are removed from the virtual logs. If the remaining active transactions are in a virtual log near the end of the log file, the log file cannot be shrunk significantly. So you must either wait until the active transactions are at the beginning, or force them to the beginning (by running a lot of "dummy" transactions). For more information, see Books Online: in the index, type "transaction log", then select "active portion", and also "architecture, Transaction Log Physical Architecture".

I have a script that will do this for you, but the script truncates the transactions from the log without saving them in a backup. Based on what you said in your other thread, this would be unacceptable. It's possible that you may be able to look at the script and use parts of it to accomplish your goal. If you want to see it, send me a private message and I will send it to you.
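You can see the virtual logs, and where the active portion currently sits, with an undocumented but widely used DBCC command. A sketch (database name is a placeholder):

```sql
-- DBCC LOGINFO lists one row per virtual log file; a Status of 2 marks
-- the active VLFs. If the active VLFs are near the end of the physical
-- file, DBCC SHRINKFILE cannot reclaim much space yet.
USE MyDb
DBCC LOGINFO
```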
Steve
January 28, 2005 at 12:35 pm
Just to be clear, you don't think that backing up the transaction logs tonight will remove those inactive transactions?
What doesn't make sense to me (again, I'm not a DBA, just helping some folks out) is that the first tlog was the issue; it was backed up fine and is back down to 4GB. The same night, the 2nd tlog grew from 7GB to 48GB. Why did the second one grow? Was it because the tlog backup job was running when the db backup job started?
January 28, 2005 at 1:44 pm
Try backing up the log file and then shrinking it. Perform this exercise a couple of times and all the space should be released.
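The backup-then-shrink cycle can be sketched like this; database name, backup path, and target size are placeholders, and the logical file name must match what sysfiles reports:

```sql
-- One iteration of the cycle: back up (and truncate) the log, then
-- shrink the physical file. Repeat until the file reaches the target.
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn'
DBCC SHRINKFILE (integration_log, 100)   -- target size in MB
```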
January 28, 2005 at 2:26 pm
>>Just to be clear, you don't think that backing up the transaction logs tonite will remove those inactive transactions?
Yeah, it might. That really depends on whether the transactions have committed to the database. You can possibly check that out by running dbcc opentran. This will tell you if the database has any open transactions, and if it does, will give you details about the oldest one. Check Books Online for details. If the database is fairly active, chances are good that the transactions will have committed, and tonight's backup will truncate them.
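A minimal example of the check mentioned above (database name is a placeholder):

```sql
-- Reports the oldest active transaction in the database, if any;
-- returns no active-transaction info when everything has committed.
DBCC OPENTRAN ('MyDb')
```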
The tlog backup job running when the db backup job started should have no impact at all on the tlog size. Only one backup operation can execute on a database at a given time, so the db backup job would have been blocked until the tlog backup job finished.
As to why the 2nd file grew so much, that's a mystery. Are you running any other maintenance in your backup jobs, such as reindexes, statistics updates, or integrity checks? These things can cause log growth. A reindex uses a lot of log space and is typically a single transaction. I suppose if a reindex occurred, it may very well all go to a single tlog file, but I'm not sure about that.
What seems ironic is that you initially stated that the tlog was 50GB, and it is still about that size, according to the taskpad view. That would indicate that no tlog file physically grew throughout the process. Secondly, you stated that the 2nd file grew to 48GB, which is very close to the original stated 50GB. To me, this sounds like the typical behavior of a tlog backup file, especially one that is being appended rather than overwritten, or newly created.
Steve
January 28, 2005 at 2:43 pm
By the way, regarding the script that I mentioned earlier... I just now had occasion to use it myself (I shrank a 2GB transaction log to 100MB). Looking at the usage, it has a parameter to specify how to do the backup log (this is how it truncates). The default action is TRUNCATE_ONLY (throw 'em away), but you can change that to make it create a backup file. It should be noted, though, that it CAN iterate up to 1000 times, so you could end up with a lot of tlog backups. It doesn't usually go more than a few iterations, though.
Steve