September 21, 2023 at 9:46 am
One of my production databases has a full transaction log file due to a backup failure that occurred overnight. This caused the log to grow until it ate up all available space. Our backup admin has manually kick-started the backup; it is running at the moment, but it's taking very long as the transaction log file is 2TB in size.
I am thinking of adding another Tlog file to the database, as there's still plenty of physical space on the drive, to allow the database to function normally while the backup is running. Is this recommended? I can't extend the current log file as it's already at the 2TB maximum.
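If it matters, the command I'm planning to run is something like this (the logical name and path are made up for this post):

USE master;
ALTER DATABASE MyProdDB
ADD LOG FILE
(
    NAME = MyProdDB_log2,                         -- placeholder logical name
    FILENAME = N'E:\SQLLogs\MyProdDB_log2.ldf',   -- placeholder path
    SIZE = 64GB,
    FILEGROWTH = 8GB
);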
September 21, 2023 at 9:33 pm
Sorry... posted to the wrong question and removed this post. See the next post down for my answer to the problem posted here.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 21, 2023 at 9:43 pm
On the subject of the backup failure...
IMHO (and experience, but YMMV), it's a complete waste of time to back up a 2TB log file (the same thing happened to me on the 2nd day of my current employment, though that one was only 250GB).
Take another full backup, set the recovery model to SIMPLE, do a CHECKPOINT or two, set the database back to the FULL recovery model, and take a DIF backup (or another FULL, if you prefer) to restart the logfile chain.
Then, make sure you get alerts if the logfile backup fails in the future.
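In rough T-SQL terms, the sequence looks something like this (the database name and backup paths below are just placeholders, adjust to suit):

-- 1) Take another full backup to establish a new base.
BACKUP DATABASE MyBigDB TO DISK = N'X:\Backups\MyBigDB_Full.bak' WITH INIT, STATS = 10;

-- 2) Flip to SIMPLE so the log can clear, and checkpoint a couple of times.
ALTER DATABASE MyBigDB SET RECOVERY SIMPLE;
CHECKPOINT;
CHECKPOINT;

-- 3) Back to FULL, then a DIF (or another FULL) to restart the log backup chain.
ALTER DATABASE MyBigDB SET RECOVERY FULL;
BACKUP DATABASE MyBigDB TO DISK = N'X:\Backups\MyBigDB_Diff.bak' WITH DIFFERENTIAL, INIT, STATS = 10;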
As a bit of a sidebar, I'd look into my index maintenance routine and eliminate the use of REORGANIZE. It's not the "lower resource, log file friendly" tool that people say it is. As I tell people (and have proven to them), "It's better to do no index maintenance than it is to do it wrong... and if you're generically using REORGANIZE, you're doing it wrong."
--Jeff Moden
Change is inevitable... Change for the better is not.
September 26, 2023 at 4:15 am
Thanks Jeff for the reply.
After some further investigation, I've found that the transaction log grew huge due to index rebuild jobs that run three times a week. On those days each tlog backup was more than 200GB, compared to less than 10GB on normal days, which somehow impacts the speed of the backups. Apparently, when the rebuild job rebuilds some super-huge indexes, a lot of TLog is generated since we're using the full recovery model. For now I've excluded several of these large indexes from the index rebuild job, and I'll perhaps schedule a downtime to do the rebuild under the simple recovery model.
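In case it helps anyone else, I found the big ones with something along these lines (the ~1GB threshold is arbitrary):

-- Large indexes in the current database, biggest first.
-- LIMITED mode keeps the scan cheap; page_count / 128 = size in MB.
SELECT  OBJECT_NAME(ips.object_id)          AS table_name,
        i.name                              AS index_name,
        ips.page_count / 128                AS size_mb,
        ips.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id
WHERE   ips.page_count > 128000             -- indexes over roughly 1GB
ORDER BY ips.page_count DESC;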
September 26, 2023 at 9:07 am
I really should have a sticky on Jeff Moden's post about the (non)sense of generically rebuilding / reorganizing indexes.
A lot of resources are consumed during the reindex with little or no gain.
The statistics updates still matter.
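For example, something as simple as this keeps the stats fresh without the rebuild (dbo.BigTable is just a stand-in):

-- Refresh statistics on one table with a full scan...
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

-- ...or sweep the whole database with default sampling.
EXEC sys.sp_updatestats;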
September 30, 2023 at 5:09 am
Good information from Jeff. I would add: improve your disk subsystem / LUNs, and your network speed and reliability (good network topology is important). Little things can help. I have had similar experience in the data center arena. An efficient anti-virus/firewall setup can help as well.
DBASupport