January 9, 2009 at 4:37 am
I have a database that is 35GB: 15GB data and 20GB log. Full recovery model. Backups are sync'd with replication.
The overnight full backup for the database is taking 10 hours!
What's up with that?
January 9, 2009 at 6:12 am
Are you backing up to local drives or across a network?
Some rough estimates on what it would take to transfer a 35GB file at various rates across a network:
10Mbps: 8.5 hours
100Mbps: 1 hour
1000Mbps: 10 minutes
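For reference, the arithmetic behind those numbers (treating 1GB as 1024MB and ignoring protocol overhead, so real-world times run a bit longer; the VALUES syntax needs SQL 2008):

-- Back-of-envelope transfer times for a 35GB file at various link speeds
-- 35 GB x 1024 MB/GB x 8 bits/byte = 286,720 megabits to move
SELECT speed_mbps,
       (35.0 * 1024 * 8) / speed_mbps / 3600.0 AS est_hours
FROM (VALUES (10), (100), (1000)) AS rates (speed_mbps);
-- roughly 8 hours, 48 minutes, and 5 minutes respectively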
Assuming it's across a network and you aren't close to those times, check the networking components between the client and the server: NICs, switches, cables.
January 9, 2009 at 6:18 am
That's the trouble - it's backing up locally!
Actually, I fibbed a bit. There are also a couple of 10GB databases backed up in that 10-hour window. But still, 7 hours until replication kicks in again is becoming a bit of trouble.
January 9, 2009 at 6:23 am
Since they're local, you're most likely running into a disk throughput issue. What is the disk configuration for your backups? If you are using SATA-type drives then you might want to think about using some compression software, as it will save you a ton of time in that configuration. It allows for the use of cheap disk without making the DBA go bald. 🙂
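If you happen to be on SQL Server 2008 Enterprise, native backup compression is one option; otherwise the third-party tools do the same job. A sketch, with the database name and path as placeholders:

-- Native compression is SQL 2008 Enterprise only; db name and path are placeholders
BACKUP DATABASE [YourDB]
TO DISK = N'D:\Backups\YourDB.bak'
WITH COMPRESSION, STATS = 10;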
David
@SQLTentmaker
"He is no fool who gives what he cannot keep to gain that which he cannot lose" - Jim Elliot
January 9, 2009 at 6:36 am
We also have an off-site facility hosting the same database (replicating to and from), and the same backup routine on that server takes just 10 minutes, so clearly a disk issue as you say. However, the log is 15GB smaller on that server, and I can't work out why!
January 9, 2009 at 6:44 am
I'm assuming that the recovery model is full for the database. Are you doing regular log backups in both locations? Same interval? If so, the log files should be relatively close in size.
Additionally, just because the physical log file is that big doesn't mean that the db is using it all. Try running DBCC SQLPERF (LOGSPACE) to see how much is actually used in the log file.
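It returns one row per database:

-- One row per database: Log Size (MB), Log Space Used (%), Status
DBCC SQLPERF (LOGSPACE);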
Hope this helps.
David
@SQLTentmaker
"He is no fool who gives what he cannot keep to gain that which he cannot lose" - Jim Elliot
January 9, 2009 at 7:34 am
The log is massively oversized, but they do run a process that builds up the transaction log a great deal. To save time on the backups, is it worth scheduling a shrink each night after the big job and before the backups, or will this just lead to heavy fragmentation?
January 9, 2009 at 7:41 am
Avoid the shrink. That really is never the solution. More frequent log backups would be the best bet to keep the size in check. You can establish multiple schedules so that you run more frequently during the time that the process uses a lot of the log. Monitor log usage at intervals (running the command I provided earlier and storing the output for analysis) and see how you are doing. Once you get things stabilized, you can size the log to cover the growth between backups.
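A minimal sketch of capturing that output for later analysis (the table and column names are just illustrative):

-- Illustrative: capture DBCC SQLPERF(LOGSPACE) output on a schedule
CREATE TABLE dbo.LogSpaceHistory (
    CaptureTime     DATETIME      NOT NULL DEFAULT GETDATE(),
    DatabaseName    SYSNAME       NOT NULL,
    LogSizeMB       DECIMAL(18,2) NOT NULL,
    LogSpaceUsedPct DECIMAL(5,2)  NOT NULL,
    [Status]        INT           NOT NULL
);

INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, [Status])
EXEC ('DBCC SQLPERF (LOGSPACE)');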
I would be interested to hear how things progress with your backup. Additionally, if at all possible, consider getting some compression for your backups to reduce that time. You will be pleasantly surprised at the gain you can get when backing up to slow disk. I was able to achieve the same time using compression to slow disk as I got uncompressed to the best disk around. Made it a nice savings in more ways than one.
David
@SQLTentmaker
"He is no fool who gives what he cannot keep to gain that which he cannot lose" - Jim Elliot
January 9, 2009 at 8:18 am
Thanks for the tips.
Thing is, though, I am taking log backups every 10 minutes, 24 hours a day!!
January 9, 2009 at 8:25 am
How does the free space and fragmentation level of the local drive look?
Do you see any other processes on that server that might be competing for resources?
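If you're on 2005 or later, one quick way to see what else is active while the backup runs (session ids 50 and below are system sessions):

-- What else is running or waiting while the backup is going (SQL 2005+)
SELECT session_id, status, command, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE session_id > 50;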
January 9, 2009 at 8:46 am
Is this every night, or regularly, or did this happen last night? It's possible something hung up. If it happens often, either you have hardware issues or you have resource contention.
Can you run a backup to another local drive and see if that's the issue?
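Something along these lines, with the path pointed at a different spindle (db name and path are placeholders):

-- COPY_ONLY (SQL 2005+) keeps the test out of the backup chain;
-- STATS reports progress every 10 percent
BACKUP DATABASE [YourDB]
TO DISK = N'E:\Test\YourDB_test.bak'
WITH COPY_ONLY, STATS = 10;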
January 27, 2009 at 9:07 am
Hi people. An update....
It turned out the other week that the distribution database was taking up all the backup time and holding everything up. The reason the dist db got so big was that the retention setting had somehow changed to 30,000+ hours!!! It was set to 72, but it seems to have changed itself at some stage?! Not sure why this happened.
Anyway thanks for the other useful pointers for future reference! 😀
January 27, 2009 at 9:35 am
Thanks for the update, and glad it's working. The distribution db can definitely cause issues like that with the retention setting, or with subscribers disappearing.
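For anyone who needs to check or reset that retention, something along these lines at the distributor should do it (values are in hours):

-- Show the distribution database's settings, including retention
EXEC sp_helpdistributiondb;

-- Put transaction retention back to 72 hours
EXEC sp_changedistributiondb
    @database = N'distribution',
    @property = N'max_distretention',
    @value    = 72;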