October 26, 2006 at 12:20 am
October 27, 2006 at 6:24 am
To answer the second question first, I'm betting the USB connection is the cause of the slow backup. USB just isn't up to speed with other disk interfaces. It works fine for small files, but even then, if you run a timing test between a local drive and a USB drive, you'll notice even a small file takes longer over USB than it does over IDE/SCSI. And USB is a serial interface (it's the Universal Serial Bus, after all), so its real-world throughput falls well short of a directly attached IDE/SCSI drive.
As far as your first question goes, a differential backup doesn't just back up the inserts on your large table. It backs up every extent in the database that has changed since the last full backup: inserts, deletes, and effectively an insert and a delete for every update. If you have a lot of activity on your DB, it's no wonder your differential is so huge.
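Roughly what that looks like in T-SQL (the database name and backup paths here are just placeholders):

-- Full backup; this resets the differential base.
BACKUP DATABASE MyBigDB
    TO DISK = N'D:\Backups\MyBigDB_full.bak'
    WITH INIT;

-- Differential backup: it contains every extent changed since that full backup,
-- so heavy update activity makes it balloon toward the size of a full.
BACKUP DATABASE MyBigDB
    TO DISK = N'D:\Backups\MyBigDB_diff.bak'
    WITH DIFFERENTIAL, INIT;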
At my workplace, we stick with hourly transaction log backups and daily full backups. Our DB is approximately 21 GB with a lot of activity during the day, so this works better for us than attempting a differential. And we have tried differentials; they just don't work effectively on DBs with a lot of activity (as far as we're concerned).
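The hourly piece is just a scheduled log backup, something along these lines (names are placeholders, and the database has to be in the FULL or BULK_LOGGED recovery model for log backups to work):

-- Hourly transaction log backup, typically run from a SQL Agent job;
-- in practice you'd generate a unique file name per run to keep the log chain intact.
BACKUP LOG MyBigDB
    TO DISK = N'D:\Backups\MyBigDB_log_0800.trn';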
Hope this helps.
October 27, 2006 at 7:50 am
I had a similar issue on some very large archive databases a few months back, and we use Veritas, which uses the SQL APIs anyway. Sun, our support contact, couldn't help.
I did some research, and the best answer I came up with is that differential backups track changes at the extent level, so if the updates touch most of the pages and extents in the database, each of those entire extents has to be included in the diff backup.
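To put rough numbers on it (purely illustrative): an extent is 64 KB (eight 8 KB pages), so if your updates dirty even one row in each of, say, 500,000 extents, the differential has to carry roughly 500,000 x 64 KB, about 30 GB, even though the actual changed bytes might be a tiny fraction of that.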
October 27, 2006 at 7:56 am
There is another option. Make some of your filegroups READ ONLY, then back up only the filegroups that are READ/WRITE. @=) (See Partial Backups in BOL.)
Or put your large table on its own filegroup and back up that filegroup separately from everything else. Something like this, with the database and filegroup names as placeholders:
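-- Partial backup: only the read/write filegroups (read-only ones are skipped).
BACKUP DATABASE MyBigDB
    READ_WRITE_FILEGROUPS
    TO DISK = N'D:\Backups\MyBigDB_rw.bak';

-- Or back up just the filegroup that holds the big table.
BACKUP DATABASE MyBigDB
    FILEGROUP = N'BigTableFG'
    TO DISK = N'D:\Backups\MyBigDB_bigtable_fg.bak';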