September 26, 2010 at 6:26 am
I am trying to script out a backup, zip, copy, unzip, and restore sequence. I need to back up about 17 DBs (the largest is around 10GB) and restore them on another system. The two systems are on different networks, so I can't back up on one and restore directly onto the other.
I thought I could save some copy time by shrinking each database first to get the unused space out of the files. Here is what I did, running the following commands against every database, with a full backup before and after:
DBCC SHRINKDATABASE (mydb, 10);
GO
-- sp_helpdb to verify the log size, file ID, and recovery model
EXEC sp_helpdb mydb;
GO
-- If necessary, change the recovery model to SIMPLE
ALTER DATABASE mydb SET RECOVERY SIMPLE;
GO
USE mydb;
GO
-- Back up the log with NO_LOG (deprecated; truncates the log and breaks the log chain)
BACKUP LOG mydb WITH NO_LOG;
GO
-- Shrink the log file (file ID 2)
DBCC SHRINKFILE (2);
GO
-- If necessary, change the recovery model back to FULL
ALTER DATABASE mydb SET RECOVERY FULL;
GO
It seems like I made it worse; a couple of the backup files are actually larger. Any suggestions on making the whole process quicker?
September 26, 2010 at 7:11 am
Backups don't contain the empty space, so shrinking won't help; it's actually detrimental to the database because of the index fragmentation it causes. Your indexes may now need to be defragmented.
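A minimal sketch of that defrag step, assuming a hypothetical table name (dbo.MyTable); sys.dm_db_index_physical_stats will show you which indexes actually need it:
USE mydb;
GO
-- Check fragmentation (dbo.MyTable is a placeholder; repeat for your real tables)
SELECT i.name, ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id;
GO
-- Rebuild everything on the table if fragmentation is heavy
ALTER INDEX ALL ON dbo.MyTable REBUILD;
GO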
After setting the databases back to FULL recovery, you need to take a full backup to restart the log chain, which was broken when you switched to SIMPLE.
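For example (the path and file name are just placeholders):
USE master;
GO
ALTER DATABASE mydb SET RECOVERY FULL;
GO
-- This full backup starts a new log chain; log backups taken before it have no base
BACKUP DATABASE mydb TO DISK = N'D:\Backups\mydb_full.bak' WITH INIT;
GO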
September 26, 2010 at 5:43 pm
I did take a full backup afterward, and I will defrag the indexes. Thank you :)
Any suggestions for getting the backups any smaller?
September 26, 2010 at 6:06 pm
What got bigger? The .MDF or the .LDF?
September 26, 2010 at 6:21 pm
Jpotucek (9/26/2010)
I did take a full backup afterward, and I will defrag the indexes. Thank you :) Any suggestions for getting the backups any smaller?
Use a compression tool to compress the backup files. Or download and install Quest Litespeed or Redgate's SQL Backup and use either of those tools to back up your databases.
If you want to speed up the transfer of the files across the network, look into a tool called RocketStream. It copies files across the network at a much higher rate than a normal copy. I use it to move my compressed 160GB backup file from CA to TX, which takes only about 5 hours across the WAN.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
September 26, 2010 at 6:26 pm
Thank you for the tips. I'll check them out.
September 26, 2010 at 6:32 pm
homebrew01 (9/26/2010)
What got bigger? The .MDF or the .LDF?
The MDF grew by several GB.
September 27, 2010 at 12:59 am
The only way I can see to get those copied files much smaller is to use a more efficient compression technique. Zip offers several levels of compression (higher ones requiring more CPU time to compress the file). I've also found that the native 7z format offered by 7-zip compresses SQL backups better than conventional Zip does.
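A sketch of how that could slot into a scripted sequence, assuming xp_cmdshell is enabled and 7z.exe is on the PATH (both are assumptions; the paths and file names are placeholders). The -mx switch sets the compression level, from 1 (fastest) to 9 (maximum):
-- Compress the backup with 7-Zip; higher -mx means a smaller file but more CPU time
EXEC master..xp_cmdshell '7z a -t7z -mx=9 D:\Backups\mydb.7z D:\Backups\mydb_full.bak';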
September 27, 2010 at 2:39 am
paul.knibbs (9/27/2010)
The only way I can see to get those copied files much smaller is to use a more efficient compression technique. Zip offers several levels of compression (higher ones requiring more CPU time to compress the file). I've also found that the native 7z format offered by 7-zip compresses SQL backups better than conventional Zip does.
I agree with Paul. I always use 7-zip to compress my backups (and DON'T use compression from within SQL Server, as it is slower, less efficient, and the result can't be compressed further as effectively). I can get a full backup of one of my 20GB databases under 1GB this way. And if you use the LZMA2 option in 7-zip, you can use more threads for the compression, which is great on my Core i7 (LZMA only seems to support two threads). 🙂
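If your 7-Zip build has LZMA2, selecting it and the thread count looks something like this (same xp_cmdshell and placeholder-path assumptions as the sketch above; -m0 picks the method and -mmt the number of threads):
-- LZMA2 with 4 threads (adjust -mmt to your core count)
EXEC master..xp_cmdshell '7z a -t7z -m0=lzma2 -mmt=4 D:\Backups\mydb.7z D:\Backups\mydb_full.bak';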
September 27, 2010 at 2:51 am
7-Zip is the way to go.
September 27, 2010 at 7:12 am
I have 7-Zip 4.65 and I don't have the LZMA2 option in 7-zip. Any ideas?
September 27, 2010 at 7:31 am
Guessing that's a feature of the beta version of 7-zip; I can't find any evidence that 4.65 supports it.
September 27, 2010 at 8:13 am
Speeding up the transfer across the network by sending the compressed version: use any compression tool in its fastest mode that:
A) writes the initial compressed file directly to the target directory, and
B) compresses, in your particular situation and with a given setup, at a "reads uncompressed data" rate greater than the rate your normal network file copy achieves.
A sketch follows below.
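Under the same xp_cmdshell assumptions as earlier (\\targetserver\backups is a placeholder UNC path): -mx=1 is 7-Zip's fastest mode, and writing the archive straight to the destination share satisfies point A.
-- Fastest compression, written directly to the target directory across the network
EXEC master..xp_cmdshell '7z a -t7z -mx=1 \\targetserver\backups\mydb.7z D:\Backups\mydb_full.bak';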
Another potentially useful tool for space issues is the Pismo File Mount Audit Package, which allows mounting an archive as a (mostly read-only) drive letter, so you can, for instance, restore directly from the 7z-compressed backup to SQL Server (see the example after the two lists below).
That is, instead of:
Unzip compressed backup (read compressed, write actual)
Restore backup
Zero-write file (skip with Instant File Initialization)
Read and Write actual data
you can get to:
Mount compressed backup
Restore backup
Zero-write file (skip with Instant File Initialization)
Read compressed, Write actual data
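For instance, once the compressed backup is mounted as (say) drive Z:, the restore is just an ordinary RESTORE (the drive letter and file name here are hypothetical):
-- Read the .bak straight out of the mounted 7z archive
RESTORE DATABASE mydb FROM DISK = N'Z:\mydb_full.bak' WITH RECOVERY;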
7-Zip has a variety of modes; my own testing shows that 4.65 LZMA has poor parallelism, while 4.65 BZip2 has good parallelism, and the 9.16 Beta also has good parallelism. I've tested both of these with the deliberate choice of 6 threads on a hex-core AMD, with all 6 cores pegged at 100%; one can always allocate fewer threads to leave room for other processes (and use a batch file with a "start" command and an affinity mask to deliberately allocate only certain cores, if you like).
Note that on said hex-core AMD, a test from local RAMdisk to local RAMdisk on a completely compressible file with the 9.16 Beta's LZMA2 and 6 threads showed a continuous read rate of 300MB/s, well over twice gigabit Ethernet's theoretical maximum of roughly 125MB/s.
If you're going to keep compressed archives of multiple files, I'd also recommend keeping some ECC (error-correction) data, as compression tends to make the set more sensitive to errors. I use par2-tbb from Churchsoft for this; it also has very good parallelism for creation (for verification, unless you have an SSD or a high-IOPS RAID, I recommend using -t- to turn off multithreading).
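The creation step looks roughly like this (par2 syntax from memory, so verify it against your build; -r10 asks for about 10% redundancy, and xp_cmdshell plus the placeholder paths are the same assumptions as before):
-- Generate ECC blocks alongside the archive so bit errors can be repaired later
EXEC master..xp_cmdshell 'par2 create -r10 D:\Backups\mydb.par2 D:\Backups\mydb.7z';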