March 18, 2009 at 3:41 am
Hi,
I am a newbie to SQL Server. I want to take a backup of a SQL Server database to a machine other than the one where it is installed. How can I do that? I thought about mapping a drive from the target machine on the machine where the db is installed and then taking the backup using the GUI tool, but I'm not sure if that will work.
Please guide me through the process/methods of doing this.
March 18, 2009 at 6:53 am
Why are you trying to do this? The preferred method is to back up locally and then copy the backup file to a network share. It can be done using UNC paths but, at least in SQL 2000, SQL didn't recognize mapped drives. The biggest reason NOT to back up over the network is that a network hiccup could kill the process or, worse, corrupt the file without your knowledge, until of course you need it in the event of a disaster. Search around this site; lots of opinions on why or why not to do it, and how to do it as well.
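For example, a minimal sketch of the local-then-copy approach (the database name, paths, and share below are all made up, and the copy step assumes xp_cmdshell is enabled; a SQL Agent CmdExec step or a scheduled robocopy works just as well):

BACKUP DATABASE [YourDB]
TO DISK = N'D:\Backups\YourDB.bak'
WITH INIT, STATS = 10;

-- copy the finished file to the remote share afterwards
-- (the SQL Server service account needs write access to the share)
EXEC master..xp_cmdshell 'copy D:\Backups\YourDB.bak \\BackupServer\SQLBackups\';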
-- You can't be late until you show up.
March 18, 2009 at 8:04 am
With respect to the previous post, backing up locally is INDEED the preferred method, and you would benefit more from backing up locally and then either making a copy of the local backup or having your local backup folder backed up by other third-party methods.
That said, we keep backups on local disk, on a network share, and on another server.
In SQL 2005 you can set this up with a maintenance plan:
SQL Server Management Studio --> Management --> Maintenance Plans --> Back Up Database Task --> choose database name (or all databases) --> backup type (Full, depending on your business need) --> choose disk instead of tape --> choose “Create a backup file for every database” --> Folder name: \\serverName\D$\folder_name --> backup extension: choose .bak (again, if this is a full backup)
(by default the folder name points at the local backup drive)
Or if you want the basic T-SQL, which works for both 2000 and 2005, here is the command:
BACKUP DATABASE [Database_name] TO DISK = N'\\serverName\D$\folder_name\database_name.bak' WITH NOINIT, NOUNLOAD, NAME = N'Database_name backup', NOSKIP, STATS = 10, NOFORMAT
Note: for an external backup location, you just replace serverName with the network share name.
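Wherever the file lands, it's worth checking now and then that it is actually readable; a quick sketch using the same (made-up) path as above:

RESTORE VERIFYONLY
FROM DISK = N'\\serverName\D$\folder_name\database_name.bak';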
Hope that helps
March 18, 2009 at 9:43 am
We do all of our backups to tape via NetBackup.
We also tried EVault disk backup, but there are a few issues with it that we don't like.
March 18, 2009 at 11:25 am
Can you tell me why you had issues with EVault? You are the first one I've come across on this board who has had EVault experience. We currently have tape, share, local, and EVault backups for our production servers (EVault is always the secondary form of backup), but we have had some issues ourselves, especially with SharePoint environment backups, and I need to know what other people face. Also, do you take transaction log backups with EVault?
Thanks for any info,
Ellie
March 18, 2009 at 1:54 pm
We bought into EVault back in 2005 or 2006 and have been using it for files/Exchange. For SQL we are still using NetBackup Enterprise Server with an ADIC Scalar 1000 robot and DLT 8000 drives.
The original plan was to migrate everything to EVault, and last year we bought another vault and some storage and started doing backups of a few databases on it, around 500GB-600GB of data in total: our most critical databases, the ones all the raw data goes into.
First, EVault doesn't support differential backups. We don't do log backups, and every few hours our SAN snapshots to a set of BCVs; almost every db we have is in simple recovery mode. No big deal for the critical dbs, but for the reporting databases it would mean we need to buy more storage or, in some cases, new servers and new application licenses, since they won't have room for the full recovery model. One of our reporting databases has a log that's over 200GB in size.
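For anyone following along: differentials still work under the simple recovery model; it's log backups that need full or bulk-logged. In plain T-SQL a differential is just one extra option on BACKUP (database name and path here are made up):

BACKUP DATABASE [YourDB]
TO DISK = N'D:\Backups\YourDB_diff.bak'
WITH DIFFERENTIAL, INIT, STATS = 10;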
Second, if you want offsite recovery, you have to buy a second vault, double the storage, and replicate everything. With tape, if you want to restore offsite, you buy a cheap PC at Best Buy, install Linux, NetBackup, and a cheap tape library, and restore all you want.
Maybe they've updated the client, but in 6.40 there was no way to restore a database to a different client. In NetBackup I go to the server I want to restore on, open the client, pick a backup to restore, and it creates a script; I edit it to change the drive paths, save, and start the restore. It automatically runs the right SQL statements to restore with MOVE and whatever else is needed. With EVault we had to restore as a file and then restore from the file. NetBackup takes 4-6 hours for a 150GB database; EVault took almost a day for the entire process.
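The "restore with MOVE" part is ordinary T-SQL under the hood, something along these lines (logical file names and paths made up):

RESTORE DATABASE [YourDB]
FROM DISK = N'D:\Backups\YourDB.bak'
WITH MOVE 'YourDB_Data' TO 'E:\Data\YourDB.mdf',
     MOVE 'YourDB_Log' TO 'F:\Logs\YourDB_log.ldf',
     STATS = 10;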
I don't really like EVault's server-based policies, but that's not that big a deal. They also create a lot of data on the client, which NetBackup doesn't.
LTO-4 tape is cheaper than disk; list price for HP-branded tapes is $55 or close to it. Disk also costs electricity to run, while storing data on tape doesn't use any power once the data is there.
Disk isn't as cheap as EVault says, because EVault is owned by Seagate. HP will charge $500 for a 1TB SATA drive; add the JBOD and it gets more expensive.
Scalability: NetBackup supports SAN backups over FC if you buy the hardware and licenses; EVault doesn't.
Finally, security. With EVault, change the wrong value and you can literally wipe out years of backed-up data by accident, or if you take the drives out, all the backups are gone. Think of that sysadmin from San Francisco who locked everyone out for weeks. With tape you just import the tape, and every organization stores tapes offsite.
EVault runs on top of SQL 2005 Express, so its data can become corrupt.
We have a PO waiting for approval to buy an HP MSL 8096 tape library with 2 drives, 100 tapes, a new NetBackup server, Red Hat Linux for the OS, and some minor accessories. Total cost is around $35,000, plus another $11,000 over a few years for 200 more tapes. I ran the numbers, and EVault in two locations would cost us close to $200,000 for 3 years of storage. With tape you just buy more tape; with EVault you are always buying more disk whenever you need more storage.
March 19, 2009 at 11:50 am
For all of you doing local backups, I'd like to know how big the database is that you are backing up and what type of disk subsystem you have (RAID level, number of physical drives, drive RPM, etc.).
We have a database that is nearly 100 GB in size. The database is on a RAID 1 SAS (Serial Attached SCSI) array consisting of just two physical drives. The backup goes to a different RAID 0 SAS array, again consisting of just two physical drives.
The problem is that when the backup is running, the database is unresponsive; heck, the entire server is unresponsive. I can't even open an application like a web browser directly on the server while a backup is running. It takes about 30 minutes to do this backup.
I've asked and looked around before, but there is no way to "throttle" a backup. So it seems to me this would be a more common problem for people.
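For what it's worth, the closest things I've seen to knobs are the BUFFERCOUNT and MAXTRANSFERSIZE options on the BACKUP command, which change how hard the backup drives the I/O subsystem rather than throttling it outright. Whether they help at all depends on the hardware, so treat the names and values below as made up, not a recommendation:

BACKUP DATABASE [YourDB]
TO DISK = N'E:\Backups\YourDB.bak'
WITH INIT, STATS = 10,
    BUFFERCOUNT = 4,         -- fewer buffers means less outstanding I/O at once
    MAXTRANSFERSIZE = 65536; -- smallest allowed transfer size, smaller I/O bursts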
Our hosting provider has a CommVault service that will do a backup over the network. When that is running, you can't even tell; the server stays very responsive. Obviously this makes sense, as it also takes 3 hours to do the backup. I'm fine with it taking 3 hours. I'd be fine with a daily backup taking 23 hours to complete as long as it didn't cause any performance issues or adverse side effects.
The problem with using our hosting provider's CommVault service is that in the case of a serious user error, where we do something to screw up the database, it is much faster to restore from a local backup than it would be to call our hosting provider and get the backup restored over the network. The only thing the CommVault backup is good for is the case of some physical disaster; it serves as our backup that is stored off site. So we will continue to use it, but I'd also like to be able to do a local backup without affecting performance.
I don't understand how everyone can recommend doing a local backup when, at least for me, it has such a negative performance impact.
March 19, 2009 at 11:59 am
Do you have any database files on drive C?
And the RAID 0: is it on another channel of the RAID controller or on a whole other controller?
We did have similar problems with tape backup on an old cluster we used to have. It was two Compaq DL760s, and we used to snapshot the data via EMC to another set of disks on another server and run the backups from that server.
March 19, 2009 at 12:25 pm
We originally had just the RAID 1 configuration consisting of two 300 GB SAS drives, partitioned into a C: drive of 25 GB and a D: drive of 275 GB. The operating system and other program files are on C:, and the database files are on D:. So from a logical standpoint, no, there are no database files on the C: drive. But from a physical standpoint, yes.
We used to do the local backups on the D: drive, so it was both reading and writing to the same physical RAID 1 array. It took an hour to do the local backup that way and the server was just as unresponsive during that time.
So we added another RAID array. Dell OpenManage is telling me that we have a PERC 5/i Integrated RAID controller. It has a "Connector 0" and a "Connector 1"; Connector 1 is empty.
I'm sorry, I must make a correction to my previous post: we do not have two other drives in a RAID 0 configuration, we just have one.
So on Connector 0, we have three physical drives. The first two are in a RAID 1 configuration and the third is by itself in a RAID 0 configuration.
To me, SQL Server should be smart enough to manage resources during a backup, such that when a backup is running and some other database request comes in, the backup gets less priority.
March 19, 2009 at 12:44 pm
I would add more disks.
The only thing we ever leave on the physical disks that host the OS is tempdb, and only if we don't expect heavy usage. I had the same thing on a server recently: needed to clear up some space and moved a log file for a db to drive C, and the server was very unresponsive the whole time.
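If you do end up shuffling files between drives, the usual pattern for a user database looks like this; the logical name and path are made up, and the database has to be offline while the physical file is moved:

ALTER DATABASE [YourDB] SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- move the physical .ldf to its new home, then point SQL Server at it:
ALTER DATABASE [YourDB]
MODIFY FILE (NAME = 'YourDB_Log', FILENAME = 'E:\Logs\YourDB_log.ldf');
ALTER DATABASE [YourDB] SET ONLINE;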