January 13, 2009 at 9:51 am
I'm curious how many people verify their backups. I found that step can take twice as long as performing the backup itself.
January 13, 2009 at 9:52 am
I both like and don't like the concept of the backup file name containing the date of the backup. Right now I don't code the date into the file name on our systems, and I don't use maintenance plans for backups. It's nice in that you can look at the file and see the database name and backup date, but the file's timestamp already shows the date, so the information seems almost redundant, especially since the file timestamp isn't altered when the file is restored. Plus, the thought of digging into the system tables (though I love working with system tables!) just to find file names to delete really bugs me for some reason. I prefer to have my system simply overwrite the backup and not worry about it. On occasion a backup will fail and something will have squirreled up the backup file, in which case I delete it and do a backup with overwrite, and that seems to take care of it.
-----
[font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]
January 13, 2009 at 9:58 am
don.schaeffer (1/13/2009)
I'm curious how many people verify their backups. I found that step can take twice as long as performing the backup itself.
I always verify all backups at the SQL Server level, but that doesn't verify against the data, just that the backup is structurally sound. My guy who does the actual Tivoli backups confirmed that he does compares, but it might just be against the computed CRC value. Tivoli is a pretty bloody complex system; he said he'd have to research exactly what a verify consists of to give me a better answer.
Yeah, it could easily double your backup time, though with compressed backups....
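For what it's worth, the SQL Server-level verify I'm describing is roughly this (database and path names are just placeholders, and the CHECKSUM option assumes 2005 or later):
[code]
-- Back up with a checksum so page-level problems are caught at backup time
BACKUP DATABASE MyDB
TO DISK = N'D:\Backups\MyDB_Full.bak'
WITH INIT, CHECKSUM;

-- Structural verify: confirms the backup set is complete and readable,
-- but does not prove the data inside is logically consistent
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\MyDB_Full.bak'
WITH CHECKSUM;
[/code]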
-----
[font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]
January 13, 2009 at 10:24 am
All I have to say is:
If you do not want to get hit by a train ...
do not sit on the railroad tracks!
Regards
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."
January 13, 2009 at 10:33 am
Just remember your backup is only as good as the restore of the data. If the restore fails, then as far as the customer is concerned, you didn't back up their data. :D
January 13, 2009 at 10:51 am
I do each backup to a separate file, named with the DB name and a YYYYMMDDHHMM datetime (hours and minutes matter with log backups). Single file = single point of failure = not a good idea.
I test the backups at least weekly.
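Something along these lines builds that kind of file name (path and database name here are only placeholders):
[code]
-- Build a file name like D:\Backups\MyDB_200901131051.bak (YYYYMMDDHHMM stamp)
DECLARE @stamp char(12), @file nvarchar(260);
SET @stamp = CONVERT(char(8), GETDATE(), 112)
           + REPLACE(CONVERT(char(5), GETDATE(), 108), ':', '');
SET @file  = N'D:\Backups\MyDB_' + @stamp + N'.bak';

BACKUP DATABASE MyDB TO DISK = @file WITH INIT;
[/code]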
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
January 13, 2009 at 11:14 am
Yes, you can set it up to go down to minutes.
January 13, 2009 at 12:00 pm
Having seen corrupt backup catalogs, disk failures, and the like, I prefer simplicity on recovery; so my vote and practice is to separate the backups, not overwrite existing files, and permanently archive date-named backups at regular intervals.
January 13, 2009 at 12:25 pm
Always run a verification. It is also a really good idea to run restores periodically.
One of the worst feelings I have had was pulling a backup from tape (approx. 6 hours) and restoring it (another 6 hours), only to find the database could not be brought back online. After a call to Microsoft support, we determined the best course of action was the previous day's backup.
It was a very good thing the backup from the previous day worked, and the nature of the business allowed re-processing the missing day's work.
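A periodic test restore under a throwaway name is the only check that really proves the point; roughly like this (logical file names and paths are just examples):
[code]
-- Restore the full backup under a scratch name, moving the files elsewhere
RESTORE DATABASE MyDB_RestoreTest
FROM DISK = N'D:\Backups\MyDB_Full.bak'
WITH MOVE 'MyDB_Data' TO N'E:\RestoreTest\MyDB_Test.mdf',
     MOVE 'MyDB_Log'  TO N'E:\RestoreTest\MyDB_Test.ldf',
     REPLACE, STATS = 10;

-- Then check the restored copy for corruption
DBCC CHECKDB (MyDB_RestoreTest) WITH NO_INFOMSGS;
[/code]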
January 13, 2009 at 12:42 pm
This is funny to me, because until my first 2K8 installation Christmas week, I didn't know that SQL Server would actually append to an existing disk file. That didn't stop me from anally not selecting "Append to existing media" when doing a manual backup.
Size, simplicity, manual file manipulation. Those are all good enough reasons for me. I wouldn't want to be the stranger picking up the pieces who came across something named "ALL_SYSTEM_DBS.BAK". There really isn't any difference between multiple DBs in a single file and multiple files in a single folder.
I'll stick to separate files.
January 13, 2009 at 2:21 pm
I use the following strategy: at full backup time, two new file names are generated (each including a timestamp): one for full backups, one for log backups. The full backup is done to this new file, and the first log backup of the day initializes the log backup file. All other log backups that day are appended to the same file.
To apply this discipline automatically, I put a free solution (YourSqlDba) on SourceForge to do maintenance, optimizations, and backups. It is entirely T-SQL script based and is built around SQL Agent and Database Mail. The solution has its own database and T-SQL code, with detailed logging of the maintenance steps and an email system that lets you know when things go well as well as when things go wrong.
By mailing when something goes wrong (e.g. when a backup file can't be written), the DBA knows that the backup needs to be taken again right away.
It is important not to mix full and log backups in the same backup file, because of the risk of overwriting backup content.
This system allows many log backups to be kept, to help recover forward from a previous full backup. By default, only the day's full backups are kept, plus a week of log backups. These files all bear different names, which avoids accidental overwrites when they are copied. Full backups must be copied each day to other media to avoid filling the local backup space; this is easy to do by adding a second step, for that sole purpose, to the SQL Agent job that does the backup.
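Stripped of the YourSqlDba plumbing, the daily pattern boils down to something like this (file names are examples only):
[code]
-- A brand-new full backup file each day
BACKUP DATABASE MyDB
TO DISK = N'D:\Backups\MyDB_FULL_200901131200.bak'
WITH INIT;

-- The first log backup of the day initializes the day's log file...
BACKUP LOG MyDB
TO DISK = N'D:\Backups\MyDB_LOG_20090113.trn'
WITH INIT;

-- ...and every later log backup that day appends to the same file
BACKUP LOG MyDB
TO DISK = N'D:\Backups\MyDB_LOG_20090113.trn'
WITH NOINIT;
[/code]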
January 14, 2009 at 6:56 am
Wayne West (1/13/2009)
I both like and don't like the concept of the backup file name containing the date of the backup. Right now I don't code the date into the file name on our systems, and I don't use maintenance plans for backups. It's nice in that you can look at the file and see the database name and backup date, but the file's timestamp already shows the date, so the information seems almost redundant, especially since the file timestamp isn't altered when the file is restored. Plus, the thought of digging into the system tables (though I love working with system tables!) just to find file names to delete really bugs me for some reason. I prefer to have my system simply overwrite the backup and not worry about it. On occasion a backup will fail and something will have squirreled up the backup file, in which case I delete it and do a backup with overwrite, and that seems to take care of it.
Hi Wayne,
I am clearly not as salted with SQL Server as you are, but I use maintenance plans to do my backups. They use dbname_yyyymmdd_sequencenumber as the file name, and I also have a maintenance plan to clean up old backups, so I have little to do in the way of backups. I come in every morning and do some checks on the database to make sure the backups are OK, etc. Now I suppose you will laugh at me, but I have one user database that is about 2 GB in size. I dunno what I'll do with a 700 GB database.
I had never even heard of multiple backups in one file before, so all my backups (user DBs, test DBs, system DBs, and others) go into different folders.
:-P
Manie Verster
Developer
Johannesburg
South Africa
I can do all things through Christ who strengthens me. - Holy Bible
I am a man of fixed and unbending principles, the first of which is to be flexible at all times. - Everett McKinley Dirksen (Well, I am trying. - Manie Verster)
January 14, 2009 at 11:05 pm
I also created a huge proc and database to track backups. My backup proc is 1,500 lines of code and can do LiteSpeed or native backups; it can do just about anything, including simple/debug logging. I back up about 1.5 TB of data a day, and a few of the databases have log backups every 5-15 minutes. I have a file per diff or full backup and one daily log file I append to for log backups... just for convenience, so I don't have tons of files to deal with if I ever have the need.
I verify most of the databases, but often I don't have the time and want to offload the work to another server to verify at some point. I restore our most important database after the backup is done as a way of verification and to use the database for ad-hoc querying by our team. Everything is backed up to another server (SAN) and restored on yet another server. A little speed is lost, but I back up to more than one file at a time (a proc parameter) to speed things up a bit.
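Backing up to more than one file at a time is just a striped backup; a bare-bones version looks like this (server and path names are made up):
[code]
-- Striping across several files lets SQL Server write them in parallel
BACKUP DATABASE BigDB
TO DISK = N'\\BackupServer\SQLBackups\BigDB_1.bak',
   DISK = N'\\BackupServer\SQLBackups\BigDB_2.bak',
   DISK = N'\\BackupServer\SQLBackups\BigDB_3.bak'
WITH INIT, STATS = 5;
[/code]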
I will try to post my backup script on my site soon for anyone that is interested.
Chuck Lathrope
January 15, 2009 at 8:50 am
Manie Verster (1/14/2009)
... I am clearly not as salted with SQL Server as you are, but I use maintenance plans to do my backups. They use dbname_yyyymmdd_sequencenumber as the file name, and I also have a maintenance plan to clean up old backups, so I have little to do in the way of backups. I come in every morning and do some checks on the database to make sure the backups are OK, etc. Now I suppose you will laugh at me, but I have one user database that is about 2 GB in size. I dunno what I'll do with a 700 GB database. ...
I ain't that salty! Haven't even hit 50 yet, though it's disturbingly close. :hehe:
I don't mind maintenance plans for backups; they do those fairly well. I do not like them for DBCCs, though, and with most of the maintenance plans I see, people want them to do everything. That's fine, as far as it goes, but I want more details. I do full DBCCs every work night on everything (sp_msforeachdb, except for one server) via OSQL and send the results to a text file. I have an aging system to keep the last ten runs. I then pipe the results through Find looking for "errors", and that file is grabbed by a nightly Perl script so that I see a couple of dozen lines of DBCC results rather than thousands of lines. If I see non-zero numbers in that result set, I can probe deeper into the full file that's on the server.
This does two things for me. First, detailed information: as far as I've seen in 2000, maintenance plans don't give detailed results in their logs. Second, I wouldn't have confidence that a maintenance plan would pick up new databases, whereas I know sp_msforeachdb will catch everything. But for backups, I guess they'd be OK.
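For anyone curious, the nightly run amounts to something like this, with OSQL redirecting the output to a text file (paths and names are only placeholders):
[code]
-- dbcc_all.sql: integrity-check every database on the instance
-- (full output kept so the "0 allocation errors / 0 consistency errors"
--  summary lines can be picked out of the text file afterwards)
EXEC sp_msforeachdb 'DBCC CHECKDB (''?'')';

-- Scheduled command line, roughly:
-- osql -E -S MyServer -i dbcc_all.sql -o D:\DBCC\dbcc_results.txt -w 500
[/code]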
Old habits die hard. I spent my formative years as a DBA in a 4.21/6.5 environment where you did everything by hand. It warn't like them fancy gewgaws and wizards them young kids got now! GET OFF MY LAWN! 😀
-----
[font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]
January 15, 2009 at 9:14 am
Wayne West (1/15/2009)
Old habits die hard. I spent my formative years as a DBA in a 4.21/6.5 environment where you did everything by hand. It warn't like them fancy gewgaws and wizards them young kids got now! GET OFF MY LAWN! 😀
Um, Wayne. You're standing in the neighbors' yard. It's time to come back inside and take your meds. 😛
Seriously, though, I'm with Mr. West. The only thing I might use the wizards for would be to generate a script, or to create objects in a dev environment that I then script out. All T-SQL to be run in production lives in source control and is subject to review (several times by me and several more times by others) BEFORE it's executed anyway... right?