February 4, 2016 at 12:19 pm
Steve Jones - SSC Editor (2/4/2016)
In most cases this works well. In a DR situation, it could be problematic, though I'd hope you keep the previous snapshot before taking another. However, you are then limited to the time of the snapshot, unless you also have log backups from SQL Server.
Keeping the previous snapshot, or even a dozen previous snapshots (which, BTW, would be terrible for performance), will still not provide reliable disaster recovery. As you already said, a SAN snapshot is copy-on-write, so any block on the volume that never changes still exists only a single time, even if you have a million snapshots. And if the drive fails on that block, you are completely "attached to another object by an inclined plane, wrapped helically around an axis".
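On Steve's point about log backups: they are what get you past "the time of the snapshot" to true point-in-time recovery. A minimal sketch, where the database name, paths, and STOPAT time are all placeholders:

```sql
-- Routine log backups (run on a schedule) are what make
-- point-in-time restore possible at all.
BACKUP LOG MyDb
TO DISK = N'X:\Backups\MyDb_log.trn'
WITH CHECKSUM;

-- Recovering to just before a failure: restore the last full backup
-- without recovery, then roll the log forward to a chosen moment.
RESTORE DATABASE MyDb
FROM DISK = N'X:\Backups\MyDb_full.bak'
WITH NORECOVERY, REPLACE;

RESTORE LOG MyDb
FROM DISK = N'X:\Backups\MyDb_log.trn'
WITH STOPAT = '2016-02-04T11:55:00', RECOVERY;
```

A SAN snapshot alone can only ever give you the moment the snapshot was taken; the log chain is what fills in everything after it.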
February 4, 2016 at 1:20 pm
True, it's not DR, but it's better than nothing. Always keep an old backup while the new one runs.
I wouldn't recommend snapshots as DR, nor did I mean to imply that if you read it that way. Just that if you use them, you don't want to depend on the currently running one.
February 4, 2016 at 3:48 pm
Steve Jones - SSC Editor (2/4/2016)
If Backup Exec writes the backup time before it's finished, that's an issue. Certainly you might not be able to rely on this for restores at that point. I think Idera, Redgate, Quest, and maybe a few others have been solid in using VSS backups that focus on SQL Server. Most of the others I've seen are unreliable.
In terms of SAN snapshots, as Hugo mentioned, these can work well and appear to be instantaneous, but from what I've seen, they work like database snapshots. A copy is made by moving blocks, and a read of the copy comes either from a copied block, or from the original block if it hasn't been copied. Blocks move as they change on the source, so it's copy-on-write.
In most cases this works well. In a DR situation, it could be problematic, though I'd hope you keep the previous snapshot before taking another. However, you are then limited to the time of the snapshot, unless you also have log backups from SQL Server.
I'm pretty sure Backup Exec actually sends raw T-SQL backup commands to the database rather than quiescing the database and snapshotting files. Again, I could be wrong.
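For reference, a "raw T-SQL backup command" of the sort an agent might issue is nothing exotic; something like the following, where the database name and UNC path are made up:

```sql
-- A native full backup issued over an ordinary connection.
-- CHECKSUM validates page checksums as the backup is written.
BACKUP DATABASE MyDb
TO DISK = N'\\backupserver\sqlshare\MyDb_full.bak'
WITH CHECKSUM, COMPRESSION, STATS = 10;
```

If the agent works this way, the backup is a normal SQL Server backup and gets recorded in msdb like any other.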
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
February 4, 2016 at 5:29 pm
If BackupExec sends raw T-SQL, that's weird. Do they then whack the local files once they've copied them to their own system or are they left there?
Nothing wrong with using raw T-SQL, and if that's the case, I think you're paying a bit for the organization of backups, but that's fine.
February 4, 2016 at 9:51 pm
My company used Backup Exec years ago before I worked here. It took SQL backups and moved them, but the entries in the system tables were weird. I don't have anything to reference any more, but it would write them to a location I couldn't determine. It also marked the database in MSDB as being backed up before the backup was complete. The network team understood backups well, but not database backups.
The network team also tried to use the SAN snapshot software to take backups for a while, but it made each database unavailable for a brief time. I didn't do a tremendous amount of research into it, but it was almost as if it were putting the database into a "paused" state. This didn't make any entries in MSDB and there were no log backups at all.
We've used native SQL backups for years now and everything's fine. My data and log files are on SAN volumes and I back up to a separate SAN volume. The network team uses their SAN snapshot software to replicate the backup files to other SAN volumes in another building. I don't have any issues with missing backup files and we don't have any availability problems with the database.
Like many others have said, I think native backups are the way to go. They don't cost anything, are transactionally aware, handle log backups so the log space can be reused, update the system tables properly, and let me test any backup at any time.
Edit: I'm very happy to say that getting the backups under control was my first assignment after becoming the DBA.
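The "test any backup at any time" point deserves a concrete example. RESTORE VERIFYONLY is the cheap first-line check (path below is hypothetical); only a full test restore truly proves the backup, but this catches truncated or damaged files quickly:

```sql
-- Reads the backup file and, with CHECKSUM, re-validates the page
-- checksums written at backup time. Not a substitute for a real
-- test restore, but a fast sanity check.
RESTORE VERIFYONLY
FROM DISK = N'X:\Backups\MyDb_full.bak'
WITH CHECKSUM;
```

The third-party tools discussed in this thread often can't offer an equivalent check from the DBA's side, which is part of the complaint.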
February 5, 2016 at 2:55 am
Ed Wagner (2/4/2016)
Like many others have said, I think native backups are the way to go.
Either native or via a tool such as Redgate SQL Backup or Dell LiteSpeed for SQL Server
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
February 5, 2016 at 6:20 am
Perry Whittle (2/5/2016)
Ed Wagner (2/4/2016)
Like many others have said, I think native backups are the way to go.
Either native or via a tool such as Redgate SQL Backup or Dell LiteSpeed for SQL Server
I would agree with this. I'm just trying to get my head around the SnapManager system that's in place at this new client. The more I read about it, the less I like what I read. Has anyone got any good news for me?
Thomas Rushton
blog: https://thelonedba.wordpress.com
February 5, 2016 at 6:50 am
We had SnapManager at our last gig, Thomas, don't you remember?
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
February 5, 2016 at 6:54 am
One of the architects tried implementing Commvault snapshot backups throughout the day. This was a nightmare; as others have said, it caused SQL\Oracle to go into a paused state. It got to the point where one snapshot hadn't completed when the next wanted to start.
After many an argument, we managed to get them to agree to revert to SQL native backups and Oracle RMAN backups, with third-party tools backing up the resulting files. We've never looked back since; it works really well using the Ola Hallengren scripts.
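For anyone landing here later, a typical call to Ola Hallengren's DatabaseBackup procedure looks roughly like this; the directory and retention values are examples, so check the current documentation for your version:

```sql
-- Full backups of all user databases, verified and checksummed,
-- with backup files older than 72 hours cleaned up.
EXECUTE dbo.DatabaseBackup
    @Databases   = 'USER_DATABASES',
    @Directory   = N'X:\Backups',
    @BackupType  = 'FULL',
    @Verify      = 'Y',
    @CheckSum    = 'Y',
    @CleanupTime = 72;
```

Because these are native backups under the hood, everything lands in msdb and the usual restore tooling just works.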
February 5, 2016 at 7:32 am
Maddave (2/4/2016)
Maddave, I'm curious about this issue you saw. I would expect that an entry wouldn't be written to backupset until the backup actually completed? Or was SBE "forging" a date_completed entry even when the backup hadn't finished yet?
Or, am I misunderstanding how this table gets used when backups are being run?
I'm not sure what Backup Exec is doing or how the table is updated in SQL Server either. However, I tested the backup process using Symantec on a 300 GB database, and when the job started I could see the connection coming into SQL from the backup user. You would expect the backup job to take some time to complete on a 300 GB database writing to a remote server, but when looking at the properties of the database a few seconds after the initial connection, the last backup date and time had been updated to the current time, and this was also reflected in the msdb..backupset table for all databases on the server too. So in SQL's eyes, the database had been backed up near-instantly. The SQL Server log also reported the backup being taken successfully.
I've just tested this again whilst writing this on one of our dev servers and can replicate this behaviour. The backup process starts and, instantly, SQL Server is updated, but in the Backup Exec admin console I can see the job is still in a running state. I cancelled the job in Backup Exec, but SQL still reports in the logs that the database was backed up fine. However, I couldn't restore the backup taken, as it hadn't completed successfully.
I'm not sure if this is a configuration/operation issue on the Backup Exec server, but it looks fine to me. It's not good, though, and it meant I couldn't rely on my own checks to make sure the databases had been backed up, hence going for Redgate's tools.
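On the "my own checks" point: the table is msdb.dbo.backupset (joined to backupmediafamily for the destination device), and a quick query shows what SQL Server itself believes was backed up, and to where:

```sql
-- The last few backups SQL Server has recorded, with start/finish
-- times and the device each was written to. A finish time suspiciously
-- close to the start time on a large database is a red flag.
SELECT TOP (10)
       bs.database_name,
       bs.backup_start_date,
       bs.backup_finish_date,
       bs.type,                      -- D = full, I = diff, L = log
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_finish_date DESC;
```

If a tool writes these rows before the backup actually finishes, as described above, this check stops being trustworthy, which is exactly the problem.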
We're at the beginning stages of letting Backup Exec handle our SQL backups. From the documentation I've looked at, I was under the impression that it just issued the SQL backup commands. I'll have to remember to test this scenario when we get our test server up. Do you remember if there were any more gotchas with using Backup Exec for SQL backups?
February 5, 2016 at 7:53 am
LightVader (2/5/2016)
We're at the beginning stages of letting Backup Exec handle our SQL backups. From the documentation I've looked at, I was under the impression that it just issued the SQL backup commands. I'll have to remember to test this scenario when we get our test server up. Do you remember if there were any more gotchas with using Backup Exec for SQL backups?
I didn't get involved with it much once I found the issues above and we quickly moved to Redgate, but we still use it for some dev databases. I find it very slow and difficult to set up the jobs quickly and logically. I think it is fine for file-level backup, but where you need some knowledge of how SQL backups work, it wasn't very intuitive or easy to operate. Symantec support is always very hit and miss too, whereas Redgate's has always been top notch.
February 5, 2016 at 8:45 am
Maddave (2/5/2016)
I didn't get involved with it much once I found the issues above and we quickly moved to Redgate, but we still use it for some dev databases. I find it very slow and difficult to set up the jobs quickly and logically. I think it is fine for file-level backup, but where you need some knowledge of how SQL backups work, it wasn't very intuitive or easy to operate. Symantec support is always very hit and miss too, whereas Redgate's has always been top notch.
I wish I could remember the exact specifics of what Backup Exec did, but it would create a backup and the entry in the MSDB tables wouldn't tell me exactly where it was backed up to. It would give me something useless, but it wasn't a path or filename. There were also cases where it would show a database backup as having fired, but the network team didn't show it as being done in Backup Exec.
I would encourage you to take a long, hard look at Backup Exec and put it through a lot of testing before implementing and trusting it as your database backup solution.
February 5, 2016 at 8:56 am
Ed Wagner (2/5/2016)
I wish I could remember the exact specifics of what Backup Exec did, but it would create a backup and the entry in the MSDB tables wouldn't tell me exactly where it was backed up to. It would give me something useless, but it wasn't a path or filename. There were also cases where it would show a database backup as having fired, but the network team didn't show it as being done in Backup Exec.
I would encourage you to take a long, hard look at Backup Exec and put it through a lot of testing before implementing and trusting it as your database backup solution.
Thank you both for your input.
February 5, 2016 at 9:24 am
Summing up what you've heard from a lot of different people (with more experience than I) about a lot of different products, I would offer this observation:
If you use SQL Server to back up your databases and logs, you will be largely sure that your backups were executed as intended (subject, of course, to the usual caveats of CHECKSUM, testing restores, etc. etc.).
If you use a third party tool, you will have to guess, test, and verify that what you thought you told the tool to do actually happened.
Now, in broad terms, test-and-verify is no different from what you should do using SQL Server directly for backups, but you'll be adding another layer of software between your data and disaster recovery.
There's a qualitative difference between database backup software and most other applications: it is part of a chain that stands between staying in business and going out of business in a DR scenario.
Years ago, I had problems with Backup Exec and SQL Server, where backups of the database would fail for no obvious reason. I finally ditched it and returned to native backups.
If part of the job description of being a DBA is to suspect and question everything when it comes to data integrity, I would maintain that this part of your job -- safeguarding backups and testing restores -- is an area where you should be the most suspicious and questioning.
Perhaps you could offer to stand up a test bed, run db and t-log backups and whatever the sysadmins want to try, then pull the plug on the server and see who is back up and running without data loss up to the point of failure. 😉
Even if their method "works", there are other things to consider (such as Hugo's point about SAN snapshots).
Rich
February 5, 2016 at 10:07 am
A SQL Server database is not just a file; it has a very complex internal structure. I think most non-DBAs don't appreciate this. They think a file is a file and that a backup takes care of things; DBAs with any time under their belt know better.
Backup agents are frequently an add-on product for SQL Server, so you can often save money by ditching them and doing native SQL Server backups. To placate your backup admins, have the final step of the backup job copy the backups to a share that is captured by their backup process.
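The "copy to a share they capture" arrangement can be as simple as a two-step Agent job; the database name, paths, and share below are illustrative:

```sql
-- Agent job, step 1 (T-SQL): native backup to a local backup volume.
BACKUP DATABASE MyDb
TO DISK = N'X:\Backups\MyDb_full.bak'
WITH CHECKSUM, INIT;

-- Agent job, step 2 would be a CmdExec step handing the finished file
-- to the share the file-level backup product sweeps, e.g.:
--   robocopy X:\Backups \\backupserver\sqlbackups MyDb_full.bak
```

The DBA keeps a transactionally consistent, restorable backup with proper msdb history, and the backup team still gets a file to sweep offsite.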
I lost this argument with the network admin and IT manager at a previous gig, and it sent a chill down my spine looking at the messages in my log. They subsequently decided not to fill my DBA position, instead relying on a vendor to monitor the system. When the SAN dropped a volume and it was found that the database was corrupt and backups weren't restorable, they lost 3 months of billable data. I laughed out loud when I heard this.
-----
[font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]