Hyperbac

  • ok, final results.

    I decided the only real test was to back up ALL my databases, see whether the run fit in the available window, and compare against the native backup runs. That is 305 databases and 1 terabyte of data (database sizes from a few MB to 185GB).

    The upshot was that the overall Hyperbac run time was pretty much identical to the native backup. You could even say it was better, since the Hyperbac run backed databases up sequentially while the native run uses two parallel streams. Compression achieved was 84%.

    Some individual databases backed up a lot slower and others quicker, and it all evened out over the whole run. There can be significant differences in backup times for the same database, which does concern me, but the ease of use and compression achieved make the tool worth having. It's easy enough to do an ad-hoc native backup if it comes to it.

    I have not hit any performance problems on restores (which is probably more important), and the only time a backup hung indefinitely was when trying to overwrite an uncompressed native .bak file with a compressed Hyperbac backup. It turns out you cannot do that (I mistakenly thought the INIT clause would get round that for me).
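
    For anyone hitting the same hang: the safe pattern is to write the compressed backup out to a fresh .hbc file rather than trying to INIT over an existing native .bak. A minimal sketch only; the database name and path here are placeholders:

    ```sql
    -- Writing to a new .hbc file sidesteps the overwrite problem described
    -- above: INIT will not replace an existing uncompressed native .bak
    -- with a compressed Hyperbac backup.
    BACKUP DATABASE MyDb
    TO DISK = N'G:\Backups\MyDb_Full.hbc'
    WITH INIT, STATS = 10
    ```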

    Versions tested were 4.1.0.0 and 4.1.5.0, behaviour was the same.

    ---------------------------------------------------------------------

  • Seems to me that if you did native backups in parallel (2 streams) and hyperbac sequentially and they took the same time to complete then hyperbac is twice as fast. Am I missing something with that math? :hehe:

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

    It's probably not that simple; I would have to run Hyperbac in two streams to be sure. The comparison adds together the overall run times of the native backup streams and compares that to the overall run time of the Hyperbac run.

    The Hyperbac run went to a single new drive added for the purpose; in the native run, each stream goes to a different drive.

    It's not apples to apples, but I was able to compare elapsed time at the individual database level and performance was good enough. The important thing is that even done the slowest way (sequentially, writing to the same drive) it went through in time, and I know I can improve on that by backing up more than one database at a time and by striping if I have to.

    ---------------------------------------------------------------------

    Not sure if I'm a little late, and I must say I only skimmed the post, so if what I say has already been noted I apologize for being redundant...

    Hyperbac doesn't automatically multi-thread backups; you have to multi-thread manually by striping the backup. Other backup tools multi-thread behind the scenes. We migrated from LiteSpeed to Hyperbac a little while back and couldn't be happier. During our evaluation we initially saw an increase in run times along with a sizable decrease in file size. When we asked Hyperbac, they made the same recommendations: stripe the backups and explicitly configure MAXTRANSFERSIZE. Once we striped the backups our run times came in well below LiteSpeed's, and we still benefited from improved compression. We also love the ability to use native code.
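
    Dan's two recommendations can be combined in a single statement. This is only a sketch with placeholder paths; the MAXTRANSFERSIZE value is illustrative and should be tuned by testing in your own environment:

    ```sql
    -- Three DISK clauses = three stripes, which gives the multi-threading
    -- Hyperbac won't do on its own; MAXTRANSFERSIZE sets the I/O unit
    -- explicitly (here 1 MB; valid values are multiples of 64 KB up to 4 MB).
    BACKUP DATABASE Test
    TO DISK = N'G:\Backup\Test_1.hbc',
       DISK = N'G:\Backup\Test_2.hbc',
       DISK = N'G:\Backup\Test_3.hbc'
    WITH MAXTRANSFERSIZE = 1048576,
         INIT, STATS = 10
    ```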

    take care...dLP

  • Dan Provan (1/5/2010)


    Not sure if I'm a little late, and I must say I only skimmed the post, so if what I say has already been noted I apologize for being redundant...

    Hyperbac doesn't automatically multi-thread backups; you have to multi-thread manually by striping the backup. Other backup tools multi-thread behind the scenes. We migrated from LiteSpeed to Hyperbac a little while back and couldn't be happier. During our evaluation we initially saw an increase in run times along with a sizable decrease in file size. When we asked Hyperbac, they made the same recommendations: stripe the backups and explicitly configure MAXTRANSFERSIZE. Once we striped the backups our run times came in well below LiteSpeed's, and we still benefited from improved compression. We also love the ability to use native code.

    take care...dLP

    Well, you are just 4 months and change 'late' so not too bad. 😀

    You mentioned some good things about Hyperbac. Add to that the fact that it isn't XP (extended stored procedure) based and thus doesn't cause problems with MemToLeave.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

    Just to finish this off: I did purchase licenses for three servers in the end, and all is working well.

    I don't see the consistent improvement in elapsed times that others seem to, but it turns out that is only noticeable on the server I chose for the large-scale tests! Of the other two, one is better overall by say 10% and the other works out the same, as it evens out across all the databases. I'm not striping yet, as there is no pressing need.

    It just shows that if you are looking specifically to improve run times, test before you buy; but if you are looking for good compression, ease of use and minimal changes to your current backup strategy, I can recommend Hyperbac.

    One place it is great is restores across the network; that's six times faster (in line with the compression ratio). That's good in itself, but it also greatly reduces the chance of failures due to network glitches.
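
    A network restore with Hyperbac looks just like a native one; the saving comes from the .hbc file being roughly one sixth the size, so far fewer bytes cross the wire. A sketch only; the server, share, and file paths are placeholders:

    ```sql
    -- Restoring directly from a compressed .hbc on a network share;
    -- MOVE relocates the data and log files to local drives.
    RESTORE DATABASE Test
    FROM DISK = N'\\BackupServer\share\Test_Full.hbc'
    WITH MOVE N'Test' TO N'G:\Data\Test.mdf',
         MOVE N'Test_log' TO N'H:\Log\Test_log.ldf',
         STATS = 10
    ```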

    ---------------------------------------------------------------------

    Stripe your backups and you'll benefit from decreased run times... I mentioned earlier that other backup tools do this behind the scenes, but with Hyperbac you need to explicitly multi-thread your backups by defining multiple files in the native backup command.

    Here's an example of some code I use when I need to make a copy of a database:

    BACKUP DATABASE Test
    TO DISK = N'\\ServerHere\g$\MSSQL.1\MSSQL\Backup\Copy\ServerHere.Test_Full_20100106_1.hbc',
       DISK = N'\\ServerHere\g$\MSSQL.1\MSSQL\Backup\Copy\ServerHere.Test_Full_20100106_2.hbc',
       DISK = N'\\ServerHere\g$\MSSQL.1\MSSQL\Backup\Copy\ServerHere.Test_Full_20100106_3.hbc'
    WITH NAME = N'ServerHere.Test_Full_20100106',
         COPY_ONLY,
         INIT,
         STATS = 2

    Note that I'm using the COPY_ONLY option so that the one-off backups I take with this code don't interfere with our backup chain. I also include the STATS option so that I can gauge when the backup will be done... neither of these is needed if you're going to stripe backups that are part of your regular backup strategy.
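
    One thing worth remembering with striped backups: every stripe must be supplied at restore time, listed together in the FROM clause. A sketch reusing the file names from the example above:

    ```sql
    -- All three stripes are required; omit one and the restore fails
    -- with an incomplete-media-family error.
    RESTORE DATABASE Test
    FROM DISK = N'\\ServerHere\g$\MSSQL.1\MSSQL\Backup\Copy\ServerHere.Test_Full_20100106_1.hbc',
         DISK = N'\\ServerHere\g$\MSSQL.1\MSSQL\Backup\Copy\ServerHere.Test_Full_20100106_2.hbc',
         DISK = N'\\ServerHere\g$\MSSQL.1\MSSQL\Backup\Copy\ServerHere.Test_Full_20100106_3.hbc'
    WITH STATS = 2
    ```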

  • TheSQLGuru (1/5/2010)


    Dan Provan (1/5/2010)


    Not sure if I'm a little late, and I must say I only skimmed the post, so if what I say has already been noted I apologize for being redundant...

    Hyperbac doesn't automatically multi-thread backups; you have to multi-thread manually by striping the backup. Other backup tools multi-thread behind the scenes. We migrated from LiteSpeed to Hyperbac a little while back and couldn't be happier. During our evaluation we initially saw an increase in run times along with a sizable decrease in file size. When we asked Hyperbac, they made the same recommendations: stripe the backups and explicitly configure MAXTRANSFERSIZE. Once we striped the backups our run times came in well below LiteSpeed's, and we still benefited from improved compression. We also love the ability to use native code.

    take care...dLP

    Well, you are just 4 months and change 'late' so not too bad. 😀

    You mentioned some good things about Hyperbac. Add to that the fact that it isn't XP (extended stored procedure) based and thus doesn't cause problems with MemToLeave.

    I realize the post is a bit stale, but a related question. Quest now offers the LiteSpeed Engine separately from LiteSpeed. It can compress backups much as Hyperbac does (i.e., using your existing SQL scripting, it intercepts defined file extensions and compresses them). I am checking with one of their salespeople to see whether it also uses MemToLeave. I am guessing it does; however, I don't see any XPs installed with it, which makes me wonder.

    Does anyone have any specific knowledge of this?

    Thanks in advance.

    --Jed

  • using existing SQL scripting, it intercepts defined file extensions and compresses them

    If it does that, it's probably using a callback filter. In that case, it's a process that runs outside the SQL Server process space and wouldn't need any of SQL Server's memory, which means no impact on the MemToLeave region.

    SQL BAK Explorer - read SQL Server backup file details without SQL Server.
    Supports backup files created with SQL Server 2005 up to SQL Server 2017.

  • Ray Mond (9/17/2010)


    using existing SQL scripting, it intercepts defined file extensions and compresses them

    If it does that, it's probably using a callback filter. In that case, it's a process that runs outside the SQL Server process space and wouldn't need any of SQL Server's memory, which means no impact on the MemToLeave region.

    Thank you sir.

    I did get a response from Quest Product Management via a salesman (emphasis mine): "The LiteSpeed Engine does not integrate with SQL Server, so there is no effect on MemToLeave. Since we run as a system level driver, the CPU shows up under the process performing the activity, namely SQL Server."

    Now another question: how does one measure the memory used by such a filter driver while it is running? There is an article on Stack Overflow that seems like a good start. I also asked Quest where to look, and maybe they can provide some direction.

    Related: based on a quick-and-dirty test (32-bit, Win2003), Hyperbac's service seemed to take up about ~250MB* of RAM while I was running a backup and verify. (*Just using Task Manager, which I realize is not always the most accurate tool; however, it gives a ballpark figure, sufficient for initial testing.)

  • Now another question is, how does one measure memory used by such a filter driver while it is running

    Sorry, I don't know how. It is interesting, though, that the LiteSpeed Engine seems to implement the compression and encryption in the driver itself, rather than calling out to another process as Hyperbac does.

    Related, based on a quick-n-dirty test (32-bit, Win2003), Hyperbac's service seemed to take up about ~250MB* of RAM while I was running a backup & verify.

    Yes, that's about the amount of memory used in some quick tests I performed. It maxed out at around 220 MB regardless of how many backups it was running and how many backup files were being created.

    SQL BAK Explorer - read SQL Server backup file details without SQL Server.
    Supports backup files created with SQL Server 2005 up to SQL Server 2017.

Viewing 11 posts - 46 through 55 (of 55 total)
