SQL High Availability

  • All,

    Just looking for people's advice based on experience with SQL high availability.

    Basically, in the past we have used SQL clustering for high availability without any issues, and I have used it at many companies. However, we are currently moving environments and I will not have the option to use clustering.

    So, the basic setup: two SQL servers (active/passive roles) running SQL 2008 Standard, with the database files stored on a high-speed SAN.

    So I need some options. I know I can use log shipping; however, failover time is >1 min. Also, for patching, how do I restart a server and ensure the other server, which is being log shipped to, has the latest data?

    Mirroring is an option; however, according to MS, mirroring should not be used on more than 10 databases. We have lots more!

    The last option, which seems like bad practice but could work, is simply to kill all users, detach the DBs, and then reattach them on the other server. Any issues with this?
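    For what it's worth, that manual detach/attach failover is only a few statements, assuming both servers can see the same SAN volumes. The database name and file paths below are placeholders; this is a sketch of the mechanics, not a recommendation:

    ```sql
    -- On the active server: drop all connections, then detach
    USE master;
    ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    EXEC sp_detach_db @dbname = N'YourDb';

    -- On the passive server: attach the same files from the SAN
    CREATE DATABASE YourDb
    ON (FILENAME = N'X:\Data\YourDb.mdf'),
       (FILENAME = N'Y:\Logs\YourDb_log.ldf')
    FOR ATTACH;
    ```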

    Ignore any issue with client connectivity based on server names; this can be resolved in different ways.

    Any real advice appreciated. I have read most of the SQL documentation, so no need to just point me there 🙂

    Thanks

    Shane

  • Hi Shane,

    Yes, you have evaluated almost all the possibilities.

    A cluster really is the ideal solution for your scenario.

    And yes, you can detach and reattach the databases, but it is not as simple as it looks.

    Is it possible to detach the system databases? What happens if there is an update to a system database?

    You can certainly bring the user databases up with detach/attach, but think about the system databases. That is the only doubt I have in mind.

    Mirroring will cover at most 10 databases 🙂

    HTH

    ---------------------------------------------------
    "There are only 10 types of people in the world:
    Those who understand binary, and those who don't."

  • - You did a good assessment of your options.

    - As you know, clustering is mainly to overcome hardware failures.

    The big advantage is that it is addressable using the same connection info !

    Any other solution would mean at least modifying the connection string !

    (mirroring .. add the mirrored server)

    (other solutions, modify the connection string at DRP time !)

    - Also consider the overhead that (live) replication solutions will have on your system ! (space, I/O, CPU)

    - How about using Hyper-V ? (so you can re-launch the VM on another Hyper-V host)

    I've read a good article on it in SQL Server Magazine (July 2009) with hardware performance guidelines.

    edited/added this:

    Keep in mind that with SQL 2008 your replication options are enhanced (compared to 2000) !

    http://msdn.microsoft.com/en-us/library/cc645993.aspx states that merge replication and transactional replication are also available with Standard Edition ! (so not only log shipping)

    Johan

    Learn to play, play to learn !

    Dont drive faster than your guardian angel can fly ...
    but keeping both feet on the ground wont get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

    Need a bit of Powershell? How about this

    Who am I ? Sometimes this is me but most of the time this is me

  • Although you may have seen a recommendation of <10 databases for mirroring, it isn't really accurate. More important is the availability of resources to handle the mirroring, especially the network bandwidth/latency and the mirror server specs - the log entries need to be pushed across the network and processed in a timely manner or every transaction on the principal will be affected. SQL2008 presumably can handle more than SQL2005 due to log compression.

  • As your database files are on a SAN the option of SAN based replication may be available to you.

    ---------------------------------------------------------------------

  • The last option which seems bad practice but could work is simply kill all users, detach db's and then reattach to other server. Any issues with this ?

    It is going to be difficult to detach the databases and copy the files if the server is blown up, drowned, short-circuited or otherwise down.

    🙂

    ---------------------------------------------------------------------

  • If

    'going to be difficult to detach the databases and copy the files if the server is blown up, drowned, short-circuited or otherwise down'

    that happens, I think I have more issues than a few seconds of downtime. Also, if the server is down, then I won't need to detach the databases as they effectively already will be!! The databases are on a SAN, remember.

    Also, from my experience, the issues above are very, very rare!!!! I'm not saying they don't happen, but with good servers, hardware monitoring and good datacentres, any major issues are detected well in advance. Redundant hardware also helps.

    So unfortunately no real new options!

    Shane

    Database files that are not cleanly detached might not attach properly, as the files might not be consistent.

    I certainly would not feel comfortable relying on this as a DR method.

    ---------------------------------------------------------------------

    Indeed, but the uncleanly "detached" files will only be part of the problem.

    Everything performed at instance level (logins, jobs, endpoints, instance-level triggers, ...) will have to be re-done on the "passive" instance !
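    Orphaned database users are one concrete symptom of this: after an attach, the users inside the database may no longer map to any login on the new instance. A quick sketch for spotting and fixing them (database and user names below are placeholders; Microsoft's sp_help_revlogin script is the usual way to copy the logins themselves, with their original SIDs):

    ```sql
    USE YourDb;  -- placeholder: the freshly attached database

    -- List database users whose SIDs match no login on this instance
    EXEC sp_change_users_login @Action = 'Report';

    -- Re-map one orphaned user to a same-named SQL login
    EXEC sp_change_users_login @Action = 'Auto_Fix', @UserNamePattern = N'app_user';
    ```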

    Johan


  • Yeah,

    OK, so back to the log shipping question, with the following two scenarios.

    Unplanned failure - What happens when you bring the failed server back up? What happens to any new transactions which were made on the server receiving the log shipments?

    Planned failure - How do we shut down the main machine which is shipping the logs, ensuring that the other machine has all the transactions? E.g. I assume we shut down EXACTLY after a log ship has been done.

    Or am I thinking about this wrong ?

    Shane

  • Shane Osborne (7/19/2009)


    Yeah,

    OK, so back to the log shipping question, with the following two scenarios.

    Unplanned failure - What happens when you bring the failed server back up ? What happens to any new transactions which were on the 'server receiving log shipments' ?

    If the latest transaction log backup has not been copied over, you will lose those transactions. You will also lose the transactions that occurred after the last tran log backup. So you need to decide how much data you can afford to lose and set your schedules accordingly; of course, a minute is the smallest time gap you can achieve. If you can still get to the transaction logs on the failed server, you would perform a backup of the tail of the log, copy it over and restore it; then no data is lost.

    When you bring the failed server back online, you would have to either start log shipping in the other direction or restore the primary from a full backup of the secondary and start log shipping again. This is where log shipping becomes manual and time consuming: you have to reinitialise it from a full backup, which can take a while.

    Planned failure - How do we shut down the main machine which is shipping the logs, ensuring that the other machine has all the transactions? E.g. I assume we shut down EXACTLY after a log ship has been done.

    You have to stop all connections to the primary first. Then take a final tran log backup, copy it over and restore it. Then you have all the transactions.

    Once the secondary has been brought online, you will have to reinitialise log shipping.
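    That controlled-failover sequence might look like this in T-SQL. The database name and backup path are placeholders, and this is a sketch rather than a tested runbook; in practice you would disable the log shipping jobs first and make sure the secondary has restored all earlier log backups WITH NORECOVERY or STANDBY:

    ```sql
    -- On the primary: block new connections, then back up the tail of the log.
    -- WITH NORECOVERY leaves the old primary in a restoring state, ready to
    -- become the new secondary later.
    USE master;
    ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    BACKUP LOG YourDb TO DISK = N'\\backupshare\YourDb_tail.trn' WITH NORECOVERY;

    -- On the secondary: apply the tail backup and bring the database online
    RESTORE LOG YourDb FROM DISK = N'\\backupshare\YourDb_tail.trn' WITH RECOVERY;
    ```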

    You could possibly consider mirroring the databases that cannot tolerate data loss and log shipping the others, and perhaps nightly full backup copies will be OK for some?

    ---------------------------------------------------------------------

    Sounds like I will go back to mirroring then, or may just set up a cluster as I always have done, as this seems the easiest option from a server failure / update / restart point of view (not a DB corruption method). I just wanted to save some money on Windows licensing.

    Even with mirroring I still have the same two scenarios. However, mirroring has a lot more options, and I can go down the 'high availability' and witness server route.
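    For reference, establishing a mirroring session with a witness is only a handful of statements per database. A minimal sketch, where the database name, server addresses and port are placeholders; it assumes YourDb has already been restored on the mirror WITH NORECOVERY and that database mirroring endpoints exist on each instance:

    ```sql
    -- 1. On the mirror server: point at the principal's endpoint
    ALTER DATABASE YourDb SET PARTNER = 'TCP://principal.example.com:5022';

    -- 2. On the principal server: point at the mirror's endpoint
    ALTER DATABASE YourDb SET PARTNER = 'TCP://mirror.example.com:5022';

    -- 3. Optional: high-safety mode with automatic failover needs a witness
    ALTER DATABASE YourDb SET WITNESS = 'TCP://witness.example.com:5022';
    ```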

  • I dunno... SAN snapshots work really well.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing set-based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Hello Shane,

    Refer to the following links for performing the failover.

    ms-help://MS.SQLCC.v10/MS.SQLSVR.v10.en/s10de_4deptrbl/html/2d7cc40a-47e8-4419-9b2b-7c69f700e806.htm

    ms-help://MS.SQLCC.v10/MS.SQLSVR.v10.en/s10de_4deptrbl/html/edfe5d59-4287-49c1-96c9-dd56212027bc.htm

    HTH


    Controlled failover in log shipping seems like a lot of hassle; I definitely prefer clustering!!
