Why Don't You Upgrade in Place?

  • Comments posted to this topic are about the item Why Don't You Upgrade in Place?

  • Thanks Andy! Good timing on your part to write this now since a new year (with new budgets) starts soon. I imagine upgrading to SQL Server 2022 (or 2019) will be on lots of folks' plates in January (I was having this upgrade-in-place vs. restore to new server discussion just a few days ago).

  • If you're doing an in-place upgrade you're still using old hardware - another reason not to choose this option

    Wilfred
    The best things in life are the simple things

    • I'm still stuck with a couple of SQL 2005 instances because some software prevents upgrading the database end of the system: it uses SQL Server system stuff that is no longer available.

      (in spite of all the warnings not to use undocumented stuff, right)

    • In many cases, software upgrades come with hardware upgrades, and vice versa.
    • Only in the last few years have operating systems evolved to facilitate upgrading HA systems without an in-place upgrade - mainly clustered installations, availability groups, and mirrored databases.
    • And most important: YOU NEED APPLICATION USERS TO TEST THIS STUFF (on an acceptance system) BEFORE you actually do this in a production environment. Most devs have no clue which parts of the applications they build are actually being used, or how those parts are being used.
    • All the buzzwords and theories available do not guarantee an actual smooth upgrade!

    Johan

    Learn to play, play to learn !

    Don't drive faster than your guardian angel can fly ...
    but keeping both feet on the ground won't get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

    Need a bit of Powershell? How about this

    Who am I ? Sometimes this is me but most of the time this is me

  • I prefer migrations but have used in-place upgrades to buy time until the resources are available to look at a more managed upgrade. The trick is to have lots of backups/snapshots and to tell the users that the in-place upgrade only has an 80% chance of working. If it fails, just revert the VMs and no one is any worse off than they were before.
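
    For the backup part of that trick, a minimal T-SQL sketch (AppDb and the path are placeholders): COPY_ONLY keeps the regular backup chain undisturbed, and CHECKSUM verifies pages as the backup is written.

        -- Taken immediately before the in-place upgrade, for each database
        BACKUP DATABASE AppDb
            TO DISK = 'D:\PreUpgrade\AppDb_preupgrade.bak'
            WITH COPY_ONLY, CHECKSUM, COMPRESSION, INIT;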

    Wilfred van Dijk wrote:

    If you're doing an in-place upgrade you're still using old hardware - another reason not to choose this option

    Most people now use VMs, so the hardware is likely to be changed periodically.

    Johan Bijnens wrote:

    I'm still stuck with a couple of SQL 2005 instances because some software prevents upgrading the database end of the system: it uses SQL Server system stuff that is no longer available. (in spite of all the warnings not to use undocumented stuff, right)

    You might want to look at something like:

    https://www.galliumdata.com/

  • Not my call

    Johan

    Learn to play, play to learn !

    Don't drive faster than your guardian angel can fly ...
    but keeping both feet on the ground won't get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

    Need a bit of Powershell? How about this

    Who am I ? Sometimes this is me but most of the time this is me

  • following

  • We typically don't upgrade SQL Server in place because we take the opportunity to also upgrade to a new Windows server version as well.

  • I often also explain that a migration-based upgrade leaves you with a more reliable rollback.  Snapshot failures/corruption are rare, but I have seen them happen.  With a migration-based upgrade, if something is off during the pre-release phase, it's easy to just roll back to the server and hardware that has been sitting there, unchanged, since you started the maintenance window.

    In the era of virtualized hardware, it's typically pretty easy to spin up a new environment (unless it's very large) and then decommission the old environment once the new one is proven, especially if you can scale down the old server's resources and scale up the new one's.

    There's a reason we DBAs cringe at in-place upgrades, and it's because we don't like risk.  We don't like unpredictability.  And the more of those file system adjustments, registry changes, etc. pile up over time, the more unpredictable and risky things become.  It's also why we don't want other software installed on the DB servers.  Just a nice clean Windows install, some AV if necessary, and the SQL binaries.  Anything else adds risk and complications.

  • Agreed on all counts.

  • Who says we don't upgrade in place?

    My product line was recently required to perform an upgrade (Standard Edition, SQL 2016 to SQL 2019) of every SQL instance, production and non-production, hosted on Azure VMs, so that we could use TDE to meet a regulatory requirement.  We performed in-place SQL upgrades on 10 two-node clusters plus 5 ancillary instances without a single hiccup.  Granted, we thoroughly tested not just the impact of the transition from SQL 2016 to SQL 2019 but also the in-place upgrade process itself, so we were confident that we had accounted for any gotchas likely to occur, and happily, none did.

    I'll admit, I was apprehensive when my boss said that we were going to take the upgrade-in-place route, primarily because I'd absorbed the conventional wisdom that upgrade-in-place is just too risky for the reward.  However, in our case, the reward was huge: this product hosts an individual database per customer, so the upgrade-to-new-instances approach would've involved standing up 25 extra Azure VMs, equipping each with the server-specific infrastructure it requires (e.g., Azure blob storage for SQL backups), and migrating 11,000+ databases onto the new instances, which would then require rebuilding all the individualized infrastructure associated with each database.  All told, months of extra work, which we avoided by upgrading in place.

    Is upgrade-in-place the right approach for everyone? Certainly not!  Is it the right approach under some circumstances? Clearly it is, and so it shouldn't be dismissed out of hand, as I very nearly did.  Kudos to my boss, Chris, for the foresight and courage to take our team down the path less traveled!
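
    For anyone heading down the same TDE road, the database-side setup is only a few statements. A minimal sketch, assuming a certificate-protected encryption key; every name, password, and path below is a placeholder:

        USE master;
        -- Master key in master, protected by a password
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password 1>';

        -- Certificate that will protect each database encryption key
        CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

        -- Back up the certificate and private key right away; without this
        -- backup, the encrypted databases cannot be restored anywhere else
        BACKUP CERTIFICATE TDECert
            TO FILE = 'D:\Keys\TDECert.cer'
            WITH PRIVATE KEY (FILE = 'D:\Keys\TDECert.pvk',
                              ENCRYPTION BY PASSWORD = '<strong password 2>');

        -- Per database: create the encryption key and turn TDE on
        USE CustomerDb;
        CREATE DATABASE ENCRYPTION KEY
            WITH ALGORITHM = AES_256
            ENCRYPTION BY SERVER CERTIFICATE TDECert;

        ALTER DATABASE CustomerDb SET ENCRYPTION ON;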

  • The timing on this article is spot-on, as I just (mostly) wrapped up a migration of my production instances to SQL 2019.  I fall into the "never do an in-place, even if it IS less work" camp, only because I far prefer having the easy button of getting the various apps pointed back to the old server if things go sideways.

    It's also a LOT easier to test and ensure nothing being done in the code is going to break horribly on a new version of SQL.

    One other benefit of a migration vs. an in-place upgrade is that you can spread out the migration and, with some reasonable planning of backups, minimize the downtime for the end users.
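
    That backup planning usually amounts to staging the big restore ahead of time so only the tail of the log moves during the window. A rough T-SQL sketch, assuming a hypothetical AppDb with made-up paths and logical file names:

        -- Days ahead, on the new server: restore the latest full backup
        -- WITH NORECOVERY so it will still accept log backups
        RESTORE DATABASE AppDb
            FROM DISK = 'D:\Migration\AppDb_full.bak'
            WITH NORECOVERY,
                 MOVE 'AppDb' TO 'E:\Data\AppDb.mdf',
                 MOVE 'AppDb_log' TO 'F:\Log\AppDb_log.ldf';

        -- Cutover window, on the old server: tail-log backup; WITH NORECOVERY
        -- leaves the old database in RESTORING so nothing keeps writing to it
        BACKUP LOG AppDb
            TO DISK = 'D:\Migration\AppDb_tail.trn'
            WITH NORECOVERY;

        -- On the new server: apply any intermediate log backups, then the
        -- tail, and bring the database online
        RESTORE LOG AppDb
            FROM DISK = 'D:\Migration\AppDb_tail.trn'
            WITH RECOVERY;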

  • Wanted to throw my hat into the ring on this - I have done both in-place upgrades and migration upgrades. My biggest factor in deciding the method is "how many systems will be impacted by the change?" If it is 1-2, then it is a migration upgrade for sure, because changing and testing the apps is easy. If it is hundreds, some in use with no assigned support team (a scary situation, but we have teams that CAN pick up the app and fix bugs; the app is fairly mature, internal only, and hasn't had any bug fixes since 2019), then in-place is the safer option. Trying to re-configure hundreds of apps, some with hard-coded database connection strings, is a VERY time-consuming process, and it's hard to get managers to sign off on when upgrade time rolls around. Spot checks on apps are a much simpler, much lower-risk process, and if things go crazy with the new version, we have options we can try, like the legacy cardinality estimator or dropping the compatibility level.
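
    For reference, both of those escape hatches are one-liners; a sketch with placeholder names (130 is the SQL Server 2016 compatibility level):

        -- Option 1: keep the upgraded database on the old version's behavior
        ALTER DATABASE AppDb SET COMPATIBILITY_LEVEL = 130;

        -- Option 2: stay on the new compatibility level, but use the legacy
        -- cardinality estimator for this database only
        USE AppDb;
        ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;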

    Now, a fun scenario I ran into recently when I went to upgrade one of my systems with an in-place upgrade: I had done our test and dev environments with no issues at all. It was SOOO smooth that I gave an optimistic downtime window when requesting it... then production went sideways. The upgrade was "successful" according to the installer, but the instance failed to start up. After reviewing the logs and doing some troubleshooting, I determined that master had somehow gotten corrupted in the upgrade. I fixed that, recreated the logins, mapped users to logins, and we were back in business. THANKFULLY the business has downtime on weekends, and I ALWAYS schedule things like this on weekends in case of odd issues like this.

    So, in the above scenario, a migration install would have been safer, but as a skilled DBA I was able to correct the issue before the next business day, and nobody knew I had issues except the DBA team and my supervisor.
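
    The "map users to logins" step is the classic orphaned-user fix. A quick sketch (AppUser is a placeholder; run it in each affected database):

        -- List database users whose SID no longer matches any server login
        SELECT dp.name AS orphaned_user
        FROM sys.database_principals AS dp
        LEFT JOIN sys.server_principals AS sp
               ON dp.sid = sp.sid
        WHERE dp.type = 'S'            -- SQL users
          AND dp.principal_id > 4      -- skip dbo, guest, and system users
          AND sp.sid IS NULL;

        -- Re-point an orphaned user at the recreated login of the same name
        ALTER USER AppUser WITH LOGIN = AppUser;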

    My favorite part about migration installs is that you have a built-in rollback plan - if the upgrade fails for any reason, just don't update the connection strings and everything will keep running as is. The downside is that when you have a TON of things that need new connection strings, a migration install can be painful. And if you miss just 1 application when you do that, it could mean the business is stopped. Missing an app isn't the fault of the DBA, but the blame always falls on the DBA as the "Default Blame Acceptor". Now, you CAN use SQL aliases to handle the connection strings and have IT push those out through a GPO - upgrade Saturday, force-push the GPO to all machines on Sunday, and Monday everything SHOULD work... but all it takes is one developer not using the SQL alias OR one end user having a network hiccup when the GPO is pushed out, and things break. In-place upgrades, when they go smoothly, are USUALLY transparent to the end users.
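
    One way to catch a missed application before decommissioning the old instance: leave it running for a while and watch who still connects. A quick check to run on the old instance:

        -- Anything still showing up here after cutover is an app or user
        -- whose connection string was never updated
        SELECT host_name, program_name, login_name, COUNT(*) AS session_count
        FROM sys.dm_exec_sessions
        WHERE is_user_process = 1
        GROUP BY host_name, program_name, login_name
        ORDER BY session_count DESC;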

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Hey Andy!!! I dusted off the old account to post here 🙂

    I love the pragmatic thoughts you bring. GREAT thoughts and great commentary as always! Indeed, cost is a factor to consider, and the process is largely smooth and pain-free. To me, the biggest benefits of NOT doing an in-place upgrade are:

    1. Risk mitigation - for many of the risks you highlighted.
    2. Ease of rollback - you de-stress the migration night/weekend: you start fresh, test, change some connection strings or CNAMEs, and then ship it - or change back and revert to the old instance if you haven't started taking data yet.
    3. At my SQL Server consultancy, our team has billed at least a couple of our clients a LOT (and for one, I mean A LOT) of hours for in-place upgrades gone bad. They just work 98% of the time - but when they fail, they reallllly fail - and depending on the particulars, rollback can be more expensive than it needed to be, not to mention the downtime.

    I love planning and building new when able. I love starting fresh with best practices. And I love all of the work and time and sweat being spent during daylight hours. A properly planned and executed migration-style upgrade should not add a dramatic amount of time if done right. We do a LOT of upgrades, so our process and scripts are pretty solid. It's a different equation, perhaps, for a DBA who spends their entire career at one company and may get to do 2-3 upgrades total. But there are a lot of great resources and videos out there about planning and executing: right here on SSC, all over the web - and I have a series of videos to help folks plan.

    I'm still in the knee-jerk default camp of "please, no in-place upgrades."

    https://straightpathsql.com/archives/2020/04/sql-server-in-place-upgrades-too-expensive/

    https://dba.stackexchange.com/questions/29328/are-sql-server-in-place-upgrades-as-ill-advised-as-they-used-to-be

    And I think my post on "gambling" hints at my rationale. I choose to gamble on safety when I can - and if it's only a percentage more hours to do that (and often not many) - I think it's a bet I'd always advise a client to take.

    https://straightpathsql.com/archives/2021/03/a-good-sql-server-dba-makes-lousy-bets/

    __________________________________________________

    Mike Walsh
    SQL Server DBA
    Blog - www.straightpathsql.com/blog | Twitter

  • I've been a DBA for over 20 years.  Because of experience, our team does not support SQL Mail, SQL Server maintenance plans, linked servers, or upgrades in place.  We do have exceptions.

    I think migrations should be easy.  A migration can be difficult to manage if management failed to provide support for an application, or failed to allocate resources for the migration in advance.  Being a county with too little staff, we support a lot of stuff.  When somebody retires, some odd little app - or even a big one like BizTalk - can get lost.  "Anybody remember what this is and who supports it?"  Then, "No, don't ask upper management about it yet unless you want to be responsible for it."  (BTW, because of inexperience, I got BizTalk... for a while.  I call this management style "Pin the Tail on the Donkey".  In a reorg, it's more likely "Musical Chairs".  Humor is a job requirement.)

    Every couple of years we have an old system that needs an upgrade without developer resources.  Despite years of requests for a migration upgrade (e.g., we have one production install exe created in 1999 requiring SQL 7.5 compatibility), nothing happens - staff reductions and reorgs do not help.  (I used to include a link to the song "In the Year 2525" with the 1999 requests.  The application is being upgraded now.)  Perhaps if the system were just old instead of ancient, an upgrade in place might work.  The version dependencies can make an upgrade a nightmare, requiring several incremental upgrades to get to the versions required.

    Anyway, every few years we do try an upgrade in place before doing a proper migration.  (We do have an expert server team.  A brave soul now and then is willing to try.)  A few days or a week later the result is the same: failure.  (True, I might only hear of the failures, because then I have to do a migration.  I did ask to be informed of any successes.)  Unless an upgrade in place is something tested and done often, plan to fail if you wait too long.  In our case, waiting too long is expected; it is practically a prerequisite for the upgrade-in-place request.  We shot ourselves in the foot, so let's shoot ourselves in the foot again.

    RandyHelpdesk: Perhaps I'm not the only one that does not know what you are doing. 😉

