Things I've Learned About the Cloud

  • Comments posted to this topic are about the item Things I've Learned About the Cloud

  • I think the cloud will keep you in editorials until you decide to stop.

    What I've learned is that the tech debt a company carries can block a lift and shift.

    Take the example of a huge on-premises DB server with several databases on it. Those databases have convoluted dependencies, not just in terms of the applications they support but also dependencies between the databases themselves. This means that you cannot migrate the databases one by one; you have to test the water with both feet! Never a good thing.
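
    If you want a rough picture of how tangled things are before committing to a migration order, here's a minimal sketch using sys.sql_expression_dependencies. It only catches dependencies the engine can parse, so dynamic SQL and application-side cross-database calls will be missed; treat the result as a lower bound and run it in each database.

    ```sql
    -- Sketch: objects in the current database that reference other databases
    -- by three- or four-part name. Run in each database; dynamic SQL and
    -- application-side cross-database calls are invisible to this view.
    SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
           OBJECT_NAME(d.referencing_id)        AS referencing_object,
           d.referenced_database_name,
           d.referenced_schema_name,
           d.referenced_entity_name
    FROM sys.sql_expression_dependencies AS d
    WHERE d.referenced_database_name IS NOT NULL
      AND d.referenced_database_name <> DB_NAME();
    ```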

    How big is your DB server? Is there an equivalently sized RDS instance in the cloud? If not, then you face the challenge of installing a DB server in the cloud and being responsible for everything but the infrastructure. That is certainly possible, but it does throw away an awful lot of the advantages that moving to the cloud brings.
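
    As a starting point for the sizing question, a minimal sketch against sys.dm_os_sys_info (this assumes SQL Server 2012 or later, where the column is physical_memory_kb):

    ```sql
    -- Sketch: basic host sizing facts to set against the available
    -- cloud instance classes. Assumes SQL Server 2012+.
    SELECT cpu_count,
           physical_memory_kb / 1048576 AS physical_memory_gb,
           sqlserver_start_time
    FROM sys.dm_os_sys_info;
    ```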

    Let's suppose that a direct lift-and-shift is possible. I would advise that this be a short-lived first step, with a quick pause for breath before looking at migrating towards cloud-based services.


  • Exceptionally good article, Steve. Where I work, we never successfully got a database into the cloud that I'm aware of, and not for lack of trying. In 2018 or 2019 I migrated one of our databases into an Azure SQL Database, with the help of resources I found here on SSC. The database I migrated was small, so the migration went quickly. But I didn't think it was possible to keep using the old technology (Microsoft Access 2007) as a front end; I wish I'd tried. I eventually took it back out of Azure SQL, never having tested it with MS Access.

    To me, the most significant thing you said in this piece is this sentence: "Tackling the cloud require a staff that wants to grow and learn with the platform, as well as one that knows when to use the resources they know best and don't add complexity or novelty just because someone wants to try something new."

    Rod

  • Azure SQL is just SQL Server, with a few more things and a few less, but it's really the same. I could move SQL Server Central there easily, as we don't use much more than core RDBMS features. Same drivers, different URL for the DB.

    It does require some growing and learning, and some confidence that most of what you know still applies. Just not all of it.

  • Rod, are you able to give some details as to what went wrong and why the switch to cloud was unsuccessful?  It would be useful to compare notes.

    In AWS RDS there are certain things that you just can't do. Some of the server roles you don't have are:

    • bulkadmin
    • dbcreator
    • diskadmin
    • securityadmin
    • serveradmin
    • sysadmin

    Some of those feel very awkward from the DBA perspective, but when you consider the shared responsibility model it becomes obvious why those roles are denied: anything that could be used to threaten the underlying stability of the service is going to be locked down.

    There are a whole load of other restrictions too, and it's going to be the same with the other cloud vendors.
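
    If you want to see where you stand on a managed instance, a minimal sketch using IS_SRVROLEMEMBER against the roles listed above (1 = member, 0 = not a member, NULL = the role name wasn't recognised):

    ```sql
    -- Sketch: check which fixed server roles the current login actually holds.
    SELECT r.role_name,
           IS_SRVROLEMEMBER(r.role_name) AS is_member
    FROM (VALUES ('bulkadmin'), ('dbcreator'), ('diskadmin'),
                 ('securityadmin'), ('serveradmin'), ('sysadmin')) AS r(role_name);
    ```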

    There are a lot of DBA headaches that simply go away. There is also a subtle shift in the power dynamic of DB development: some of the things a DBA does to save people from themselves are reduced, so people have to accept that they are responsible for their own car crashes.


  • David.Poole wrote:

    I think the cloud will keep you in editorials until you decide to stop.

    What I've learned is that the tech debt a company carries can block a lift and shift.

    Take the example of a huge on-premises DB server with several databases on it. Those databases have convoluted dependencies, not just in terms of the applications they support but also dependencies between the databases themselves. This means that you cannot migrate the databases one by one; you have to test the water with both feet! Never a good thing.

    How big is your DB server? Is there an equivalently sized RDS instance in the cloud? If not, then you face the challenge of installing a DB server in the cloud and being responsible for everything but the infrastructure. That is certainly possible, but it does throw away an awful lot of the advantages that moving to the cloud brings.

    Let's suppose that a direct lift-and-shift is possible. I would advise that this be a short-lived first step, with a quick pause for breath before looking at migrating towards cloud-based services.

    I think you're right about content, as the cloud keeps changing and growing, so there are always things that are exciting or scary, or that do or don't work well.

    I think some of your thoughts are on target and some not. Lift-and-shift can work, as there are options in both Azure and AWS for a full instance that lets you move. Lift-and-shift isn't necessarily the best solution over time, but it is a valid step.

  • I stepped into a position five years ago as a company's first DBA. They mostly had projects that had been developed cloud-first using Azure SQL Database, with a few Azure VMs housing SQL Server instances for legacy systems. I have been tackling a seemingly never-ending trickle of performance issues due to hidden or poorly documented limitations and bottlenecks: to give just a few examples, unexpectedly low limits on sessions in Azure SQL Database, unexpectedly low limits on write throughput in Azure SQL Database, and unexpected limits on the memory available to all Extended Events sessions within an Elastic Pool.
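
    For anyone chasing the same kind of throttling, a minimal sketch against Azure SQL Database's sys.dm_db_resource_stats, which keeps roughly an hour of history in 15-second slices; the percentages are relative to whatever your service tier's limits happen to be:

    ```sql
    -- Sketch: recent resource use as a percentage of the service tier's limits.
    SELECT end_time,
           avg_cpu_percent,
           avg_data_io_percent,
           avg_log_write_percent,  -- write-throughput throttling shows up here
           max_session_percent,    -- how close you are to the session limit
           max_worker_percent
    FROM sys.dm_db_resource_stats
    ORDER BY end_time DESC;
    ```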

    IO limits on different sizes of disks, on VMs of different types and sizes, and the way the two interact are also more complex than they initially appear, but the same is true of SANs and the network interfaces to them on premises.
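
    On the VM-hosted instances, a sketch like this helps show how hard each file is actually being driven; the counters are cumulative since instance start, so sample twice and diff to get rates you can set against the disk and VM throughput caps:

    ```sql
    -- Sketch: cumulative per-file IO and stall times since instance start.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.num_of_bytes_read,
           vfs.num_of_bytes_written,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id;
    ```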

    Once you lift and shift, or even go to production with a cloud-first system, expect a lot of tweaking to configuration and scaling that you didn't anticipate, no matter how much proof-of-concept testing you may have done. And those tweaks are almost always in the direction of increasing cost. "No plan of operations extends with any certainty beyond the first encounter with the main enemy forces." - Helmuth von Moltke, 1871. Replace "enemy forces" with "production workloads in the cloud" and it still holds true.

    That doesn't make going to the cloud a bad decision, but you definitely need to pad your cost estimates at budgeting time. If we had to run all our systems on premises, there would need to be at least 5 of me, and that would also have a significant cost.

  • David.Poole wrote:

    Rod, are you able to give some details as to what went wrong and why the switch to cloud was unsuccessful?  It would be useful to compare notes.

    In AWS RDS there are certain things that you just can't do. Some of the server roles you don't have are:

    • bulkadmin
    • dbcreator
    • diskadmin
    • securityadmin
    • serveradmin
    • sysadmin

    Some of those feel very awkward from the DBA perspective, but when you consider the shared responsibility model it becomes obvious why those roles are denied: anything that could be used to threaten the underlying stability of the service is going to be locked down.

    There are a whole load of other restrictions too, and it's going to be the same with the other cloud vendors.

    There are a lot of DBA headaches that simply go away. There is also a subtle shift in the power dynamic of DB development: some of the things a DBA does to save people from themselves are reduced, so people have to accept that they are responsible for their own car crashes.

    David,

    I'll try to answer the questions you've asked. (Although I took lots of notes during that time, I don't have access to them at the moment.) I work in state government for the health department, so when COVID-19 hit, it was a major disruption for us, as it was for most of the world. The order to vacate offices and work from home (WFH) meant no one could work in the office. (Obviously, exceptions had to be made, but they were very rare.) Add to that the fact that we had to get web applications up and running to schedule people for COVID testing, report test results, and handle all sorts of other things related to COVID. The simple fact is we didn't have anything like what was required, and we couldn't work in the office. (As an aside, I LOVE WFH, as it has saved me at least 5 hours of commuting daily, and that's if I was lucky, which I often wasn't. And I don't need to be in the office.) Going to the cloud was the best choice, but we had no experience with it.

    So, that resulted in lots of meetings with Microsoft (we decided to use Azure) in an effort to learn how to get applications into the cloud. Most of the developers (I'm one of them), all the DBAs, and management from the CIO downwards were present at meetings with Microsoft to learn how to get SQL databases into Azure, web applications up, and so on. All applications had to be written new, and because nothing like this had ever been done, there was no SQL database we could migrate to Azure SQL Database or even lift and shift. Lifting and shifting was not an option for anything. Those were heady days, as I saw the prospect of getting experience with current technologies rather than the old stuff typically in use where I work. (By old, I mean 15 years old and older.)

    Here's where things get weird. As far as designing a new database goes, I'm sure it could have been done in SSMS or Azure Data Studio by any of the DBAs, but that wasn't discussed in the meetings, so I suspect it happened outside of them. What we did spend a LOT of time on was configuring networking and security, at both ends. Whole days would go by where our security people would be adjusting rules in equipment and Microsoft people would make similar adjustments in Azure. This is where I cannot tell you what was going on, because security and networking aren't in my wheelhouse. I remember thinking to myself, why are we spending so much time on this? One networking/security problem would come up after another. Days turned into weeks, with what to my eyes appeared to be little to no real progress on networking and security. We did get a portion of an application written and tested for creating an appointment for a COVID test.

    But then, suddenly, POOF! Just like retailers suddenly dropping everything related to a major holiday like Christmas after it's passed, it all just stopped. It was an extremely jarring experience. The applications all run on-prem now.

    For reasons I don't understand, no one here does retrospectives on anything. At least, not that I can see. Perhaps managers somewhere discuss why something succeeded or failed, but if so, they never involved developers, DBAs, or other IT people. So, from my point of view, this is what I identify as having killed the move to Azure:

    Whatever the DBAs did, and how they proceeded, is hidden from me. And we've had so much turnover here that all the DBAs who were there in 2020 are now gone.

    Networking and security, both of which are opaque to me, are what I feel contributed most to the failure to get into the cloud.

    And lastly, I feel the developers must shoulder some of the responsibility. The overwhelming majority of developers here are adamant about not adopting anything new. For example, all new applications here were started on .NET Framework 4.5.2, which went out of support many months ago. I warned about .NET 4.5.2 going out of support back in 2018, but my fellow developers ignored me. I remember once, during those meetings with Microsoft, when they were trying desperately to get an ASP.NET MVC project built on .NET Framework 4.5.2 shoved into Azure. It took days for them to admit to Microsoft that they'd written the app with .NET Framework 4.5.2, at which point the Microsoft personnel told them that Azure doesn't support that framework. There was stunned silence from the developers.

    So, I identified two causes of our failure to get an app into the cloud, and perhaps a third if the DBAs had caused something, though I wasn't aware of anything going wrong there. And I could have missed other causes.


    Rod

