Bare Metal

  • Comments posted to this topic are about the item Bare Metal

  • We are still running a "classic" WSFC with a two-node FCI, sitting atop a rock-solid SAN, all on-prem.

    MSSQL Server 2019 running perfectly, without issues.

    Yes we have the admin overhead of updates. And this will almost certainly be the last on-prem MSSQL for us  🙁

    Andy

  • The organization I work for has a variety of deployments driven by customer requirements. When we were still working in the office, retired servers were occasionally brought to our area on the floor and used for testing.

    412-977-3526 call/text

  • I have some fond memories of getting out the screwdriver to deal with the big pile of boxes from Compaq.  Was never officially my job, but the server build guys had a weeks-long backlog and trusted me enough to do it.  Let my project skip the queue.

    We still have some servers running on bare metal, but not many.  The ones we have left are at our manufacturing sites, on isolated networks, and either directly control or collect data from devices that control the manufacturing process.  Those remaining will almost all go virtual in the next few years as we are setting up new VMware clusters on those isolated networks.

  • Andy sql wrote:

    We are still running a "classic" WSFC with a two-node FCI, sitting atop a rock-solid SAN, all on-prem.

    MSSQL Server 2019 running perfectly, without issues.

    Yes we have the admin overhead of updates. And this will almost certainly be the last on-prem MSSQL for us  🙁

    Andy

    Interesting. Was this installed as 2019 or upgraded from previously?

  • TL wrote:

    I have some fond memories of getting out the screwdriver to deal with the big pile of boxes from Compaq.  Was never officially my job, but the server build guys had a weeks-long backlog and trusted me enough to do it.  Let my project skip the queue.

    We still have some servers running on bare metal, but not many.  The ones we have left are at our manufacturing sites, on isolated networks, and either directly control or collect data from devices that control the manufacturing process.  Those remaining will almost all go virtual in the next few years as we are setting up new VMware clusters on those isolated networks.

    I'd forgotten about factory/manufacturing situations. Those likely will keep some level of local hardware for years.

  • What we are doing on the process control side is setting up a local VMware appliance to handle all the Windows and Linux servers in that realm.  HP and Dell both sell some nice "data center in a box" devices designed to use blade servers and disk arrays under a hypervisor to make that part easy.  You still have all the individual PLCs and other data collector devices, but from a DR perspective those are cheap and disposable.  Our manufacturing facilities are far too rural to rely on cloud resources for anything related to production or shipping.

  • My last physical hardware environment has to be close to 6-7 years ago!  I do sometimes miss the 'old' days of installing the OS and setting up and configuring the boxes. Currently running in Azure on SQL Managed Instance.

  • Steve Jones - SSC Editor wrote:

    Interesting. Was this installed as 2019 or upgraded from previously?

    New physical nodes installed with Windows Server 2019 and hooked up to Fibre Channel & SAN shared storage; set up the MSSQL 2019 FCI; then migrated the DBs (and all the other bits and bobs) over from the old MSSQL cluster, which was running 2008 R2 I think (a rough backup/restore sketch is below). Was a nice little project, unfortunately done by somebody else as I was busy on other stuff - can't win sometimes 🙁

    Andy
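
    PS - the DB move in a setup like that is typically just backup/restore. A rough sketch only, with made-up database, share, and file names (not the real ones from the project):

    -- On the old 2008 R2 cluster: take a copy-only backup so the existing
    -- backup chain is not disturbed.
    BACKUP DATABASE [AppDB]
        TO DISK = N'\\backupshare\migration\AppDB.bak'
        WITH COPY_ONLY, CHECKSUM;

    -- On the new SQL Server 2019 FCI: restore, relocating the files onto the
    -- new shared-storage volumes. A 2008 R2 backup restores directly onto
    -- 2019; the database is upgraded as part of the restore.
    RESTORE DATABASE [AppDB]
        FROM DISK = N'\\backupshare\migration\AppDB.bak'
        WITH MOVE N'AppDB'     TO N'S:\Data\AppDB.mdf',
             MOVE N'AppDB_log' TO N'L:\Log\AppDB_log.ldf',
             RECOVERY;

    -- Once testing is done, raise the compatibility level to the native 2019 level.
    ALTER DATABASE [AppDB] SET COMPATIBILITY_LEVEL = 150;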

  • We have a physical 3-node WSFC with 6 SQL FCIs. We are using cross-site AGs for DR. We built this in 2017; running MSSQL 2016. Pretty much everything else is a VM at this point. I enjoy the hardware and internals discussion (a rough DMV health-check sketch is below).

    I get a bit jealous watching Thomas Grohser's presentations on scaling SQL Server -- I think he tends to deal only with physical servers at a massive scale.
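
    The check itself is nothing fancy - roughly the following, a minimal sketch using only the standard DMVs (nothing site-specific in it): which node currently owns the instance, cluster node status, and AG replica health. Run it on each FCI.

    -- Which WSFC node owns this FCI right now
    SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_owner_node,
           SERVERPROPERTY('IsClustered')                 AS is_clustered;

    -- Cluster node membership and status as seen by this instance
    SELECT NodeName, status_description, is_current_owner
    FROM sys.dm_os_cluster_nodes;

    -- Cross-site AG replica roles and synchronization health
    SELECT ag.name AS ag_name,
           ar.replica_server_name,
           rs.role_desc,
           rs.synchronization_health_desc
    FROM sys.availability_groups AS ag
    JOIN sys.availability_replicas AS ar
         ON ar.group_id = ag.group_id
    JOIN sys.dm_hadr_availability_replica_states AS rs
         ON rs.replica_id = ar.replica_id
        AND rs.group_id = ag.group_id;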

  • Andy sql wrote:

    Steve Jones - SSC Editor wrote:

    Interesting. Was this installed as 2019 or upgraded from previously?

    New physical nodes installed with Windows Server 2019 and hooked up to Fibre Channel & SAN shared storage; set up the MSSQL 2019 FCI; then migrated the DBs (and all the other bits and bobs) over from the old MSSQL cluster, which was running 2008 R2 I think. Was a nice little project, unfortunately done by somebody else as I was busy on other stuff - can't win sometimes 🙁

    Andy

    You have to share some of the fun with others. Of course, you could always write about it for me 😉

  • Coffee_&_SQL wrote:

    We have a physical 3-node WSFC with 6 SQL FCIs. We are using cross-site AGs for DR. We built this in 2017; running MSSQL 2016. Pretty much everything else is a VM at this point. I enjoy the hardware and internals discussion.

    I get a bit jealous watching Thomas Grohser's presentations on scaling SQL Server -- I think he tends to deal only with physical servers at a massive scale.

    You should write up about managing or patching the FCIs. I'm sure a few people would like to learn.

    I love Thomas, but when we talk, he always complains about really hard things that don't work well in SQL Server - problems that no one else but him has. Still, he's entertaining. Offer to buy him an ice cream sometime and he'll talk your ear off.

  • I've had a few in the last couple of years.

    One environment was a remote (very remote) site with a high-latency, unreliable connection, supporting infrastructure that could not go down, with no IT other than low-level network technicians and an occasional engineer ever on site. It had to be simple enough for someone with zero skills to plug in and power on, if nothing else to reach the out-of-band management over an alternate dial-up or T1-over-satellite connection.

    Another was a client with a mostly small application footprint but a relatively massive SQL footprint (16 cores of ETL, 64 cores of web- and application-facing SQL) and no sysadmins, only developers. It was also in an area with bad storms. The original goal was to achieve better availability in the cloud - which they initially moved to - but within a year it was quickly discovered that any storm severe enough to knock their primary facility out of production also meant none of the workers would be working, regardless of the availability of their LOB applications. They additionally discovered that operating in the cloud meant they were taken down by internet outages a few times a year during moderate storms they otherwise would have been able to ride out on facility generators. In the end they saved several hundred thousand dollars a year, amortized over a projected 7 years, by moving back on-prem and then hiring a consultant about 3 times a year to maintain the hardware, plus a few ad hoc support hours for the rare sysadmin tasks the devs couldn't figure out on their own.

    Both environments were almost entirely static and very easy and predictable to support. For the remote site I would mail a Service Pack for ProLiant DVD to the site and have someone put it in the optical drive to do the firmware and driver updates. Everything in both environments was locally HA with DR, and procedures were in place to quickly rebuild or replace a failed node.

    I nearly had a new physical SQL deployment launch, where the licensing and disk I/O were going to be cheaper, but requirements changed and made it no longer necessary.


  • I think those local environments are tough, though I'd make sure they were VMs. It helps with replacing hardware and DR. I think this is one reason Azure and AWS moved to support hybrid/offline: they need manufacturing, rural sites, cruise ships, etc. - places where you can't be reliably connected all the time.

