September 9, 2011 at 10:18 pm
Anyone have any experience moving from a physical installation to a virtualized one?
We're rewriting a legacy db and app, and our vendor seems to think that we can replace a 24-core dedicated HW machine with a 4-vCPU install on blades/VMware. We've gone with this setup for our dev environment now... The db schema is being cleaned up and already shows better normalization, but all indications are that the load we ran on our 24-core beast will likely be the norm during our high-utilization periods on the 4-vCPU setup. Running one of our regular jobs pins the CPUs at 85%-90% utilization in DEV with no other users on the system, and this is a job we need to run with 200+ simultaneous users hitting the system, with a possible peak of 500 out of a user base of 2000. I'm of the opinion that it's a non-starter, but I'm looking to get some opinions: can virtualizing the DB enable us to cut our CPU power by a factor of 6?
Disk IO is actually better in the virtualized environment, but that's likely due to newer hardware, and I can't see it making up for the cut in CPU power.
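For what it's worth, here's the rough check we've been running to convince ourselves the jobs are CPU-bound rather than IO-bound. It's just a sketch against the cumulative wait-stats DMV; a high signal-wait percentage points at CPU pressure:

-- Signal waits = time spent waiting for a CPU after the resource wait ended.
-- A high ratio of signal waits to total waits suggests CPU pressure.
SELECT
    SUM(signal_wait_time_ms) AS signal_wait_ms,
    SUM(wait_time_ms) AS total_wait_ms,
    CAST(100.0 * SUM(signal_wait_time_ms)
         / NULLIF(SUM(wait_time_ms), 0) AS DECIMAL(5, 2)) AS signal_wait_pct
FROM sys.dm_os_wait_stats;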
September 10, 2011 at 2:22 am
can virtualizing the DB enable us to cut our CPU power by a factor of 6?
No and yes.
Virtualization cannot cut the absolute CPU power an application requires, but it can cut the relative CPU power. In your case: if the absolute CPU power provided by the virtual host is, as an example, 8x what you have right now, then the migration will allow you to cut the CPU power. The number of cores alone is not enough to answer that question... But that has nothing to do with virtualization; it's all due to the improved hardware. Once you start sharing the system with other applications, the advantage of the better hardware tends to drop.
A CPU spike to almost 100% isn't bad by default, either. All it means is that the system is using all the resources available. 😉
If it's a well-tuned OLTP system, the number of CPUs available to the system might not make much of a difference (assuming the same total amount of CPU power). But if it's an OLAP system with a lot of parallel processes, it might be significant.
From my point of view, the major improvement when tuning a database system comes from tuning the schema, queries, and indexing.
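If you want to see where the CPU is actually going, the plan cache is usually the best starting point. A sketch using the standard DMVs (SQL 2005 and later; the TOP (10) cutoff is arbitrary):

-- Top CPU consumers in the plan cache: usually the best tuning candidates.
-- total_worker_time is in microseconds, hence the / 1000.
SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

The queries at the top of that list are where schema, query, and index changes will pay off first.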
September 10, 2011 at 12:19 pm
LutzM (9/10/2011)
...Virtualization cannot cut the absolute CPU power an application requires, but it can cut the relative CPU power... From my point of view, the major improvement when tuning a database system comes from tuning the schema, queries, and indexing.
Thanks Lutz,
The CPU spikes are indeed expected, but we're seeing them quite often, and they're lasting longer than we'd like. Many of our jobs appear to be CPU-bound. Anyone else trying to use the system during these times has noticed a definite decrease in performance, and we only have 4-5 developers hitting the system at the moment.
We haven't even started on any sort of tuning, as our iterative dev cycles tend to have consequences for previous iterations that can wipe out any gains we make. We've only got the most basic of indexes defined.
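When we do get to tuning, I expect we'll start with the missing-index DMVs as a first pass; something like this sketch (the estimated-benefit arithmetic is rough, and the suggestions need vetting before anything gets created):

-- Indexes the optimizer wished it had, ranked by a rough benefit estimate.
-- Treat these as hints, not gospel.
SELECT TOP (10)
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_total_user_cost * migs.avg_user_impact * migs.user_seeks AS est_benefit
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY est_benefit DESC;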
All indications so far are that our new infrastructure/schema/app layers are all performing worse than what they're meant to replace, and the big glaring elephant in the room, for me, is that we've gone from a 24-core 2.8GHz Xeon box to blades and VMware where SQL now has only four 2.8GHz vCPUs, not even a full physical processor on the blade (they're hex-core).
Also of great concern is the licensing. We're currently running Standard on the virtual SQL, and the MS license model appears to count each virtual CPU as a 'physical' CPU, whereas on physical hardware a multi-core CPU is licensed by the socket. So even to upgrade the CPU power in the virtual environment, we're looking at going to Enterprise or higher. Basically, if we find that the virtualized environment doesn't have enough processing power, and we've built other aspects of the infrastructure around it (i.e., failover using vSphere) that dictate we stick with it, our only option is to spend a fortune on Enterprise or Datacenter.
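One quick sanity check we did run on the VM, to confirm what SQL actually sees (schedulers map to the CPUs SQL can run tasks on, so both the vCPU allocation and any edition limits show up here):

-- Count the schedulers SQL Server can actually use.
SELECT COUNT(*) AS visible_online_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';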
September 10, 2011 at 12:50 pm
Kevin Dahl (9/9/2011)
... seems to think that we can replace a 24 core dedicated HW machine with a 4 VCPU install on blades/VMWare.
Here is how it works.
When you do capacity planning on physical hardware, you have to plan for the peak; when you do capacity planning on virtualized hardware, you plan for the average load and rely on bursting to take care of peak load.
OLTP systems in particular usually present two peak loads a day, which together account for about six hours of operation.
If you have a physical host, you have to have the processing power in place to handle peak load, leaving that processing power underused the rest of the day.
If you have a virtualized host, you plan for average load and burst (add CPU power) during peak load.
Hope this clarifies.
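To plan for the average, you first have to measure it. SQL Server keeps a short CPU-utilization history in a ring buffer; a sketch for pulling it (SQL 2005 SP2 and later; one sample per minute, so 256 rows cover roughly the last four hours):

-- CPU history from the scheduler-monitor ring buffer:
-- sql_cpu_pct is SQL Server's share, system_idle_pct is idle time.
SELECT TOP (256)
    rb.[timestamp] AS ms_since_server_start,
    rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
    rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_idle_pct
FROM (
    SELECT [timestamp], CONVERT(xml, record) AS record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = 'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE '%<SystemHealth>%'
) AS rb
ORDER BY rb.[timestamp] DESC;

Average those samples for your baseline and take the max for your burst requirement.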
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a test non-production environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.

September 10, 2011 at 1:34 pm
Kevin Dahl (9/10/2011)
...All indications so far are that our new infrastructure/schema/app layers are all performing worse than what they're meant to replace, and the big glaring elephant in the room, for me, is that we've gone from a 24-core 2.8GHz Xeon box to blades and VMware where SQL now has only four 2.8GHz vCPUs, not even a full physical processor on the blade (they're hex-core)....
I strongly recommend not changing everything at once. You won't be able to tell the real cause of performance differences (for better or worse).
As it sounds, it won't be possible to separate the app and the schema modifications, so you might need to accept that.
But you definitely should compare the different versions on the same hardware. In your scenario, I would test the new as well as the old software version on both the old and the new systems. I might not even do the second step (the system comparison) before I know how the new software version runs on the "old" machine.
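To make those comparisons more than a gut feeling, you can snapshot the wait stats around the same job on each platform; a sketch (clearing the cumulative counters is only acceptable on a test box):

-- Test box only: reset the cumulative wait statistics.
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

-- ...run the identical job here, then see where the time went:
SELECT TOP (10)
    wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

If the top waits differ between the old box and the VM, that tells you whether you're fighting CPU, disk, or something else entirely.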