Virtualise production OLAP Server

  • Hi, we are looking to take on a system from a subsidiary company and have a couple of options for their OLAP platform, which currently runs on SQL Server 2005.

    The question, therefore, is: should we virtualise a production OLAP server (Hyper-V)? Are the concerns in doing this different from those for an OLTP database? What questions should I be asking if we do go down this route (I'm by no means an OLAP expert)? Does it depend on the frequency of aggregation runs and cube builds? Does size matter?! etc...

    Any help, or pointers to other documents would be appreciated.

    Thanks...Simon

  • I haven't heard much about this, though I'd think that IO would be as much of an issue as it is for OLTP, or more so. Depending on the cube builds, a lot of disk activity will take place and you'll need good, dedicated storage.
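
    If you want to put some numbers on that disk activity before committing either way, the file-level IO stats in the relational engine are a reasonable starting point (this is only a sketch - it covers the SQL Server data and log files feeding the cube, not the SSAS cube files themselves, which you'd watch with perfmon disk counters):

        -- Snapshot cumulative IO per database file; capture before and after a
        -- cube build / aggregation run and compare the deltas.
        SELECT  DB_NAME(vfs.database_id)  AS database_name,
                mf.name                   AS logical_file_name,
                vfs.num_of_reads,
                vfs.num_of_bytes_read,
                vfs.io_stall_read_ms,
                vfs.num_of_writes,
                vfs.num_of_bytes_written,
                vfs.io_stall_write_ms
        FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN    sys.master_files AS mf
                ON  mf.database_id = vfs.database_id
                AND mf.file_id     = vfs.file_id
        ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;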

  • The new Nehalem CPUs just came out, and HP's 1U servers based on them support up to 144GB of RAM. Intel has also said that, instead of milking this for a few years, they will release their next-generation 32nm CPUs early next year, and the RAM capacity will probably double again.

    I'm not sure about Hyper-V, but VMware has an option called P2V (physical-to-virtual). You run a script, it scans a system, and it automatically creates a virtual machine for you. ESX Server is now free, but I'm not sure if the P2V option for it is free or needs a license.

  • I think most of the virtualisation products can create a VM from a physical installation. Hyper-V has this as well.

  • Hey there,

    From our experience with virtualising SQL Server (OLTP and OLAP) using fairly high-end hardware and Win2k8, I feel comfortable summarising the team's experiences as: just don't do it!

    Whatever theory might say, performance sucked more than a Dyson, and it wasn't for lack of effort!

    SQLy stuff really is best on dedicated hardware, avoiding SANs if possible.

    Cheers,

    Paul

  • That's a bit of a generalisation - all virtualisation = bad and all SAN = bad.

    It depends on the workload of the environment, its CPU and memory footprint, as well as other issues such as storage cache, HBA configuration, IO queues, IO threads, disk alignment, cluster size, etc.
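
    As a concrete example of that kind of review, a quick look at the instance's top waits gives a rough picture of whether the existing workload is bound on CPU, memory or IO before you decide what to put underneath it. A minimal sketch against sys.dm_os_wait_stats (available from SQL Server 2005 onwards; the excluded wait types are just a few common idle waits, not an exhaustive list):

        -- Rough workload characterisation: what is the instance actually
        -- waiting on? (Figures are cumulative since the last service restart.)
        SELECT TOP (10)
                wait_type,
                waiting_tasks_count,
                wait_time_ms,
                signal_wait_time_ms   -- time spent waiting for CPU after the resource became available
        FROM    sys.dm_os_wait_stats
        WHERE   wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                                  N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR')
        ORDER BY wait_time_ms DESC;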

    Contrary to popular opinion, virtualisation requires thought and design. You can't just whack any old virtualisation platform on some metal and away you go. Like all things: design, review, test, review, repeat, deploy.

    --
    Andrew Hatfield

  • andrew.hatfield (4/26/2009)


    That's a bit of a generalisation - all virtualisation = bad and all SAN = bad.

    It depends on the workload of the environment, its CPU and memory footprint, as well as other issues such as storage cache, HBA configuration, IO queues, IO threads, disk alignment, cluster size, etc.

    Contrary to popular opinion, virtualisation requires thought and design. You can't just whack any old virtualisation platform on some metal and away you go. Like all things: design, review, test, review, repeat, deploy.

    Not really, no.

    Abstraction never adds performance.

    Running SQL Server/SSAS/SSIS in a VM hurts performance relative to the same spec physical tin.

    Running SQL Server from a SAN hurts performance versus locally attached storage.

    That's not to say it should never be done (there are good reasons for a SAN) but I would personally robustly resist virtualisation of any of our SQL Servers.

    All the things you listed are true regardless of whether the environment is P or V.

    I'm sure virtualisation does require thought and design. My problem with it is that it consumes resources and time and adds no value, except in edge cases.

    Cheers,

    Paul

  • Hi Paul,

    With respect, I disagree. As I said, it depends on your workload. Not all systems are candidates for virtualisation. Often, large footprint systems are inappropriate. For those systems that require uncompromised performance, virtualisation may also be inappropriate. As always, test, measure and review.

    As for virtualisation never adding value, that is simply incorrect. Virtualisation provides increased utilisation of hardware through consolidation; flexibility in deployment; flexibility in hardware maintenance; standardisation of hardware through abstraction; and increased uptime through HA, vMotion, Storage vMotion and, soon, Fault Tolerance for zero downtime (presuming it's not the app causing the outage, which is the same whether you're virtualised or on bare metal). DRS provides dynamic resource allocation to ensure that systems receive the resources they require, and moves others around during peak times. The list goes on.

    Granted, the above examples talk about VMware ESX / VI3. We don't do Hyper-V.

    While I agree that not all systems are appropriate targets for virtualisation, there are certainly benefits.

    I have virtualised many database systems, including SQL Server 2000, 2005 and 2008 and Oracle, as well as Exchange 2003. In most cases, other than being informed and consulted during the exercise, teams did not even notice that the system was now virtualised. In many cases we took the opportunity to rework some of the layout - especially storage - and this realised performance increases.

    That is why infrastructure teams look after infrastructure and DBAs look after databases.

    In 9 out of 10 cases, a performance problem is more likely to be in the application or the database - not the infrastructure.

    --
    Andrew Hatfield

  • Here are some links regarding databases on VMware:

    SQL Server Performance in a VMware Infrastructure 3 Environment (VMware and HP)

    www.vmware.com/files/pdf/SQLServerWorkloads.pdf

    VMware and Dell whitepaper

    http://www.vmware.com/pdf/SQL_Server_consolidation.pdf

    Virtualised SQL Server Performance: Scalable and Reliable

    http://blogs.vmware.com/performance/2008/11/database-worklo.html

    Customer Success Stories Featuring ESX Server

    http://www.vmware.com/customers/stories/

    http://www.vmware.com/solutions/business-critical-apps/sql/

    Microsoft support statements for SQL Server in a virtualised environment

    Support policy for Microsoft SQL Server products that are running in a hardware virtualization environment

    http://support.microsoft.com/kb/956893/

    Technical Support for Validated Configurations

    http://www.windowsservercatalog.com/svvp.aspx?svvppage=svvpsupport.htm

    Products that have passed the SVVP requirements for Windows Server

    http://www.windowsservercatalog.com/results.aspx?&bCatID=1521&cpID=0&avc=0&ava=0&avq=0&OR=1&PGS=25&ready=0

    Microsoft server software and supported virtualization environments

    http://support.microsoft.com/kb/957006/

    --
    Andrew Hatfield

  • Andrew,

    There is too much there to respond to line by line, so I will just say this:

    I have worked at two large international companies now that experimented with ESX virtualization. Neither was a conspicuous success. I can read white papers and success stories all day long, but that is no substitute for experience. Let us take an example.

    Consider a single physical eight-core machine with 32GB RAM.

    There are eight existing physical SQL Servers dotted around which we would like to consolidate. The existing SQL Servers are dual-core machines with 4GB RAM each. The workload is such that each box utilizes its hardware fairly fully from time to time, but typically the activity peaks do not coincide across the different servers.

    How would you partition this box for virtualization, and how would that compare to eight instances of SQL Server on the physical server, each configured to use all processors and having a max memory setting of just under 4GB per instance?
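
    For reference, the per-instance cap mentioned above is just the standard max server memory setting, applied to each instance; a minimal sketch (the 3900MB figure is illustrative, chosen to leave a little headroom below 4GB):

        -- Run on each of the eight instances: cap the buffer pool so the
        -- instances fit in 32GB with some memory left over for the OS.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 3900;  -- illustrative value, just under 4GB
        RECONFIGURE;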

    How would hardware utilization be improved by using virtualization? What other benefits would it bring?

    Cheers

    Paul

  • Understanding the current setup would be the first thing I'd look at. Look at the peak loads as well as normal average loads.
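
    If it helps, one low-effort way to compare the peak and average loads on the existing box is to sample a few counters from sys.dm_os_performance_counters on a schedule and log them (the counter choice here is illustrative, not a complete baseline):

        -- Sample a handful of load-related counters; run periodically (e.g. from
        -- a SQL Agent job) and keep the results to compare peak versus average.
        SELECT  GETDATE()      AS sample_time,
                counter_name,
                instance_name,
                cntr_value     -- 'Batch Requests/sec' is cumulative, so diff two samples for a rate
        FROM    sys.dm_os_performance_counters
        WHERE   counter_name IN (N'Batch Requests/sec',
                                 N'Page life expectancy',
                                 N'Transactions/sec');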

    And if you are virtualizing, this is generally done with multiple VMs on a bigger piece of hardware, so be aware of other loads on the machine.

    We gear ourselves more for the peak load (the cube build), as we rebuild every night. This is more intensive than our user activity.

    We're exploring this with Hyper-V, although we would only host SQL, SSAS, and our web servers on a dedicated BI server.

    SQL 2008 offers some pretty substantial improvements in performance, which is part of our plan. You may want to look at this.
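
    One example of those SQL 2008 improvements, relevant to a nightly cube build, is data compression (an Enterprise Edition feature), which trades some CPU for a smaller IO footprint on large fact tables. A hedged sketch - dbo.FactSales is a placeholder table name:

        -- Estimate the saving first, then rebuild the table with page compression.
        EXEC sp_estimate_data_compression_savings
             @schema_name      = 'dbo',
             @object_name      = 'FactSales',     -- placeholder table name
             @index_id         = NULL,
             @partition_number = NULL,
             @data_compression = 'PAGE';

        ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);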

    I'd recommend testing first if possible. Nothing like seeing if it will work in your case.

    When we virtualized a web server with several applications, we went in with a contingency plan: if it wasn't working, we were prepared to get another physical box.

    We also use x64 hardware and software. My workstation is running Windows Server 2008 Hyper-V x64, and I've been very pleased so far. It also allows me to test with both 32-bit and x64 VMs.

    Greg E

  • Hi Paul,

    I've also worked for large organisations, but that's not the point. By definition those success stories are based on experience.

    Looking at your example, and noting that it is very light on detail, I wouldn't consolidate those instances on a bare-metal server as described, let alone virtually. In your example, you are actually reducing the resources available to each SQL instance - memory, CPU and IO.

    If you tried this, then I'm not surprised your experience with virtualisation has been unsuccessful.

    I wouldn't host more than 4 SQL instances on any instance of Windows - virtual or physical. Where I am currently working, a government organisation with approximately 10,000 staff across the most de-centralised state in Australia, we are going through a SQL and hardware refresh of a number of systems.

    This includes upgrading SQL as well as virtualising where appropriate. For those systems that are being virtualised, the current approach is to scale-out. We will only host one (1) SQL instance on any VM across many VMs. This includes existing environments that are currently clustered using MSCS.

    As I have said a number of times in this thread, you can't just throw stuff on servers, hope for the best and complain when it doesn't work. You need to review the current and future requirements, design an appropriate solution, test it, review and improve and then deploy if suitable.

    I hope this helps. I'm happy to provide assistance out of band if you wish.

    As virtualisation becomes more widely deployed and matures further, performance, management and experience will improve. Remember, it's only in the past 10 years or so that people have started taking SQL Server seriously over Oracle.

    --
    Andrew Hatfield

  • Sorry, I missed the last bit in my reply.

    Paul White (4/26/2009)


    How would hardware utilization be improved by using virtualization? What other benefits would it bring?

    Generally, you virtualise a number of systems onto a cluster / farm, so a number of systems go from one physical server each down to 2 - 4 physical servers in total. That means an increase in physical utilisation and the other cost savings that come with it (power, cooling, data centre footprint, cabling, network and storage ports).

    Other benefits include

    - abstraction of hardware

    You can take your installation and put it on any of the virtual servers without having to worry about drivers and chipsets.

    When you refresh your virtualisation platform, you don't need to worry about reinstalling your production systems

    - increased uptime and availability

    With vMotion and Storage vMotion, you don't have outages when you need to perform scheduled maintenance of physical hardware

    With HA, VMs will automatically come back up in the event of a failure

    With Fault Tolerance (coming in vSphere 4), you get service continuity if a VM or host goes down due to constant memory replication across the virtualisation farm

    - DR is much easier

    Replicating VMs is much easier (and more cost-effective) than replicating physical servers

    Depending on your choice of solution, failover may even be automatic across sites

    Testing can become an out-of-band exercise that doesn't interrupt production

    Obviously there is a cost to virtualisation

    - licensing

    - training of support staff

    - there is a management overhead as you're now managing another layer

    However, having said that, the overall TCO reduces significantly when you roll out virtualisation across your enterprise. As always, there is a cost / benefit tipping point. You need to pick your own.

    --
    Andrew Hatfield

  • andrew.hatfield (4/27/2009)


    By definition those success stories are based on experience.

    ...or marketing deals :laugh:

    andrew.hatfield (4/27/2009)


    Looking at your example, and noting that it is very light on detail, I wouldn't consolidate those instances on a bare-metal server as described, let alone virtually. In your example, you are actually reducing the resources available to each SQL instance - memory, CPU and IO.

    I wouldn't host more than 4 SQL instances on any instance of Windows - virtual or physical.

    It is a hypothetical example, but a reasonable one. If you do the maths again, you will see that 8 instances x 4GB = 32GB (the same), and each instance will have a share of eight CPUs instead of two dedicated cores. I have run four to eight Enterprise Edition instances on Itanium and x64 in a four-way clustered configuration, and it works very well indeed. I'm glad you see that virtualization would be a poor choice.

    andrew.hatfield (4/27/2009)


    If you tried this, then I'm not surprised your experience with virtualisation has been unsuccessful.

    Not me - I'm not qualified, but I did assist the guys who were. It was a 12-core machine with 32GB RAM running ESX, with multiple SQL Servers hosted in separate VMs on W2K8. It performed like a slug and made very poor use of what was a decent box.

    andrew.hatfield (4/27/2009)


    Where I am currently working, a government organisation with approximately 10,000 staff across the most de-centralised state in Australia, we are going through a SQL and hardware refresh of a number of systems. This includes upgrading SQL as well as virtualising where appropriate. For those systems that are being virtualised, the current approach is to scale-out. We will only host one (1) SQL instance on any VM across many VMs. This includes existing environments that are currently clustered using MSCS. As I have said a number of times in this thread, you can't just throw stuff on servers, hope for the best and complain when it doesn't work. You need to review the current and future requirements, design an appropriate solution, test it, review and improve and then deploy if suitable.

    I work at a place not too far from you that serves 50 million web pages a day, with the busiest SQL Servers in the country. We do use virtualization - for testing only :laugh:

    Cheers,

    Paul

  • andrew.hatfield (4/27/2009)


    Generally, you virtualise a number of systems onto a cluster / farm, so a number of systems go from one physical server each down to 2 - 4 physical servers in total. That means an increase in physical utilisation and the other cost savings that come with it (power, cooling, data centre footprint, cabling, network and storage ports).

    You also lose hardware and physical location redundancy. The original farm or cluster servers would have had to be under-utilised to stand consolidation on fewer boxes. Presumably those boxes would need to be more powerful (=expensive). Do Google run a grid of commodity servers or several VM'd boxes I wonder? 😉

    andrew.hatfield (4/27/2009)


    You can take your installation and put it on any of the virtual servers without having to worry about drivers and chipsets.

    Cool for testing. Any decently set-up production environment uses cloned machines anyway.

    andrew.hatfield (4/27/2009)


    When you refresh your virtualisation platform, you don't need to worry about reinstalling your production systems. With vMotion and Storage vMotion, you don't have outages when you need to perform scheduled maintenance of physical hardware

    We find that dynamically load-balanced farms and physically separated data centers work better.

    andrew.hatfield (4/27/2009)


    With Fault Tolerance (coming in vSphere 4), you get service continuity if a VM or host goes down due to constant memory replication across the virtualisation farm

    Sounds like a virtualization-specific issue. PolyServe Matrix Server is part of our real high-availability solution.

    andrew.hatfield (4/27/2009)


    Obviously there is a cost to virtualisation

    - licensing

    - training of support staff

    - there is a management overhead as you're now managing another layer

    Add poor performance to the list.

    andrew.hatfield (4/27/2009)


    However, having said that, the overall TCO reduces significantly when you roll out virtualisation across your enterprise.

    That just sounds like advertising to be honest.

    Paul
