Multiple instances in one server

  • frederico_fonseca - Wednesday, March 14, 2018 3:27 PM

    YW.

    Now that it's clear that this is a VM, you may wish to rethink it - by using a VM instead of a physical server you are losing CPU power.

    physical - 16 cores / 32 threads - a license is required for the 16 cores only, but SQL Server will use all 32 threads
    virtual - 16 physical cores / 32 threads presented to the VM, but only 16 can be used per license, as on a VM each thread equates to a vCPU

    Not to patronize, but make sure your VM person is aware of the best practices: http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf

    I'm not sure he's aware of anything...
    From this CPU question and other things... The server was initially configured to have RAID 5, and the system is very, very write-intensive...
    Then he changed to a two-tier disk system (SSDs plus 10K disks, for both writes and reads). But honestly, I ran CrystalDiskMark and got 180 MB/s for sequential reads and 80 MB/s for writes... That is not SSD class...
    I have another machine, physical (I always go for physical, but this client has everything virtualised...), with NVMe disks... 1600 MB/s reads and 900 MB/s writes...
    They have 15 users on the system and they are only in a test stage with the server, yet the database already has 50 ms stalls per read, and tempdb (the app is an ERP system that uses tempdb a looootttt) has 210 ms stalls per write... very, very bad... (see the sketches after this post)
    They will have to review the architecture they designed...
    Their current system, not the one they are migrating to, is even worse... They have a massive RAID 5 partition and all six VMs "share" it instead of splitting the RAID so that each VM can have its own array...
    So the disk controller gets choked a lot...



    If you need to work better, try working less...
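
On the quoted point about threads versus vCPUs, one way to confirm what the guest actually presents to SQL Server is to query the OS-level DMVs. A minimal sketch (not from either poster; assumes SQL Server 2012 or later):

    -- Logical CPUs seen by SQL Server, the hyperthread ratio reported by the OS,
    -- and whether a hypervisor was detected
    SELECT cpu_count,
           hyperthread_ratio,
           virtual_machine_type_desc
    FROM   sys.dm_os_sys_info;

    -- Schedulers actually available for user queries
    SELECT COUNT(*) AS visible_online_schedulers
    FROM   sys.dm_os_schedulers
    WHERE  status = 'VISIBLE ONLINE';

On the 16-core / 32-thread physical box cpu_count would typically come back as 32, while a 16-vCPU VM would report 16 - which is the licensing gap described in the quote.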
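
The read and write stall figures mentioned above are the kind of numbers sys.dm_io_virtual_file_stats reports. A minimal sketch (not necessarily how the poster measured them) that lists the average stall per read and per write for every database file:

    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           mf.type_desc,            -- ROWS (data) or LOG
           vfs.num_of_reads,
           vfs.num_of_writes,
           1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
           1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
    FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN   sys.master_files AS mf
           ON  mf.database_id = vfs.database_id
           AND mf.file_id     = vfs.file_id
    ORDER  BY avg_write_stall_ms DESC;

The counters are cumulative since the instance last started, so for current behaviour capture the output twice and compare the two samples. If the tempdb files top the list, moving them to genuinely fast storage (and following the usual multiple-data-file guidance) is the obvious first step.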

