IO comparison: Proxmox KVM (raw, qcow2, vmdk) vs. VMware ESXi 5.1 (LSI SAS, pvSCSI), latest builds


1) Convert QCOW2 to VMDK
qemu-img convert [-c] [-p] [-f fmt] [-t cache] [-O output_fmt] [-o options] [-S sparse_size] filename output_filename
qemu-img convert -f qcow2 -O vmdk -o adapter_type=lsilogic,subformat=streamOptimized,compat6 vm-1301-disk-1.qcow2 vm-1301-disk-1.vmdk

-f source image format
-O output image format
-o format-specific options

List the supported VMDK options:
qemu-img convert -O vmdk -o ? source dest.vmdk

root@~# qemu-img convert -O vmdk -o ?
Supported options:
size             Virtual disk size
adapter_type     Virtual adapter type, can be one of ide (default), lsilogic, buslogic or legacyESX
backing_file     File name of a base image
compat6          VMDK version 6 image
hwversion        VMDK hardware version
subformat        VMDK flat extent format, can be one of {monolithicSparse (default) | monolithicFlat | twoGbMaxExtentSparse | twoGbMaxExtentFlat | streamOptimized} 
zeroed_grain     Enable efficient zero writes using the zeroed-grain GTE feature
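Put together, a hedged end-to-end sketch of the conversion step (filenames taken from the example above; the ESXi host name and datastore path in the copy step are assumptions):

```shell
# Convert the Proxmox qcow2 disk to a streamOptimized VMDK (compact for transfer)
qemu-img convert -p -f qcow2 -O vmdk \
    -o adapter_type=lsilogic,subformat=streamOptimized,compat6 \
    vm-1301-disk-1.qcow2 vm-1301-disk-1.vmdk

# Sanity-check the result (format, virtual size, create type) before copying
qemu-img info vm-1301-disk-1.vmdk

# Copy to the ESXi datastore (host and path are illustrative)
scp vm-1301-disk-1.vmdk root@esxi:/vmfs/volumes/11/
```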


2) Clone the uploaded VMDK into a native VMFS disk (source and destination must differ):
vmkfstools -i /vmfs/volumes/11/vm-1301-disk-1.vmdk /vmfs/volumes/11/vm-1301.vmdk
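vmkfstools -i clones the source into a new disk, so the destination file name must differ from the source; a sketch with the -d flag selecting the target disk format (thin is just one choice):

```shell
# Clone the streamOptimized VMDK into a usable VMFS disk
vmkfstools -i /vmfs/volumes/11/vm-1301-disk-1.vmdk \
              /vmfs/volumes/11/vm-1301.vmdk -d thin

# Other -d formats: zeroedthick (default), eagerzeroedthick
# Remove the streamOptimized source afterwards if it is no longer needed
```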








Comparison of setups:
1) Fujitsu Server + 8Gbps FC + Storage 24 HDD @10k
2) HP Server + 2Gbps FC + Storage RAID10 6 HDD @10k
3) HP Server + 6Gbps local SCSI + RAID10 6 HDD @10k
4) Simple PC + SSD


PC and HDD/SSD:

All PC tests: CrystalDiskMark, 1 thread throughout; throughput in MB/s, iops in parentheses for the 4KB tests.

                                       READ                                        WRITE
Disk                                   seq-128KB  rand-4KB    seq-1MB  4KB         seq-128KB  rand-4KB    seq-1MB  4KB
                                       Q32        Q32 (iops)  Q1       Q1 (iops)   Q32        Q32 (iops)  Q1       Q1 (iops)
Hitachi HDS721050CLA362 500GB 7200rpm   53        0.5 (119)    75      0.3 (74)     75        0.6 (143)    68      0.5 (130)
OCZ Agility 3 480GB SATA3              186        17  (4134)  184      15  (3600)  112        19  (4670)  147      19  (4555)
Intel 520 500GB SATA3                  226        23  (5680)  223      16  (3900)  164        29  (7100)  147      29  (7040)
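The iops figures in parentheses follow directly from the 4KB throughput; a quick sketch of the relation (CrystalDiskMark counts MB as 10^6 bytes; small rounding differences remain because the MB/s values above are rounded):

```python
def iops_from_mb_s(mb_s: float, block_bytes: int = 4096) -> float:
    """Throughput in MB/s (MB = 10^6 bytes) -> IO operations per second
    for a fixed block size (4 KiB here)."""
    return mb_s * 1_000_000 / block_bytes

# Intel 520, READ rand-4KB-Q32: 23 MB/s
print(round(iops_from_mb_s(23)))   # ~5615; the table shows 5680 (23.3 MB/s unrounded)
```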



Servers and Storage:
Result Comparison for 4 setups








Setup nr. 1

HOST Specs:
Fujitsu Blade
CPU: 12x 2.4 GHz Xeon E5-2620 v3
RAM: 96 GB
Storage: 3PAR FC array, 24x HDD 10k SAS (via 8 Gbps Fibre Channel)
Single HOST assigned to this LUN


Soft Used:
Proxmox VE v 4.2-56 (running kernel: 4.4.13-1-pve) + LVM
ESXi 5.1 build 2323236 + VMFS5
CrystalDiskMark v 5.1.2 x64

VM Specs:
for PVE: Win7 Ultimate 64-bit (clean install) + virtio drivers (IO + network)
for VMware: Win7 Ultimate 64-bit (clean install) + pvSCSI driver (when using the pvSCSI controller)
CPU: 8 vCPUs (2 sockets x 4 cores)
RAM: 16 GB
HDD: 50 GB
Single running VM on this HOST

Tests Done:
PVE raw: no-cache/writeback/writethrough/direct-sync
PVE qcow2: no-cache/writeback/writethrough/direct-sync
PVE vmdk: no-cache/writeback/writethrough/direct-sync
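Each cache mode above is a per-disk option in Proxmox; a minimal sketch of switching modes between benchmark runs with qm (the VMID 1301 and the local-lvm storage/volume names are assumptions):

```shell
# Switch the cache mode of the VM's virtio disk between benchmark runs.
# VMID (1301) and storage/volume names are illustrative.
qm set 1301 --virtio0 local-lvm:vm-1301-disk-1,cache=writeback

# Accepted cache values: none (the "no-cache" default), writeback,
# writethrough, directsync, unsafe
qm config 1301 | grep virtio0   # confirm the active setting
```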

ESXi LSI SAS (Windows 7 does not require additional drivers for this controller)
ESXi pvSCSI (paravirtual SCSI controller, which requires drivers on Windows 7)

Info:
VMware:
 - http://www.vmware.com/files/pdf/1M-iops-perf-vsphere5.pdf
 - Configuring disks to use VMware Paravirtual SCSI (PVSCSI) adapters

PVE KVM:
 - http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf

Results:
 - proxmox kvm raw
 - proxmox kvm qcow2
 - proxmox kvm vmdk
 - vmware esxi 5.1



Setups nr. 2 and 3 (one HOST, two storages):

HOST Specs:
HP Server
CPU: 24 cores @ 2.6 GHz (4 sockets x 6 cores) AMD Opteron
RAM: 256 GB
 - Storage1: IBM Storage, RAID10 6x HDD 10k SAS (via 2 Gbps Fibre Channel)
     Single HOST assigned to this LUN

 - Storage2: local HP SCSI controller, RAID10 6x HDD 10k SAS
     Single HOST assigned to this LUN

Soft Used:
Proxmox VE v 4.2-56 (running kernel: 4.4.13-1-pve) + LVM
ESXi 5.1 build 2323236 + VMFS5
CrystalDiskMark v 5.1.2 x64

VM Specs:
for PVE: Win7 Ultimate 64-bit (clean install) + virtio drivers (IO + network)
CPU: 8 vCPUs (2 sockets x 4 cores)
RAM: 16 GB
HDD: 50 GB
Single running VM on this HOST



VMWARE nested

ESXi -> VM (Win2012)
CPU: 183 billion ops (BurnInTest v6.0)

READ : 187/16.8 (4116)/196/11.6 (2822)  intel-ssd
WRITE: 141/18.4 (4480)/153/14.4 (3226)  intel-ssd
(values: seq-128KB-Q32 / rand-4KB-Q32 (iops) / seq-1MB-Q1 / 4KB-Q1 (iops), MB/s)

ESXi -> VM (ESXi) -> VM (XP, nested)
CPU: 110 billion ops

READ : 95/1.1 (270)/106/0.8 (208)  hdd-raid10-6hdd-7200
READ :102/4.0 (966)/ 96/3.5 (858)  intel-ssd

WRITE: 21/0.7 (183)/ 20/0.6 (136)  hdd-raid10-6hdd-7200
WRITE: 81/2.9 (700)/ 85/2.4 (575)  intel-ssd