
Performance benchmarks: KVM vs. Xen

After having some interesting discussions last week around KVM and Xen performance improvements over the past years, I decided to do a little research on my own. The last complete set of benchmarks I could find were from the Phoronix Haswell tests in 2013. There were some other benchmarks from 2011 but those were hotly debated due to the Xen patches headed into kernel 3.0.

The 2011 tests had a good list of benchmarks and I've done my best to replicate that list here three years later. I've removed two or three of the benchmark tests because they didn't run well without extra configuration or they took an extremely long time to run.

Testing environment

My testing setup consists of two identical SuperMicro servers. Both have a single Intel Xeon E3-1220 (four cores, 3.10GHz), 24GB Kingston DDR3 RAM, and four Western Digital RE-3 160GB drives in a RAID 10 array. BIOS versions are identical.

All of the tests were run in Fedora 20 (with SELinux enabled) for the hosts and the virtual machines. Very few services were left running during the tests. Here are the relevant software versions:

  • Kernel: 3.14.8
  • For KVM: qemu-kvm 1.6.2
  • For Xen: xen 4.3.2

All root filesystems are XFS with the default configuration. Virtual machines were created with virt-manager using the default configuration available for KVM and Xen. Virtual disks used raw images, and each virtual machine was allotted 8GB of RAM and 4 virtual CPUs. Xen guests used PVHVM.
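For anyone who wants to reproduce a comparable guest without clicking through virt-manager, here is a minimal sketch using the libvirt Python bindings. It is an illustration under assumptions, not the exact configuration used in these tests: the domain name, image path, and network are hypothetical, and virt-manager fills in many more device details than this XML shows.

```python
# Minimal sketch: define and boot a KVM guest similar to the test VMs
# (8GB RAM, 4 vCPUs, raw disk image on virtio). Names and paths are illustrative.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>bench-guest</name>
  <memory unit='GiB'>8</memory>
  <vcpu>4</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/bench-guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local KVM hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the guest with libvirt
dom.create()                           # boot it
print("Started guest:", dom.name())
conn.close()
```

The same general approach applies to the Xen guests by connecting to the `xen:///` URI and adjusting the domain type; with PVHVM, the guest runs as an HVM container using paravirtualized drivers.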

Caveats

One might argue that Fedora's parent company, Red Hat, puts a significant amount of effort into maintaining and improving KVM within their distribution. Red Hat hasn't made significant contributions to Xen in years, having made the switch to KVM back in 2009. I've left this out of scope for these tests, but it's still something worth considering.

Also, contention was tightly controlled and minimized. On most virtualized servers, you're going to have multiple virtual machines fighting for CPU time, disk I/O, and access to the network. These tests didn't take that type of activity into consideration. One hypervisor might have poor performance at low contention but then perform much better than its competitors when contention for resources is high.

These tests were performed only on Intel CPUs. Results may vary on AMD and ARM.

Results

The tests against the bare metal servers served as a baseline for the virtual machine tests. The deviation in performance between the two servers without virtualization was 0.51% or less.

KVM's performance fell within 1.5% of bare metal in almost all tests. Only two tests fell outside that variance. One of those tests was the 7-Zip test, where KVM was 2.79% slower than bare metal. Oddly enough, KVM was 4.11% faster than bare metal with the PostMark test (which simulates a really busy mail server). I re-ran the PostMark tests again on both servers and those results fell within 1% of my original test results. I'll be digging into this a bit more, as my knowledge of virtio's internals isn't terribly deep.

Xen's performance varied more from bare metal than KVM's. Three tests came within 2.5% of bare metal speeds, but the deviations in the remaining tests were anywhere from two to four times larger than KVM's. The PostMark test was 14.41% slower in Xen than bare metal, and I found that result surprising. I re-ran the test and the results during the second run were within 2% of my original results. KVM's best-performing CPU test, the MAFFT alignment, was Xen's second worst.

I've provided a short summary table here with the final results:

| Test | Best Value | Bare Metal | KVM | Xen |
| --- | --- | --- | --- | --- |
| C-Ray | lower | 35.35 | 35.66 | 36.13 |
| POV-Ray | lower | 230.02 | 232.44 | 235.89 |
| Smallpt | lower | 160 | 162 | 167.5 |
| John the Ripper (Blowfish) | higher | 3026 | 2991.5 | 2856 |
| John the Ripper (DES) | higher | 7374833.5 | 7271833.5 | 6911167 |
| John the Ripper (MD5) | higher | 49548 | 48899.5 | 46653.5 |
| OpenSSL | higher | 397.68 | 393.95 | 388.25 |
| 7-Zip | higher | 12467.5 | 12129.5 | 11879 |
| Timed MAFFT Alignment | lower | 7.78 | 7.795 | 8.42 |
| CLOMP | higher | 3.3 | 3.285 | 3.125 |
| PostMark | higher | 3667 | 3824 | 3205 |

If you'd like to see the full data, feel free to review the spreadsheet on Google Docs.

Conclusion

Based on this testing environment, KVM is almost always within 2% of bare metal performance. Xen fell within 2.5% of bare metal performance in three out of ten tests but often had a variance of up to 5-7%. Although KVM performed much better on the PostMark test, it was the only I/O test in this group, and more testing is required before a clear winner in disk I/O can be declared.

As for me, I'd like to look deeper into how KVM and Xen handle disk I/O and why their results were so different. I may also run some tests under contention to see whether one hypervisor handles that stress better than the other.

I'd encourage readers to review the list of benchmark tests available in the Phoronix test suite and find some that emulate portions of their normal workloads. If your workloads are low CPU and high I/O in nature, look for some of the I/O stress tests in the suite. On the other hand, if you do a lot of audio/video transcoding, try some of the x264 or mp3 tests within the suite.
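If you want to script a run like the one in this article, here is a small sketch that batch-runs a few Phoronix Test Suite profiles from Python. The profile names in the list are assumptions; check `phoronix-test-suite list-available-tests` for the exact identifiers in your version of the suite, and run `phoronix-test-suite batch-setup` once beforehand so the runs need no interactive input.

```python
# Sketch: batch-run a handful of Phoronix Test Suite profiles in sequence.
# Profile names below are assumptions; verify them with
#   phoronix-test-suite list-available-tests
import subprocess

PROFILES = [
    "pts/postmark",        # disk I/O (busy mail server simulation)
    "pts/compress-7zip",   # CPU: compression
    "pts/c-ray",           # CPU: ray tracing
]

for profile in PROFILES:
    print("Running {} ...".format(profile))
    result = subprocess.run(["phoronix-test-suite", "batch-benchmark", profile])
    if result.returncode != 0:
        print("{} exited with code {}".format(profile, result.returncode))
        break
```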

UPDATE: Chris Behrens pointed out that I neglected to mention the type of virtual machine I tested with Xen. I used PVHVM for the tests as it's the fastest-performing option for Linux guests on Xen 4.3. Keep in mind that PVH is available in Xen 4.4 but that version of Xen isn't available in Fedora 20 at this time.


via: http://major.io/2014/06/22/performance-benchmarks-kvm-vs-xen/

Translator: [translator ID] Proofreader: [proofreader ID]

This article was originally translated by LCTT and is proudly presented by Linux中国 (Linux.cn).