Microsoft's Hyper-V R2 vs. VMware's vSphere: A feature comparison
By Scott Lowe
Takeaway: VMware and Microsoft are ramping up their virtualization games with relatively new releases. Scott Lowe compares and contrasts some of the major features in vSphere and Hyper-V R2.
Microsoft was late to the virtualization game, but the company has made gains against its primary competitor in the virtualization marketplace, VMware. In recent months, both companies released major updates to their respective hypervisors: Microsoft’s Hyper-V R2 and VMware’s vSphere. In this look at the hypervisor products from both companies, I’ll compare and contrast some of the products’ more common features and capabilities. I do not, however, make recommendations about which product might be right for your organization.
Table A compares items in four editions of vSphere and three available editions of Hyper-V R2. Below the table, I explain each of the comparison items. (Product note: With vSphere, VMware introduced an Enterprise Plus edition of its hypervisor product. Enterprise Plus provides an expanded set of capabilities that were not present in older product versions; customers must upgrade from Enterprise to Enterprise Plus in order to obtain these capabilities.)
Table A
Max host processors. Indicates the number of physical host processors that can be recognized by the system. Bear in mind that the Windows columns are Windows limits and not necessarily Hyper-V limits.
Max cores/processor. How many processor cores per physical processor are recognized?
Max virtual SMP. In an individual virtual machine, this indicates the maximum number of supported virtual processors. Note: This is a maximum value; not every guest operating system can support the maximum number of virtual processors.
Max host RAM (GB). The maximum amount of RAM recognized by the hypervisor.
Max RAM/vm. The maximum amount of RAM that can be allocated to an individual virtual machine.
Failover nodes. The maximum number of physical hosts that can be clustered together. N/A indicates that failover clustering is not supported for that particular hypervisor edition.
Memory overcommit. Does the hypervisor support memory overcommit? Memory overcommitment is a technique available in vSphere that allows administrators to allocate more RAM to virtual machines than is physically available in the host. There are numerous pro and con articles about this topic, but it’s clear that the ability to allocate more resources than are physically available increases overall virtual machine density. The decision to use memory overcommit in a production environment is up to each organization; that said, in the right circumstances, I can see great benefit in this feature.
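The density argument above comes down to simple arithmetic. Here is an illustrative Python sketch; the host size, per-VM allocation, and 1.5:1 ratio are hypothetical numbers, not vSphere defaults.

```python
# Illustrative arithmetic only: how overcommit raises VM density.
host_ram_gb = 64
allocated_per_vm_gb = 4

# Without overcommit, total allocations cannot exceed physical RAM.
vms_without = host_ram_gb // allocated_per_vm_gb  # 16 VMs

# With a hypothetical 1.5:1 overcommit ratio, total allocation may exceed
# physical RAM, relying on techniques such as page sharing and ballooning
# to cover the gap.
overcommit_ratio = 1.5
vms_with = int(host_ram_gb * overcommit_ratio) // allocated_per_vm_gb  # 24 VMs

print(vms_without, vms_with)  # 16 24
```

Whether those extra eight VMs are safe to run depends entirely on how much of their allocated RAM the guests actually touch.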
Transparent page sharing. Transparent page sharing is one method by which memory overcommitment is achieved. With this technique, identical memory pages shared between virtual machines are stored only once in physical RAM. Let’s say that you have 100 virtual machines running Windows XP for VDI. Using transparent page sharing, RAM isn’t necessarily a major limiting factor when it comes to desktop density on the server. VMware has an excellent example of this technique in action.
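The space savings can be sketched in a few lines. This toy example only illustrates the accounting; a real hypervisor hashes candidate pages and shares them copy-on-write, and the function name here is purely hypothetical.

```python
# Toy sketch of transparent page sharing: identical pages across VMs
# are stored once. Real hypervisors hash pages and share them
# copy-on-write; this only illustrates the space accounting.
def shared_pages_footprint(vm_pages):
    """vm_pages: one list of page contents per VM.
    Returns (naive_page_count, deduplicated_page_count)."""
    naive = sum(len(pages) for pages in vm_pages)
    unique = {page for pages in vm_pages for page in pages}
    return naive, len(unique)

# Three VMs booting the same OS share most of their read-only pages.
os_pages = [b"kernel", b"libs", b"drivers"]
vms = [os_pages + [f"app{i}".encode()] for i in range(3)]
print(shared_pages_footprint(vms))  # (12, 6)
```

The more homogeneous the guests, as in the 100-desktop VDI example, the closer the deduplicated count gets to a single OS image.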
Live Migration/VMotion. The ability for the hypervisor to migrate virtual machines between host servers without significant downtime. This is considered one of the most significant availability benefits provided by virtualization solutions.
Simultaneous Live Migration. Can the product utilize its Live Migration capabilities to move multiple virtual machines simultaneously between nodes?
Live guests per host. The number of virtual machines that can be powered on for a maxed-out host. In the real world, I’d be extraordinarily surprised to see anyone getting close to these limits. Virtualization is a great way to lower costs, but there are limits.
Live guests/HA cluster node. If you’re running your hypervisor in a cluster, this is the maximum number of virtual machines that can be active on any single host in the cluster. For vSphere with update 1, if you have eight or fewer cluster hosts, you can run up to 160 VMs per host. With nine or more cluster hosts, that number drops to 40.
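The vSphere Update 1 rule just described is easy to capture as a tiny helper; the function name is hypothetical, and the figures come straight from the limits above.

```python
# The HA per-host VM limit for vSphere with Update 1, as stated above:
# clusters of 8 or fewer hosts allow 160 VMs per host; 9 or more drops
# the limit to 40. Function name is illustrative, not a VMware API.
def max_vms_per_host(cluster_hosts):
    """Max powered-on VMs per host in an HA cluster (vSphere w/ Update 1)."""
    return 160 if cluster_hosts <= 8 else 40

print(max_vms_per_host(8), max_vms_per_host(9))  # 160 40
```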
Distributed Resource Scheduler. DRS is a technology that enables the migration of virtual machines between hosts based on business rules. This can be a boon for organizations with strict SLAs.
Snapshots per VM. The maximum number of snapshots that can be taken of an individual virtual machine. A snapshot is a point-in-time image of a virtual machine that can be used as part of a backup and recovery mechanism. I find snapshots incredibly useful, particularly on the workstation side of the equation, where a lot of “playing” takes place.
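The point-in-time behavior of snapshots can be modeled as a copy-on-write chain: after a snapshot, new writes land in a delta layer while the parent stays frozen. This is an illustrative sketch only; real snapshot formats (VMDK delta disks, Hyper-V AVHD files) are far more involved.

```python
# Toy copy-on-write snapshot chain. Each layer records only the blocks
# written after it was created; reads walk newest-to-oldest.
class Snapshot:
    def __init__(self, parent=None):
        self.parent = parent
        self.blocks = {}          # only blocks written into this layer

    def write(self, block_no, data):
        self.blocks[block_no] = data

    def read(self, block_no):
        node = self
        while node is not None:
            if block_no in node.blocks:
                return node.blocks[block_no]
            node = node.parent
        return None               # block never written anywhere in chain

base = Snapshot()
base.write(0, "original")
snap = Snapshot(parent=base)      # take a snapshot; base is now frozen
snap.write(0, "modified")
print(snap.read(0), base.read(0))  # modified original
```

Reverting to the snapshot is simply discarding the top layer, which is why snapshots are so convenient for the workstation "playing" described above.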
Thin Provisioning. One decision that has to be made early on in the life of any server (virtual or physical) is how much storage to allocate to the system. Too much storage and you waste valuable disk space — too little storage and services crash. In order to maintain reliable services, most IT shops overprovision storage to make sure that it doesn’t run out; but that conservatism adds up over time. Imagine if you have 100 VMs all with 4 or 5 GB of “wiggle room” going unused. With thin provisioning, you can have the best of both worlds. You can provision enough disk space to meet your comfort level, but under the hood, the hypervisor won’t allocate it all. As space begins to run low, the hypervisor will make more space available up to the maximum volume size. Although thin provisioning shouldn’t be used for massive workloads, it can be a huge boon to organizations that want conservatism without breaking the bank.
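The allocate-on-first-write idea behind thin provisioning can be sketched like this. The class, block size, and sizes are all hypothetical; this is not how VMFS or VHD thin formats work internally.

```python
# Toy thin-provisioned disk: backing storage is allocated only when a
# block is first written, so the provisioned size can far exceed the
# space actually used.
class ThinDisk:
    BLOCK = 4096  # bytes per block (hypothetical)

    def __init__(self, provisioned_blocks):
        self.provisioned = provisioned_blocks
        self.blocks = {}  # backing store grows on demand

    def write(self, block_no, data):
        if block_no >= self.provisioned:
            raise IOError("write past provisioned size")
        self.blocks[block_no] = data  # first write allocates the block

    def used_bytes(self):
        return len(self.blocks) * self.BLOCK

    def provisioned_bytes(self):
        return self.provisioned * self.BLOCK

disk = ThinDisk(provisioned_blocks=1_000_000)  # ~4 GB promised to the guest
disk.write(0, b"boot sector")
print(disk.provisioned_bytes(), disk.used_bytes())  # 4096000000 4096
```

The guest sees the full provisioned size; the array only pays for blocks actually written, which is exactly the "wiggle room" savings described above.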
Storage Live Migration. This feature enables the live migration of a virtual machine’s disk files between storage arrays and adds an additional level of availability potential to a virtual environment.
Distributed Switch. VMware and Microsoft have virtual switches in their products, but only VMware has taken it one step further with the introduction of vSphere Enterprise Plus’ Distributed Switch. According to VMware, “Distributed Switch maintains network runtime state for VMs as they move across multiple hosts, enabling inline monitoring and centralized firewall services. It provides a framework for monitoring and maintaining the security of virtual machines as they move from physical server to physical server and enables the use of third party virtual switches such as the Cisco Nexus 1000V to extend familiar physical network features and controls to virtual networks.” In short, this new capability increases VMware’s availability and security capabilities.
Direct I/O. The ability for a virtual machine to bypass the hypervisor layer and directly access a physical I/O hardware device. There is limited support for this capability in vSphere; the product supports direct I/O operations to a few storage and networking controllers. Called VMDirectPath I/O, this feature can improve overall performance since it eliminates the “virtualization penalty” that can take place when hardware access is run through the hypervisor. There are some major disadvantages to VMDirectPath; for example, VMotion no longer works because the virtual machine is tied to specific physical hardware. (Note: This feature is different from direct access to disks, which Hyper-V does support.)
Max. partition size (TB). What is the largest partition supported by the hypervisor? Although VHD-based volumes, such as those used by Hyper-V R2, can be up to 2 TB in size, read this blog by Brian Henderson for insight into maximum Windows partition sizes, particularly if you bypass the VHD option altogether and use disks directly.
Application firewall (vShield). According to VMware, “VMware vShield Zones enables you to monitor, log and block inter-VM traffic within an ESX host or between hosts in a cluster, without having to divert traffic externally through static physical chokepoints. You can bridge, firewall, or isolate virtual machine between multiple zones defined by your logical organizational and trust boundaries. Both allowed and blocked activities are logged and can be graphed or analyzed to a fine-grained level.” In other words, you don’t need to run traffic through external switches and routers to protect applications from one another.
Virtual instance rights. This is a Microsoft-only right that can seriously lower the overall cost of running Hyper-V R2 in a Windows-only environment. If you use the Data Center edition of Windows, you can run as many Windows Server-based virtual machines as you like without incurring additional server licensing costs.
Hypervisor licensing. The method by which the product is licensed: either per host or per processor.
My school’s hypervisor of choice
At Westminster College, we continue to run VMware for our virtualization services. Why? Mainly because it’s tried and true. That said, budget pressure forces us to constantly reevaluate services and priorities, and VMware’s total cost is becoming more of an issue. As Microsoft continues to improve Hyper-V R2, we will monitor its progress to determine if and when it might be able to replace VMware, although a possible investment in VMware’s VDI product might lock us into VMware for the long haul.
I like VMware’s memory overcommitment capabilities and believe that, if used right, the feature can be a boon when it comes to density, particularly as we look at virtualizing desktop computers. On the other hand, for very Microsoft-centric organizations, Hyper-V R2 makes Microsoft’s hypervisor offering extremely compelling.