VMware vSphere 5

vSphere 5.0, the latest iteration of VMware Inc.’s (NYSE: VMW) “Cloud Operating System,” boasts a wealth of updates, including new tools to manage fleets of VMs and vast tiers of virtualized, vMotion-enabled storage.

Since storage powerhouse EMC owns a significant chunk of VMware, we had long wondered when the storage dimension would be exploited more heavily; with this release, we got the answer. But the storage-related enhancements are by no means EMC-specific.

VMware’s feature list comes at a per-processor price that ranges from as low as US$85 to US$3,495, and that doesn’t cover the cost of Acceleration Kits. What you get with vSphere 5 is a tiered feature set that includes stronger storage virtualization options, broad hardware support (in terms of vCPUs per VM, memory, and storage), and new capacity to roll out and life-cycle VMs at a very fast pace.

vSphere helps prevent server sprawl by controlling VMs and their storage as managed objects.

With so many tiers of feature support, the licensing can be confusing; VMware had to send us a cheat sheet so we could keep track of which features are available under which license. The gradations of licensing will require a spreadsheet analysis for most organizations.

IP address issues addressed

We performed both a bare-metal install and an upgrade of our existing VMware vSphere 4 installation. Our small network operations center (100+ cores in six test servers, plus a Dell Compellent SAN) isn’t the best place to hammer vSphere 5, but we were able to give it a bit of a workout. (Note that the classic ESX hypervisor is no longer supported in vSphere 5; only ESXi remains.)

vCenter, vSphere’s central control app, can now be run as a virtual appliance if desired; the appliance runs SUSE Linux and is lightweight. Windows-based equivalents of the management executables are still available if you need them; we didn’t.

The initial upgrades went smoothly, save for the fact that the vSphere installers misidentified the name of our Active Directory domain, a small problem that had us scratching our heads.

There are required steps to upgrade VMware’s virtual switch appliance, and the new approach removes a lot of the IP addressing problems that existed in the prior release.

vSphere rounds into form

IP addressing can be a problem for administrators when moving VMs around, especially from facility to facility, as each site is likely to have its own local addressing scheme and allocation needs.

The prior version of vSphere, while allowing for a bit of location-diverse addressing, didn’t have strong multi-site transparency. The new virtual switch takes care of a lot of the misery for both IPv4 and IPv6 addressing schemes. It’s not quite ideal, and some administrative functions must be done outside the appliance, but its visuals give a clearer cross-site view of addressing needs and allocations.

Thin-provisioning options

We used both our lab and our NOC resources to launch VMs of varying sizes and operating system types – mostly Windows 2003/2008 R2 and Red Hat, CentOS, and Ubuntu Linux. There was no mystery. VM conversions were remarkably easy, save for some important characteristics: we now had up to 32 vCPUs per virtual machine (at added licensing cost), and could see a tremendous amount of oversubscribed (if so configured) memory and storage. (See how we conducted our test.)

It’s possible to thin-provision (oversubscribe, while under-allocating in actuality) almost every operational characteristic of a VM. Doing so has benefits, depending on the settings used, and allows vSphere to make recommendations, or simply move VMs from one server to another, based on actual needs rather than initial estimates.

In doing so, VMware has also checked a box for those needing multi-tenancy options, as thin provisioning permits “elbow room” that can be physically provisioned later when tasks and campaigns mount up.
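To illustrate what thin provisioning looks like at the API level, here is a minimal sketch that adds a thin-provisioned virtual disk to an existing VM using the pyVmomi Python bindings for the vSphere API. The vCenter hostname, credentials, and VM name are placeholders, and the sketch assumes the VM already has a SCSI controller; it is an illustration, not how we ran our tests.

```python
# Sketch: add a thin-provisioned virtual disk to an existing VM via pyVmomi.
# Hostname, credentials, and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.lab", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())  # lab-only: no cert checks
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "centos-test-01")

# Find an existing SCSI controller and the next free unit number on it.
ctrl = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualSCSIController))
unit = max((d.unitNumber for d in vm.config.hardware.device
            if getattr(d, "controllerKey", None) == ctrl.key), default=-1) + 1
if unit == 7:          # unit 7 is reserved for the controller itself
    unit += 1

disk = vim.vm.device.VirtualDisk(
    capacityInKB=20 * 1024 * 1024,   # 20 GB logical size, thin-provisioned
    controllerKey=ctrl.key,
    unitNumber=unit,
    backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        diskMode="persistent", thinProvisioned=True))

spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)])
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)
```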

In other words, less needs to be known about actual server behavior, as VMware can be set to move VMs around to match their execution needs, even when those needs have been capped or throttled by an administrator. Using set guidelines, vSphere will refit VMs onto servers to balance workloads and demands. Control over which VMs go where can be very highly defined and rigid, but the ability to place VMs on hardware servers based on their performance characteristics takes a little time, as it is built on accumulated observations of behavior.

It took nearly a day before vSphere started to move things around; we could have made it more sensitive (so that it would adjust more quickly), but we wanted to see what it would do on its own.
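The knobs involved are DRS’s automation level and migration threshold, which can be set from the client or through the API. Below is a minimal pyVmomi sketch that turns on fully automated DRS for a cluster; the hostnames, credentials, cluster name, and chosen threshold value are all placeholders.

```python
# Sketch: enable fully automated DRS on a cluster and set its migration
# threshold. vmotionRate is the 1-5 threshold knob; consult the API docs
# for which end of the range is most aggressive.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.lab", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "noc-cluster")  # placeholder name

drs = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
    vmotionRate=2)

spec = vim.cluster.ConfigSpecEx(drsConfig=drs)
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```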

We noted that several improvements have been made to both online error messages and VMware’s notoriously obtuse documentation. That said, VMware’s UIs are difficult when accessed through a browser, and error messages can sometimes be missing altogether.

We ascribed part of this to the fact that it was a brand-new release, yet we were occasionally frustrated by browser interaction with the new appliance. We noted that the interface let us change ports, and it used SSL where appropriate. Overall, there was a stronger security feel.

We tested fault tolerance and both automatic and manually suggested VM movement. As we launched certain VMs, we forced them to run make-work applications so we could analyze their CPU use. VMware picks up on CPU load with a bit more sensitivity, we found, but other behavioral characteristics can trigger a move, too.

We decided to attack one Linux app with lots of artificial IP traffic. Almost like a waiter moving customers in a restaurant, the VM was moved across to another server on the same VLAN – whose traffic was essentially nil. Downtime was about four seconds or less in our trials.
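That move was DRS’s own doing; the same kind of live migration can also be requested explicitly through the API. A minimal pyVmomi sketch follows, with placeholder VM and host names.

```python
# Sketch: explicitly request the kind of live migration (vMotion) that DRS
# performed automatically in our test. VM and destination host names are
# placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.lab", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

vm = next(v for v in vms.view if v.name == "linux-traffic-test")
dest = next(h for h in hosts.view if h.name == "esxi-02.example.lab")

# Live-migrate the running VM to the quieter host; its storage stays put.
vm.MigrateVM_Task(host=dest,
                  priority=vim.VirtualMachine.MovePriority.defaultPriority)
Disconnect(si)
```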

Advanced storage features

More interesting, however, is how our Dell Compellent SAN resources can be used; we tested them without the soon-to-be-delivered glue software from Dell written specifically for VMware vSphere 5. These capabilities are also potentially expensive to use, depending on needs and the license type chosen:

High Availability, vMotion (move those VMs around), and Data Recovery are in the Standard and Advanced editions, which range from US$395 to US$995 and are limited to eight vCPUs per VM.

Add in the Virtual Serial Port Concentrator (a Luddite but useful feature), Hot Add (CPUs, memory, vDisks), vShield Zones (network security zoning), Fault Tolerance (detect and fail over), Storage APIs for Array Integration, Storage vMotion (move your VMs and/or their storage live), and the Distributed Resource Scheduler and Distributed Power Management, and you’ve hit the vSphere 5 Enterprise license. That’s US$2,875 per processor, limited to eight vCPUs per VM.

If you go all the way to vSphere Enterprise Plus at US$3,495 per processor, you can graduate to 32 vCPUs per VM and add the aforementioned Distributed Switch, I/O controls for network and storage, Host Profiles, Profile-Driven Storage, Auto Deploy (intelligent, nearly automagic host provisioning), and the Storage Distributed Resource Scheduler (Storage DRS).

For mission-critical applications, Storage DRS alone may be worth the price of admission. When a compatible array is used, disk resources can be grouped as an object, and the whole object (active disks and all) can be moved to another part of the array. This means aggregated infrastructure can be relocated wholesale, without an outage, as an object, perhaps guided by administratively selected fault detection or simply the need for maintenance.
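Storage DRS automates such moves; the primitive underneath is a live storage relocation (Storage vMotion), which can also be invoked directly through the API. Here is a minimal pyVmomi sketch with placeholder VM and datastore names.

```python
# Sketch: a live storage relocation (Storage vMotion) invoked directly via the
# API; Storage DRS automates moves of this kind. VM and datastore names are
# placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.lab", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
stores = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

vm = next(v for v in vms.view if v.name == "win2008r2-erp")
target = next(d for d in stores.view if d.name == "compellent-tier2")

# Move only the VM's disks; the host and resource pool are left untouched.
spec = vim.vm.RelocateSpec(datastore=target)
vm.RelocateVM_Task(spec=spec)
Disconnect(si)
```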

As our Dell Compellent SAN lacked the new drivers, we were unable to perform the heavy lifting promised. You’ll need a high-performance SAN transport to move the data around: Fibre Channel at minimum, though other interfaces like InfiniBand ought to do well, especially for disparate or thickly populated array-object moves. Protocols like iSCSI (unless run over a dedicated and unfettered 10Gbps link) are unlikely to be useful except where transaction times will be small (i.e., not much data to move).

Yet at the bottom end of things, VMware’s High Availability still works marvelously. Moving VMs from host to host, and back and forth between the NOC and the lab, worked flawlessly, if somewhat encumbered by the variable latency of our Comcast link to the NOC. All of VMware’s competitors can now do this trick as a baseline feature, but it’s part of VMware’s DNA and it shows.

From a practical perspective, most of VMware’s competitors can match these minimums, but some of them suffer from OS version or brand fixation and lack egalitarian guest support. Others that do have egalitarian OS support have weak storage management and weak overall virtualized data center/cloud support.

VMware’s vSphere covers all of the bases as close to the state of the art as any production software we’ve seen. It’s still wickedly expensive, and it’s the one to beat.

Henderson is managing director and Brendan Allen is a researcher for ExtremeLabs, of Bloomington, Ind. Henderson can be reached at [email protected].
