Historically, the component that has most limited the number of virtual desktops you can run on blade servers has been the amount of RAM. The UCS B250 blades support up to 384 GB of memory across two processors (or 192 GB if you are reducing cost by using 4 GB DIMMs instead of 8 GB) and provide 40 Gbps of I/O throughput per blade. This effectively raises desktop density to the maximum capacity of the two Westmere processors. Using Hyper-V's currently supported core overcommit limit (8:1) as a ballpark figure, that works out to around 192 virtual machines, each with about 1.5 GB of RAM, on a fully loaded blade.
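The 192-desktop ballpark above is easy to sanity-check. A minimal sketch of the arithmetic, assuming two 6-core Westmere processors with Hyper-Threading enabled (my assumption, not a Cisco sizing guide) and Hyper-V's 8:1 overcommit per logical processor:

```python
import math

# Assumed hardware: two 6-core Westmere-EP sockets with Hyper-Threading.
SOCKETS = 2
CORES_PER_SOCKET = 6
THREADS_PER_CORE = 2           # Hyper-Threading doubles logical processors
OVERCOMMIT = 8                 # Hyper-V supported virtual:logical ratio

logical_processors = SOCKETS * CORES_PER_SOCKET * THREADS_PER_CORE   # 24
max_vms = logical_processors * OVERCOMMIT                            # 192

# Memory ceiling per VM on a fully loaded 384 GB blade; the ~1.5 GB
# figure in the text leaves headroom for the hypervisor itself.
RAM_GB = 384
ram_ceiling_per_vm = RAM_GB / max_vms                                # 2.0 GB

print(max_vms, ram_ceiling_per_vm)
```

So CPU overcommit, not memory, is the binding constraint at 1.5 GB per desktop: 192 desktops consume 288 GB, well under the 384 GB ceiling.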
The UCS management platform is designed so that administrative control over each component can be delegated separately. For instance, the networking and storage components can be managed independently by individuals with expertise in each area, even though the components are integrated into a single computing system. Furthermore, the QoS policies affecting both network and storage traffic can be managed separately and associated with different levels of service based on the server's organizational membership. Configuration and operational policies can be set at the local and global levels, providing granular control over every object in the system.
When I think of the cloud, I think of unlimited computing resources that can assume any identity necessary for the duration of my need. After watching how the UCS system manages its resources, I believe it comes closer to that ideal than any other system I have seen so far. UCS has a stateless management model in which configuration, access, and policy come from the device's location within the hierarchical management system. Through service profiles and identity pools (MAC, WWPN, WWNN, UUID), a device can be discovered, configured, and brought online with almost no effort at installation time. That configuration can go all the way down to the BIOS version of each component within the blade. The same device can be moved, or reconfigured in place, and take on a completely different identity within the time it takes to boot, all with a few mouse clicks.
UCS was designed with virtualization in mind. The blades start with Intel's virtualization-capable processor line, including the multi-core Nehalem and Westmere processors. The motherboard chipsets use Intel virtualization technologies such as VT-d (Directed I/O) and VT-c (Connectivity). Platform technologies like Virtual Machine Direct Connect (VMDc) and Virtual Machine Device Queues (VMDq) underpin support for Hyper-V (VMQ, SR-IOV), XenServer (SR-IOV), and VMware (VN-Link, NetQueue, VMDirectPath) virtualization features. Support for these features means the hypervisors can extract the maximum possible performance from the hardware.
Finally, the UCS system integrates with a myriad of third-party management systems through an open XML API. Companies like BMC, CA, EMC, IBM, and Microsoft have already begun integrating their products directly with the UCS management system. In fact, Cisco offers an emulator that can be used for product development without the need to purchase and configure hardware during the development cycle. Some of these integrations take the dynamic data center to the next level by fully automating the provisioning of new systems based on performance and management analytics.
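To give a feel for that XML API, here is a minimal sketch of opening a session and listing service profiles. The method names (`aaaLogin`, `configResolveClass`) and the `lsServer` class come from Cisco's published UCS Manager XML API; the host, user, and password below are placeholders, not real endpoints.

```python
import urllib.request
import xml.etree.ElementTree as ET

UCSM_URL = "https://ucsm.example.com/nuova"  # placeholder UCS Manager address

def login_body(user: str, password: str) -> str:
    """Request body that opens an API session; the reply carries a cookie."""
    return f'<aaaLogin inName="{user}" inPassword="{password}" />'

def resolve_class_body(cookie: str, class_id: str) -> str:
    """Request body that enumerates every object of a given class."""
    return (f'<configResolveClass cookie="{cookie}" '
            f'classId="{class_id}" inHierarchical="false" />')

def extract_cookie(login_response: str) -> str:
    """Pull the session cookie out of the aaaLogin response document."""
    return ET.fromstring(login_response).attrib["outCookie"]

def post(body: str) -> str:
    """POST a raw XML document to UCS Manager and return the reply."""
    req = urllib.request.Request(UCSM_URL, data=body.encode(), method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Typical flow (requires a live UCS Manager or the Cisco emulator):
#   cookie   = extract_cookie(post(login_body("admin", "password")))
#   profiles = post(resolve_class_body(cookie, "lsServer"))
```

The same two-call pattern (authenticate, then resolve a class) is what the Cisco emulator mentioned above lets you exercise without any hardware.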
UCS and XenDesktop
So this all sounds great…but what does it have to do with a XenDesktop deployment in the cloud? Good question, let’s look at how these UCS features can be applied to a XenDesktop cloud environment.
With UCS technology you can replace a failed hypervisor server blade with a brand-new blade, and by the end of the boot cycle the new blade will have assumed the previous blade's identity (hostname, MAC address, IP address, storage WWN, etc.), complete with access to the virtual machines previously hosted on that blade. One nice thing about UCS is that after the boot cycle, even the hardware on the blade (network adapter, storage adapters, mainboard, etc.) will be running the same BIOS and firmware versions as the previous occupant, guaranteeing image compatibility. I don't know about you, but I can see how this would help minimize downtime for my XenDesktop cloud and improve my SLA statistics.
Workflow automation tools like Citrix Workflow Studio and Microsoft's Opalis have not yet been fully integrated with UCS and XenDesktop, but once they have been, you will be able to bring a new level of SLA to your cloud. Consider that with these tools a hypervisor could be brought online and automatically receive 150 desktops, driven by business logic that monitors the utilization of the XenDesktop farm and the number of available desktops for the workload. Conversely, as demand for desktops decreases (more enter the idle state), idle and live desktops can be migrated to separate servers so the hypervisors left hosting only idle desktops can be shut down. Furthermore, the workflow tools would allow you to create business logic that automatically moves desktops from one server or chassis to another based on QoS commitments, without reconfiguring administrative access to the new server or desktop regardless of where it sits in the architecture.
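The scaling decision such a workflow would make can be sketched as a small piece of business logic. Everything here is illustrative and mine, not a Workflow Studio or Opalis API: the 150-desktops-per-host figure follows the example above, and the idle buffer is an assumed policy knob.

```python
import math

DESKTOPS_PER_HOST = 150   # capacity per hypervisor, from the example above

def hosts_needed(active_desktops: int, idle_buffer: int = 50) -> int:
    """Hosts required to serve current demand plus a spare-desktop buffer.

    The idle_buffer keeps a pool of ready desktops so logons never wait
    for a host to boot (an assumed policy, not a Citrix default).
    """
    return math.ceil((active_desktops + idle_buffer) / DESKTOPS_PER_HOST)

def scaling_action(active_desktops: int, powered_on_hosts: int) -> int:
    """Positive: boot that many blades; negative: drain and shut down."""
    return hosts_needed(active_desktops) - powered_on_hosts
```

For example, with 250 active desktops and 4 powered-on hosts the logic returns -2: live desktops are consolidated onto two blades and the other two are shut down, which is exactly the consolidate-and-power-off behavior described above.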
All this functionality is provided in a fault-tolerant, highly redundant configuration with world-class networking support. I have to say it seems like the UCS platform was designed to host XenDesktop. If you want to know how XenDesktop does on the UCS platform, check out the hypervisor reports for XenServer and VMware vSphere.