Virtualization in computer science refers to the abstraction of computing resources such as processor cycles, memory, applications, and network resources. In practice, however, the term usually refers to platform virtualization, that is, computer or operating-system virtualization (Tulloch, 2010).
The concept of virtualization was first discussed in 1959 by Christopher Strachey in his paper “Time Sharing in Large Fast Computers”. In the early 1960s, IBM began to explore the concept with its M44/44X and CP-40 research systems, which led to the commercial CP-67/CMS system. The virtual machine concept separated users from one another while simulating a full standalone computer system for each. In the 1980s and 1990s the industry moved away from centralized mainframes toward clusters of smaller, cheaper x86 servers, and virtualization fell out of favor. It regained popularity in 1999, when VMware introduced VMware Workstation, later followed by VMware’s ESX Server, which ran directly on the hardware without requiring a host operating system. Many more virtualization products have since been developed, such as Microsoft’s Hyper-V, VMware vSphere, and Citrix XenServer (Delap, 2008). The virtualization products in the industry today employ a variety of concepts, including network virtualization, server virtualization, storage virtualization, application infrastructure virtualization, and desktop/client virtualization. For the scope of this paper, desktop virtualization is discussed (Delap, 2008).
Desktop Virtualization and VDI (Virtual Desktop Infrastructure).
Desktop virtualization refers to two complementary virtualization technologies: client-hosted and server-based virtualization. Both encapsulate a standard desktop operating system in a virtual machine (VM) that users can access. In client-hosted virtualization, the VM resides and runs on the client machine, which runs its own operating system and virtualization software. In server-based desktop virtualization, multiple VMs run on the server and users receive a remote display via a PC or a thin client. The choice of technology depends heavily on the organization’s requirements. For example, the server-based solution is well suited to securing enterprise data, while the client-hosted solution suits environments where network bandwidth is limited, portability is desired, or offline use is required (Petrović & Fertalj, 2009).
The Virtual Desktop Infrastructure (VDI) is the desktop virtualization architectural model that enables client operating systems to run in server-based VMs. The client desktops can run on desktop PCs and thin clients, while execution, management, and storage of the virtual desktops take place in a data center. A typical VDI consists of a hypervisor, virtual machines (VMs), a VM manager, and a connection broker. The hypervisor is the software that hosts and enables virtualization of desktop images. Some hypervisors run directly on the host hardware (bare-metal), while others run as applications on a host operating system, with the guest OS running at a third level of abstraction above the hardware. The VM manager software allows for easy management of virtual machines: users can create, start, and stop VMs, view and control each VM console, and monitor performance. The connection broker (virtual desktop manager) is an application that manages requests from software and thin clients to the virtual desktop pools located on different network servers. The virtual machine is software that emulates a computer and executes tasks in a similar manner, except that the VM is isolated from the hardware. Virtual machines can be process-based or system-based: process VMs run a single program, while system VMs run a complete operating system (Petrović & Fertalj, 2009).
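The connection broker's role described above can be illustrated with a minimal sketch. All class and method names here are hypothetical, chosen for illustration only; real brokers in commercial VDI products expose far richer APIs for pools, sessions, and load balancing.

```python
# Illustrative sketch (not a real VDI product API): a minimal connection
# broker that maps client requests to virtual desktops drawn from
# per-server VM pools, reconnecting users to existing sessions.

class VirtualMachine:
    def __init__(self, vm_id, host):
        self.vm_id = vm_id
        self.host = host          # network server hosting this VM
        self.assigned_to = None   # user currently connected, if any

class ConnectionBroker:
    """Routes client requests to available VMs in the desktop pools."""

    def __init__(self):
        self.pools = {}  # pool name -> list of VirtualMachine

    def add_pool(self, name, vms):
        self.pools[name] = list(vms)

    def request_desktop(self, user, pool_name):
        # Reconnect the user to an existing session if one exists.
        for vm in self.pools[pool_name]:
            if vm.assigned_to == user:
                return vm
        # Otherwise hand out the first free VM in the pool.
        for vm in self.pools[pool_name]:
            if vm.assigned_to is None:
                vm.assigned_to = user
                return vm
        raise RuntimeError(f"no free desktops in pool '{pool_name}'")

    def release_desktop(self, vm):
        vm.assigned_to = None

broker = ConnectionBroker()
broker.add_pool("finance", [VirtualMachine(f"vm-{i}", "srv-01") for i in range(2)])

vm = broker.request_desktop("alice", "finance")
same = broker.request_desktop("alice", "finance")
print(vm is same)  # a repeat request reconnects to the same session: True
```

The sketch captures only the brokering logic; in a real deployment the broker would also authenticate users and hand the client a display-protocol endpoint rather than a VM object.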
Virtual Desktop Infrastructure (VDI) Implementation.
VDI core architectures can be static or dynamic, depending on organizational needs. The static mode is preferred when each user is required to have a unique VM, i.e. a one-to-one mapping of VMs to users. Each user has their own fixed VM, and adding users requires creating more VMs. The VMs are stored on network-attached storage (NAS) or storage area networks (SANs) and are presented to users via thin clients or PCs. The dynamic VDI architecture uses a single master virtual machine image that resides on the hypervisor. Whenever a new user enters the system, the VDI automatically replicates the image for the new user, and applications are distributed according to user profiles and profile access rights. User data is stored on a central server using folder redirection, and since only a single master VM image is used, management overhead and support costs are reduced, while virtual desktop environments are provisioned dynamically on demand (Petrović & Fertalj, 2009).
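The dynamic provisioning flow above can be sketched as follows. The names and the profile table are hypothetical, and real platforms clone the master image with linked clones or copy-on-write storage rather than full copies; a plain deep copy stands in for that step here.

```python
# Illustrative sketch of dynamic VDI provisioning: a single master image
# is replicated on demand for each new user, and the user's profile
# (access rights) determines which applications the clone receives.
import copy

MASTER_IMAGE = {"os": "desktop-os", "apps": ["browser"]}

# Hypothetical profile table: access rights drive app distribution.
PROFILES = {
    "developer": ["ide", "compiler"],
    "accounting": ["spreadsheet", "erp-client"],
}

def provision_desktop(user, role):
    """Replicate the master image and layer on profile-specific apps."""
    image = copy.deepcopy(MASTER_IMAGE)            # clone the master image
    image["apps"] = image["apps"] + PROFILES.get(role, [])
    # Folder redirection: user data lives on a central server, not in the VM.
    image["user_data"] = f"\\\\fileserver\\homes\\{user}"
    return image

desktop = provision_desktop("alice", "developer")
print(desktop["apps"])  # ['browser', 'ide', 'compiler']
```

Because every clone derives from the one master image, patching or reconfiguring the master propagates to all newly provisioned desktops, which is the source of the reduced management overhead noted above.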
Benefits of VDI: Advantages and Disadvantages.
VDI has several benefits and limitations. The benefits include improved data security, since data is stored on data center servers rather than on the remote access devices such as PCs and thin clients. Management overhead is also reduced, since there is a variety of tools and configuration options for VDIs, as opposed to managing hundreds of physical devices with different specifications and configurations, and VMs are easier to start, stop, and control than physical devices. VDIs also support compliance with policies, laws, and regulations regarding information use, since VDI systems are based on uniformly configured virtual machine templates. The costs associated with desktop virtualization and VDI deployment are also reduced: there is no need for hardware upgrades, old physical PCs can be repurposed as servers, and energy costs fall because the number of physical devices is minimized. VDI is also robust, since virtual machine images can be moved or copied easily, while system snapshots allow users to roll virtual desktops back to previously stable states, enabling quick data recovery and overall system flexibility. Finally, VDI simplifies integration, since VMs can run on modest hardware and there is thus no need to purchase expensive hardware for VDI implementation (Petrović & Fertalj, 2009).
The disadvantages of desktop virtualization include performance degradation (virtual machines perform worse than physical devices), lower graphics quality, poor multimedia performance, poor support for peripherals, the fact that not everything can be virtualized, complex root-cause analysis when problems arise (since all VMs are uniform), and unsuitability for offline use (Petrović & Fertalj, 2009).
Desktop virtualization trends and statistics.
A survey by the International Data Group (IDG) in 2008 showed that desktop virtualization had grown in popularity due to cost reduction and ease of management, and that 50% of companies that had implemented VDI reported that their expectations had been met (Wolf & Halter, 2008). Regarding the benefits of VDI implementation, 54% of respondents cited reduced costs, 54% cited better management, and 52% cited flexibility and the ability to provision client services and PCs with centrally located software (Wolf & Halter, 2008). The need to invest in a good hypervisor was also identified by Tulloch (2010) and Dittner and Rule (2007). Examples of hypervisors include VMware ESX, Citrix XenServer, and Microsoft Hyper-V. Most of these hypervisors use Microsoft’s Remote Desktop Protocol (RDP) for handling client-server connections, while the rest use proprietary data compression and optimization protocols and techniques (Tulloch, 2010).
In conclusion, it is evident that virtualization technologies have evolved over the years into the systems in use today. In particular, desktop virtualization offers many benefits to an organization but also has its drawbacks. It is therefore important that organizations understand their goals and objectives before deciding which technology to implement, whether the aim is cost reduction, simpler system administration, user flexibility, or keeping up with trends and the competition.
- Delap, S. (2008). An Introduction to Virtualization. InfoQ. Retrieved 30 March 2015, from http://www.infoq.com/articles/virtualization-intro
- Dittner, R., & Rule, D. (2007). Best damn server virtualization book period. Burlington, MA: Syngress.
- Petrović, & Fertalj, K. (2009). Demystifying desktop virtualization. In Proceedings of the 9th WSEAS International Conference on Applied Computer Science (pp. 241-246). Stevens Point, Wisconsin, USA: World Scientific and Engineering Academy and Society (WSEAS).
- Tulloch, M. (2010). Understanding Microsoft Virtualization solutions. Redmond, WA: Microsoft Press.
- Wolf, C., & Halter, E. (2008). Virtualization: From the Desktop to the Enterprise. Berkeley, CA: Apress.