
How virtualisation is lifting us to the cloud

David Cauthron | July 14, 2014
Server virtualisation has been a huge win for the data center. Nimboxx CTO David Cauthron explains how the next phase will deliver dramatic benefits in the cloud

Over the past decade, the whole world seems to have embraced virtualization. Is there nothing left to conquer? Hardly. Virtualization technology itself is changing fast, and the right solutions for legacy application support and for migrating to modern applications can be tough to find.

This week in the New Tech Forum, David Cauthron, co-founder and CTO of Nimboxx, gives us a bit of virtualization history, explains how it relates to the current reality of the commodity hypervisor, and offers his take on where it's all going from here. — Paul Venezia

The hypervisor is a commodity — so where do we go from here?
Virtualizing physical computers is the backbone of public and private cloud computing from desktops to data centers, enabling organizations to optimize hardware utilization, enhance security, support multitenancy, and more.

Early virtualization methods were rooted in emulating CPUs, such as the x86 on a PowerPC-based Mac, enabling users to run DOS and Windows. Not only did the CPU need to be emulated, but so did the rest of the hardware environment, including graphics adapters, hard disks, network adapters, memory, and interfaces.

In the late 1990s, VMware introduced a major breakthrough in virtualization, a technology that let the majority of the code execute directly on the CPU without needing to be translated or emulated.

Prior to VMware, two or more operating systems running on the same hardware would simply corrupt each other as they vied for physical resources and attempted to execute privileged instructions. VMware intelligently intercepted these types of instructions, dynamically rewriting the code and storing the new translation for reuse and fast execution.

In combination, these techniques ran much faster than previous emulators and helped define x86 virtualization as we know it today — including the old mainframe concept of the "hypervisor," a platform built to enable IT to create and run virtual machines.
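As a rough illustration of the caching idea, here is a minimal, purely hypothetical Python sketch (not VMware's actual implementation, which operates on raw x86 machine code). It treats a block of guest code as a list of symbolic instructions, rewrites the "privileged" ones into calls to the virtual machine monitor, and stores the translated block so that repeated execution reuses the translation instead of redoing it.

# Toy model of dynamic binary translation with a translation cache.
# Instruction names and vmm_emulate() are hypothetical, for illustration only.

PRIVILEGED = {"cli", "sti", "out"}   # instructions the guest must not run directly
translation_cache = {}               # block id -> translated instruction list

def translate_block(block_id, instructions):
    """Rewrite privileged instructions into monitor calls and cache the result."""
    if block_id in translation_cache:
        return translation_cache[block_id]            # fast path: reuse prior translation
    translated = []
    for ins in instructions:
        if ins in PRIVILEGED:
            translated.append(f"vmm_emulate({ins})")  # hand the instruction to the monitor
        else:
            translated.append(ins)                    # unprivileged code passes through unchanged
    translation_cache[block_id] = translated
    return translated

print(translate_block("boot", ["mov", "cli", "add", "out"]))
print(translate_block("boot", ["mov", "cli", "add", "out"]))  # second call is served from the cache

The second call skips translation entirely, which is the point of the cache: translate once, run many times at close to native speed.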

The pivotal change
For years, VMware and its patents ruled the realm of virtualization. On the server side, running on bare metal, VMware's ESX became the leading Type 1 (or native) hypervisor. On the client side, running within an existing desktop operating system, VMware Workstation was among the top Type 2 (or hosted) hypervisors.

No longer a technology just for developers or cross-platform software usage, virtualization proved itself as a powerful tool to improve efficiency and manageability in data centers by putting servers in fungible virtualized containers.

Over the years, some interesting open source projects emerged, including Xen and QEMU (Quick EMUlator). Neither was as fast or as flexible as VMware, but they set a foundation that would prove worthy down the road.

Around 2005, AMD and Intel created new processor extensions to the x86 architecture that provided hardware assistance for dealing with privileged instructions. Called AMD-V and VT-x by AMD and Intel respectively, these extensions changed the landscape, eventually opening server virtualization to new players. Soon after, Xen leveraged these new extensions to create hardware virtual machines (HVMs) that used the device emulation of QEMU with hardware assistance from the Intel VT-x and AMD-V extensions to support proprietary operating systems like Microsoft Windows.
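On a Linux machine you can see whether the processor advertises these extensions, since the kernel exposes them as CPU feature flags: "vmx" for Intel VT-x and "svm" for AMD-V. The short Python sketch below assumes the usual /proc/cpuinfo layout.

# Check for hardware virtualization support on Linux by reading CPU feature flags.
def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"Intel VT-x": "vmx" in flags, "AMD-V": "svm" in flags}

if __name__ == "__main__":
    print(hardware_virtualization_flags())

If neither flag is present (or the extensions are disabled in firmware), a hypervisor falls back to software techniques such as the binary translation described above.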

 
