Many of today’s cutting-edge technologies, such as cloud computing, edge computing, and microservices, owe their inception to the virtual machine concept, which separates operating systems and software instances from the underlying physical computer.
What is a virtual machine?
A virtual machine (VM) is software that runs programs or applications without being tied to a physical machine. In a virtualized environment, one or more virtual “guest” machines run on a physical “host” computer.
Each virtual machine has its own operating system and operates separately from other virtual machines, even if they are on the same physical host. Virtual machines typically run on servers, but they can also run on desktop systems or even embedded platforms. Multiple virtual machines can share resources from a physical host, including CPU cycles, network bandwidth, and memory.
Virtual machines have their origins in the early days of computing in the 1960s, when time-sharing gave mainframe users a way to separate software from the physical host system. In a 1974 paper, Gerald J. Popek and Robert P. Goldberg defined a virtual machine as “an efficient, isolated duplicate of a real machine.”
Virtual machines as we know them today have gained traction over the past 20 years as enterprises adopted server virtualization to utilize the computing power of their physical servers more efficiently, reducing the number of physical servers and saving data center space. Because applications with different operating system requirements could run on a single physical host, no separate server hardware was required for each.
How do virtual machines work?
In general, there are two types of virtual machines: process virtual machines, which isolate a single process, and system virtual machines, which provide complete separation of the operating system and applications from the physical computer. Examples of process virtual machines include the Java Virtual Machine, the .NET Common Language Runtime, and the Parrot virtual machine.
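As a concrete illustration, CPython, the standard Python interpreter, is itself a process virtual machine: it compiles source code into bytecode for a software-defined instruction set and then interprets that bytecode. The standard library’s dis module makes the mechanism visible:

```python
import dis

def add(a, b):
    return a + b

# Show the platform-independent bytecode that CPython's
# process VM executes in place of native machine code.
dis.dis(add)
# Typical output (opcode names vary by Python version):
#     LOAD_FAST    a
#     LOAD_FAST    b
#     BINARY_OP    0 (+)
#     RETURN_VALUE
```

The same bytecode runs unchanged on any host with a compatible interpreter, which is precisely the portability a process VM is meant to provide.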
System virtual machines rely on a hypervisor, an intermediary that gives software access to hardware resources. The hypervisor emulates the computer’s CPU, memory, hard disk, network, and other hardware resources, creating a pool of resources that can be allocated to individual virtual machines according to their specific requirements. The hypervisor can support multiple virtual hardware platforms that are isolated from each other, allowing virtual machines to run Linux and Windows Server operating systems on the same physical host.
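As a rough illustration of that resource pooling, here is a minimal sketch using the libvirt Python bindings against a local KVM hypervisor. The domain name and the vCPU and memory figures are invented for the example, and a real guest would also need disk and network devices:

```python
import libvirt

# Illustrative guest definition: ask the hypervisor to carve 2 vCPUs
# and 2 GiB of RAM out of the host's physical resource pool.
# (A production domain would also declare disk and network devices.)
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the guest definition
dom.create()                           # boot the virtual machine

# Ask the hypervisor what it actually allocated to this guest.
state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
print(f"vCPUs: {vcpus}, memory: {mem_kib // 1024} MiB")

conn.close()
```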
Big names in the hypervisor space include VMware (ESX/ESXi), the Linux Foundation’s Xen Project, Oracle (Oracle VM Server for SPARC and Oracle VM Server for x86), and Microsoft (Hyper-V).
Desktop computing systems can also use virtual machines. An example here would be a Mac user running a virtual instance of Windows on their physical Mac hardware.
What are the two types of hypervisors?
The hypervisor manages the resources and allocates them to the virtual machines. It also schedules and adjusts how resources are distributed based on how the hypervisor and virtual machines have been configured, and can reallocate resources as demands fluctuate. Most hypervisors fall into one of two categories:
- Type 1. A bare metal hypervisor runs directly on the physical host machine and has direct access to its hardware. Type 1 hypervisors generally run on servers and are considered more efficient and better performing than Type 2 hypervisors, making them well suited for server, desktop, and application virtualization. Examples of Type 1 hypervisors include Microsoft Hyper-V and VMware ESXi.
- Type 2. Sometimes called a hosted hypervisor, a Type 2 hypervisor is installed on top of the host machine’s operating system, which manages calls to hardware resources. Type 2 hypervisors are typically deployed on end-user systems for specific use cases. For example, a developer might use a Type 2 hypervisor to create a specific environment for building an application, or a data analyst might use it to test an application in an isolated environment. Examples include VMware Workstation and Oracle VirtualBox.
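Modern hypervisors of both types generally depend on the host CPU’s hardware virtualization extensions, Intel VT-x or AMD-V. As a minimal sketch, assuming a Linux host where /proc/cpuinfo is readable, you can check whether a machine can run either kind of hypervisor by looking for the corresponding CPU flags:

```python
# Minimal sketch: detect hardware virtualization support on a Linux host.
# "vmx" is the Intel VT-x CPU flag; "svm" is the AMD-V flag.
def virtualization_support(cpuinfo_path: str = "/proc/cpuinfo") -> str:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
    return "none detected"

print("Hardware virtualization:", virtualization_support())
```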
What are the advantages of virtual machines?
Because the software is separate from the physical host computer, users can run multiple instances of the operating system on a single piece of hardware, saving the company time, management costs, and physical space. Another advantage is that virtual machines can support legacy applications, reducing or eliminating the need and cost of migrating an older application to an updated or different operating system.
In addition, developers use virtual machines to test applications in a secure, sandboxed environment. Developers who want to see whether their applications will run on a new operating system can test them in a virtual machine rather than buying the new hardware and operating system ahead of time. Microsoft, for example, offers free evaluation virtual machines that let developers download and test Windows 11 without upgrading a primary computer.
Virtual machines can also help isolate malware. Because software running inside a virtual machine cannot directly alter the host computer, malicious software that infects a guest is contained and cannot do as much damage.
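Snapshots make this kind of sandboxing practical. A hedged sketch, assuming a libvirt-managed KVM guest named “sandbox” whose qcow2 disk supports internal snapshots, might look like this:

```python
import libvirt

SNAPSHOT_XML = "<domainsnapshot><name>clean</name></domainsnapshot>"

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("sandbox")    # hypothetical guest name

# Capture the pristine state before any risky testing.
dom.snapshotCreateXML(SNAPSHOT_XML, 0)

# ... run the suspect application inside the guest ...

# Throw away everything that happened since the snapshot,
# including any changes malware may have made to the guest.
snap = dom.snapshotLookupByName("clean", 0)
dom.revertToSnapshot(snap, 0)

conn.close()
```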
What are the disadvantages of virtual machines?
Virtual machines have some disadvantages. Running multiple virtual machines on a single physical host can cause unstable performance, especially if the infrastructure requirements for a given application are not met, and virtualization overhead makes a VM less efficient in many cases than an equivalent physical computer.
And if the physical server fails, every application running on it goes down with it. Most IT shops therefore strike a balance between physical and virtual systems.
What are some other forms of virtualization?
The success of virtual machines in server virtualization led to virtualization being applied to other areas, including storage, networking, and desktops. If a type of hardware is being used in the data center, chances are someone is exploring how to virtualize it (application delivery controllers, for example).
In network virtualization, enterprises have explored network-as-a-service options and network functions virtualization (NFV), which uses commodity servers to replace specialized network devices and enables more flexible and scalable services. NFV differs somewhat from software-defined networking, which separates the network control plane from the forwarding plane to allow more automated provisioning and policy-based management of network resources. A third technology, virtual network functions (VNFs), covers the software-based services that run in an NFV environment, including routing, firewalls, load balancing, WAN acceleration, and encryption.
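To give a flavor of what a network function in software looks like, here is a toy round-robin TCP load balancer in Python, the sort of job NFV moves off a dedicated appliance and onto a commodity server. The listening port and backend addresses are invented for the illustration:

```python
import itertools
import socket
import threading

# Hypothetical application servers behind the balancer.
BACKENDS = [("127.0.0.1", 9001), ("127.0.0.1", 9002)]
pool = itertools.cycle(BACKENDS)

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until either side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def handle(client: socket.socket) -> None:
    # Pick the next backend in round-robin order and splice the streams.
    backend = socket.create_connection(next(pool))
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 8080))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        handle(conn)
```

A production VNF would add health checks, TLS termination, and a management API, but the core job, steering traffic on general-purpose hardware, is the same.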
Verizon, for example, uses NFV to power its virtual network services that allow customers to activate new services and capabilities on demand. Services include virtual appliances, routing, software-defined WAN, WAN optimization, and even Session Border Controller as a Service (SBCaaS) to centrally manage and securely deploy IP-based real-time services such as VoIP and unified communications.
Virtual machines and containers
The growth of virtual machines has led to further development of technologies such as containers, which take the concept a step further and are gaining appeal among web application developers. In a container configuration, a single application is virtualized along with its dependencies. With much less overhead than a virtual machine, a container includes only the application and its binaries and libraries, sharing the host’s operating system kernel.
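The difference in overhead shows in how little it takes to start one. A minimal sketch using the Docker SDK for Python, assuming the docker package is installed and a Docker daemon is running locally:

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# The container packages one application plus its libraries and shares
# the host's kernel instead of booting a full guest operating system.
output = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,  # discard the container when the process exits
)
print(output.decode())
```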
While some predict that containers will kill off the virtual machine, VMs offer enough capabilities and benefits to keep the technology moving forward. For example, virtual machines remain useful when running multiple applications together or when running legacy applications on older operating systems.
In addition, some consider containers less secure than VM hypervisors: the applications in containers all share a single operating system kernel, while a virtual machine isolates both the application and its operating system.
Gary Chen, research manager for IDC’s Software-Defined Compute division, says the VM software market remains a core technology, even as customers explore cloud architectures and containers. “The virtual machine software market has been remarkably resilient and will continue to grow positively over the next five years, despite being very mature and approaching saturation,” Chen writes in IDC’s Worldwide Virtual Machine Software Forecast, 2019-2022.
Virtual machines, 5G and edge computing
Virtual machines are considered part of new technologies such as 5G and edge computing. For example, virtual desktop infrastructure (VDI) vendors such as Microsoft, VMware, and Citrix are looking for ways to extend their VDI systems to employees who now work at home as part of a post-COVID hybrid model.
“With VDI, you need extremely low latency because you’re sending your keystrokes and mouse movements basically to a remote desktop,” says Mahadev Satyanarayanan, a professor of computer science at Carnegie Mellon University. In 2009, Satyanarayanan wrote about how virtual machine-based clouds could be used to provide better processing capabilities to mobile devices at the edge of the Internet, which led to the development of edge computing.
In the 5G wireless space, network slicing uses software-defined networking and NFV technologies to install network functionality in virtual machines on a virtualized server, providing services that previously ran only on proprietary hardware.
Like many other technologies in use today, these emerging innovations would not have been developed had it not been for the original VM concepts introduced decades ago.
Keith Shaw is a freelance digital journalist who has been writing about the IT world for over 20 years.