Why did we need Containers when we had Virtual Machines?

Virtual machines (VMs) and containers are both virtualization technologies; they differ in the level at which the virtualization is performed.

Virtualization

Traditionally, a computer ran a single environment on one OS and one set of physical infrastructure, which often left resources under-utilized. To counter this problem and make better use of the resources at hand, virtualization was introduced, first through virtual machines (VMs) and later in the form of containers. Virtualization is an abstraction that enables a user to run multiple isolated environments on a single set of hardware.

Virtual machines

A virtual machine (VM) is software that performs virtualization at the hardware level by emulating a complete computing system. A VM provides a strong abstraction and lets a guest OS run on top of the host OS.

A VM sits between the OS layer and the infrastructure layer, and each VM runs its own guest OS. The hypervisor, which may be software, firmware, or hardware, sits between the VMs and the infrastructure and plays an integral role in virtualization: it allocates processor, storage, and memory resources among the VMs. Each VM also carries its own libraries, binaries, and applications. For example, a Windows host can run a full copy of another OS, such as Ubuntu, inside a VM.
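
The toy model below (a minimal sketch, not a real hypervisor; the names and numbers are purely illustrative) captures this allocation role in Python: the host's CPUs and memory are carved into fixed, dedicated slices, one per VM, and each VM also carries a full guest OS that is not shared with any other VM.

```python
# Toy model of a hypervisor's resource-allocation role (illustrative only).

class Hypervisor:
    def __init__(self, total_cpus: int, total_mem_gb: int):
        self.free_cpus = total_cpus
        self.free_mem_gb = total_mem_gb
        self.vms = {}

    def create_vm(self, name: str, cpus: int, mem_gb: int) -> None:
        # Each VM gets a dedicated slice of CPU and memory, plus its own
        # full guest OS image -- none of this is shared with other VMs.
        if cpus > self.free_cpus or mem_gb > self.free_mem_gb:
            raise RuntimeError(f"not enough host resources for {name}")
        self.free_cpus -= cpus
        self.free_mem_gb -= mem_gb
        self.vms[name] = {"cpus": cpus, "mem_gb": mem_gb, "guest_os": "full OS copy"}

host = Hypervisor(total_cpus=8, total_mem_gb=32)
host.create_vm("ubuntu-guest", cpus=2, mem_gb=4)   # e.g., Ubuntu on a Windows host
host.create_vm("db-vm", cpus=4, mem_gb=16)
print(host.free_cpus, host.free_mem_gb)            # remaining: 2 CPUs, 12 GB
```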

Problems with virtual machines

Because each VM virtualizes physical hardware, VMs scale poorly for large-scale deployment of microservices and modern data-center workloads. To deploy multiple applications on a single machine using VMs, we need a separate guest OS for each application or each of its modular components, and every guest OS must be allocated RAM and other physical resources. Many applications, however, are small compared to the memory footprint of each VM, which leads to a large amount of wasted, unused capacity. Moreover, if an application needs to move part of its workload to a different machine, the entire guest OS has to migrate along with it.
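
A back-of-the-envelope calculation makes the overhead concrete. The figures below are assumptions chosen only for illustration (roughly 1 GB of guest-OS footprint per VM and roughly 50 MB of per-container runtime overhead), not measurements:

```python
# Illustrative comparison: 10 small services, each needing ~200 MB of RAM.

num_services = 10
app_mem_mb = 200           # memory each service actually needs (assumed)
guest_os_mem_mb = 1024     # per-VM guest OS footprint (assumed)
container_overhead_mb = 50 # per-container overhead; kernel is shared (assumed)

vm_total = num_services * (app_mem_mb + guest_os_mem_mb)
container_total = num_services * (app_mem_mb + container_overhead_mb)

print(f"VMs:        {vm_total / 1024:.1f} GB")        # ~12.0 GB
print(f"Containers: {container_total / 1024:.1f} GB") # ~2.4 GB
```

Under these assumptions, the guest OSes alone account for most of the memory consumed by the VM deployment.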

Containers

A container is a comparatively lightweight virtualization technique that virtualizes the OS instead of the whole computer system. Containers enable users to run multiple isolated applications on a single host OS.

Containers run on top of the OS layer, so all containers on a host use and share the same underlying OS kernel, which significantly reduces the memory overhead. Any extra libraries and binaries an application needs are packaged into the container alongside the application itself. This design makes containers extremely fast, lightweight, and portable.
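
One way to observe the shared kernel directly is to compare the kernel version reported inside a container with the host's. This sketch uses the docker SDK for Python (`pip install docker`) and assumes Docker is installed and running on a Linux host (on macOS or Windows, Docker Desktop itself runs inside a Linux VM, so both values reflect that VM's kernel):

```python
# Requires Docker and the docker SDK for Python: pip install docker
import platform
import docker

client = docker.from_env()

# The container reports the *host's* kernel version, because containers
# virtualize at the OS level and share the host kernel rather than
# booting their own.
container_kernel = client.containers.run(
    "ubuntu:22.04", "uname -r", remove=True
).decode().strip()

print("host kernel:     ", platform.release())
print("container kernel:", container_kernel)  # same value on a Linux host
```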

The need for containers

These properties, compared to those of resource-intensive VMs, make containers ideal for microservice deployment and data-center technologies. Modern applications demand faster deployment, greater scalability, and more flexibility in adapting to changing client needs. Container technologies such as Docker and Kubernetes provide these guarantees and have become the standard for microservice deployment.
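
As a small illustration of that deployment speed, the sketch below starts a web service as a container using the docker SDK for Python; the image, container name, and port mapping are arbitrary choices for the example, not prescribed values:

```python
# Minimal sketch: deploy a service as a container (pip install docker).
import docker

client = docker.from_env()

web = client.containers.run(
    "nginx:alpine",          # small web-server image
    detach=True,             # run in the background
    name="demo-web",
    ports={"80/tcp": 8080},  # map container port 80 to host port 8080
)
print(web.status)  # "created" or "running"
```

Scaling out is just another `containers.run` call; there is no guest OS to boot or migrate, which is why container start-up takes seconds rather than the minutes a VM can need.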

Moreover, since all containers share the same host OS, only that single OS needs maintenance and bug fixes, as opposed to maintaining a separate guest OS for every application's VM.
