
What are Containers?


This guide examines the role of containers in cloud computing, highlights key benefits, and takes a look at the growing ecosystem of related technologies, including Docker, Kubernetes, Istio, and Knative.

Containers are executable software units in which application code, as well as its libraries and dependencies, are packaged in standard ways so that they can be run anywhere, whether on a desktop, in traditional IT, or in the cloud.

Containers do this by utilizing a type of operating system (OS) virtualization in which features of the OS (in the case of the Linux kernel, the namespaces and cgroups primitives) are used to both isolate processes and control the amount of CPU, memory, and disk access that those processes have.

Containers are small, fast, and portable because, unlike virtual machines, they do not require a guest OS in every instance and can instead rely on the host OS’s features and resources.

Containers first appeared decades ago in the form of FreeBSD Jails and AIX Workload Partitions, but most developers regard the introduction of Docker in 2013 as the start of the modern container era.

Containers vs. Virtual machines

One way to better understand a container is to compare it with a traditional virtual machine (VM). In traditional virtualization, whether on-premises or in the cloud, a hypervisor is used to virtualize physical hardware. Each VM contains a guest OS, a virtual copy of the hardware that the OS requires to run, and an application together with its associated libraries and dependencies.

Containers virtualize the operating system (typically Linux) rather than the underlying hardware, so each container contains only the application and its libraries and dependencies. Because they carry no guest operating system, containers are lightweight and portable.

Benefits of Containers

Containers have a number of advantages over virtual machines, the most important of which is that they provide a level of abstraction that makes them lightweight and portable.

Lightweight

Containers share the machine’s OS kernel, removing the need for a separate OS instance for each application and making container files small and resource-friendly. Because of their smaller size, they spin up quickly and better support cloud-native applications that scale horizontally, especially when compared to virtual machines.

Platform Independent and Portable

Containers carry all of their dependencies with them, which means that software can be written once and run across laptops, cloud, and on-premises computing environments without needing to be re-configured.

Supports Modern Architecture

Thanks to their small size and their consistent, portable deployment across platforms, containers are an ideal fit for modern development and application patterns, such as DevOps, serverless, and microservices, that are built on regular code deployments in small increments.

Improves Utilization

Containers, like virtual machines before them, allow developers and operators to improve the CPU and memory utilization of physical machines. Containers go further because they also enable microservice architectures, in which application components can be deployed and scaled more granularly. This is an appealing alternative to scaling up an entire monolithic application because a single component is struggling under load.

Application of Containers

Containers are becoming more popular, particularly in cloud environments. Many organizations are considering containers as a replacement for virtual machines (VMs) as the general-purpose compute platform for their applications and workloads. Within that broad scope, however, there are a few key use cases where containers are particularly useful.

Microservices

Containers are small and lightweight, making them a good fit for microservice architectures, in which applications are made up of many loosely coupled, independently deployable smaller services.

DevOps

Many teams that embrace DevOps as the way they build, ship, and run software use a combination of microservices as an architecture and containers as a platform.

Multi-Cloud

Because they can run consistently anywhere, across laptops, on-premises, and cloud environments, containers are an ideal underlying architecture for hybrid cloud and multi-cloud scenarios in which organizations operate across a mix of public clouds in addition to their own data center.

Application Modernization

Containerizing applications so that they can be migrated to the cloud is one of the most common approaches to application modernization.

Container orchestration with Kubernetes

As companies began to embrace containers, often as part of modern, cloud-native architectures, the simplicity of the individual container collided with the complexity of managing hundreds (or even thousands) of containers across a distributed system.

Container orchestration emerged as a solution to this problem, allowing large volumes of containers to be managed throughout their lifecycle, including:

  • Provisioning
  • Redundancy
  • Health monitoring
  • Resource allocation
  • Scaling and load balancing
  • Moving between physical hosts

While many container orchestration platforms were created to help address these challenges (such as Apache Mesos, Nomad, and Docker Swarm), Kubernetes, an open source project launched by Google in 2014, quickly became the most popular container orchestration platform, and it is the one that the majority of the industry has standardized on.

Kubernetes allows developers and operators to declare a desired state for their overall container environment using YAML files, and Kubernetes then takes care of the rest, including deploying a specified number of instances of a given application or workload, restarting that application if it fails, load balancing, auto-scaling, performing zero-downtime deployments, and more.
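
As an illustration, the following is a minimal sketch of such a YAML manifest, a Kubernetes Deployment; the application name, image, and resource values are placeholders. The resources section is also where the cgroup-backed CPU and memory controls described earlier are expressed.

  # Example Deployment manifest (hypothetical names and values).
  # It declares a desired state: three replicas of this container image.
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-app
  spec:
    replicas: 3                       # desired number of running instances
    selector:
      matchLabels:
        app: my-app
    template:
      metadata:
        labels:
          app: my-app
      spec:
        containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # placeholder image
          ports:
          - containerPort: 8080
          resources:                  # enforced through the kernel’s cgroups
            requests:
              cpu: "250m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"

Applying this file (for example, with kubectl apply -f) hands the desired state to Kubernetes, which then creates, monitors, and replaces container instances to keep the cluster matching it.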

The Cloud Native Computing Foundation (CNCF), a vendor-agnostic industry group under the Linux Foundation’s umbrella, now manages Kubernetes.

Istio and Knative

As containers gain traction as a popular way to package and run applications, the ecosystem of tools and projects designed to harden and expand their production use cases continues to grow. Istio and Knative, in addition to Kubernetes, are two of the most popular projects in the container ecosystem.

Istio

When developers use containers to build and run microservice architectures, management concerns extend beyond individual container lifecycle considerations to how large groups of small services, often referred to as a “service mesh,” connect with and relate to one another. Istio was created to make it easier for developers to deal with discovery, traffic management, monitoring, security, and related concerns.
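
As a hedged sketch of what this looks like in practice, the Istio VirtualService below (service names, subsets, and weights are hypothetical, and it assumes a companion DestinationRule that defines the v1 and v2 subsets) shifts a small share of traffic to a new version of a service without changing application code:

  # Example VirtualService (hypothetical names): route 90% of traffic to v1
  # of the "reviews" service and 10% to v2 for a canary rollout.
  apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    name: reviews
  spec:
    hosts:
    - reviews                         # requests addressed to the reviews service
    http:
    - route:
      - destination:
          host: reviews
          subset: v1                  # subsets come from a DestinationRule (not shown)
        weight: 90
      - destination:
          host: reviews
          subset: v2
        weight: 10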

Knative

Serverless architectures are also gaining traction, particularly among the cloud-native community. The ability to deploy containerized services as serverless functions is a key feature of Knative.

Rather than running all the time and responding when needed (as a server does), a serverless function can “scale to zero,” meaning it is not running at all unless it is called upon. When applied to tens of thousands of containers, this model can save enormous amounts of computing power.
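
As a minimal sketch (the service name and image are placeholders), a containerized workload can be deployed as a Knative Service with a manifest like the one below; by default, Knative scales the service down to zero running instances when no requests arrive and back up when traffic returns.

  # Example Knative Service (hypothetical names); scale-to-zero is the default behavior.
  apiVersion: serving.knative.dev/v1
  kind: Service
  metadata:
    name: hello
  spec:
    template:
      spec:
        containers:
        - image: registry.example.com/hello:1.0   # placeholder image
          ports:
          - containerPort: 8080       # port the container listens on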
