Kubernetes and OpenShift

Srasthy Chaudhary
9 min read · Mar 2, 2021


Containers and Virtual Machines (VMs) are packaged computing environments that combine various IT components and isolate them from the rest of the system.

But what’s the difference between containers and VMs?

Their main differences are in terms of scale and portability.

Containers are typically measured by the megabyte. They package nothing bigger than an app and the files necessary to run it, and are often used to package a single function that performs a specific task (known as a microservice). The lightweight nature of containers, and their shared operating system (OS), makes them very easy to move across multiple environments.

VMs are typically measured by the gigabyte. They usually contain their own OS, allowing them to perform multiple resource-intensive functions at once. The increased resources available to VMs allow them to abstract, split, duplicate, and emulate entire servers, OSs, desktops, databases, and networks.

Traditional IT architectures (monolithic and legacy) keep every aspect of a workload in a single, large artifact that cannot be split up, so it must be packaged as a whole unit within a larger environment, often a VM. It was once common to build and run an entire app within a VM, but having all the code and dependencies in one place led to oversized VMs that suffered cascading failures and downtime when updates were pushed.

Emerging IT practices (cloud-native development, CI/CD, and DevOps) are possible because workloads are broken into the smallest serviceable units possible, usually a function or microservice. These small units are best packaged in containers, which allow multiple teams to work on individual parts of an app or service without interrupting or threatening code packaged in other containers.

So, confused about which one to use?

That depends: do you need a small instance of something that can be moved easily (containers), or do you need a semi-permanent allocation of custom IT resources (VMs)?

The small, lightweight nature of containers allows them to be moved easily across bare-metal systems as well as public, private, hybrid, and multicloud environments. They are also the ideal environment for deploying today's cloud-native apps: collections of microservices designed to provide a consistent development and automated management experience across those same environments. Cloud-native apps speed up how new apps are built, how existing ones are optimized, and how they are all connected. Compared to VMs, containers are best used to:

  • Build cloud-native apps
  • Package microservices
  • Adopt DevOps or CI/CD practices
  • Move scalable IT projects across a diverse IT footprint that shares the same OS

VMs are capable of running far more operations than a single container, which is why they are the traditional way monolithic workloads have been (and are still today) packaged. But that expanded functionality makes VMs far less portable because of their dependence on the OS, application, and libraries. Compared to containers, VMs are best used to:

  • House traditional, legacy, and monolithic workloads
  • Isolate risky development cycles
  • Provision infrastructure resources (such as networks, servers, and data)
  • Run a different OS inside another OS (such as running Unix on Linux)

Virtualization

Software called a hypervisor separates resources from their physical machines so they can be partitioned and dedicated to VMs. When a user issues a VM instruction that requires additional resources from the physical environment, the hypervisor relays the request to the physical system and caches the changes. VMs look and act like physical servers, which can multiply the drawbacks of application dependencies and large OS footprints — a footprint that’s mostly not needed to run a single app or microservice.

Containers

Containers hold a microservice or app and everything it needs to run. Everything within a container is preserved in something called an image: a code-based file that includes all libraries and dependencies. An image can be thought of as a snapshot of a Linux installation, since it bundles system packages (such as RPMs) and configuration files. Because containers are so small, there are usually hundreds of them loosely coupled together, which is why container orchestration platforms (like OpenShift and Kubernetes) are used to provision and manage them.
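For illustration, an image like this is typically described by a build file. The sketch below assumes a small Python microservice; the base image, file names, and command are all placeholders, not anything specific to this article:

```dockerfile
FROM python:3.11-slim              # base layer: a minimal Linux userland plus Python
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # bake the app's dependencies into the image
COPY app.py .
CMD ["python", "app.py"]           # the process the container runs when it starts
```

Each instruction adds a layer to the image, which is why the libraries and dependencies travel with the app wherever the container goes.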

Related container technologies must manage far more objects with greater turnover, introducing the need for more automated, policy-driven management.

Many teams are turning to Kubernetes and its rich set of complex features to help them orchestrate and manage containers in production, development, and test environments. Kubernetes has emerged as the de facto standard for container orchestration and management, becoming a critical platform for organizations to understand.

CONTAINER MANAGEMENT AND ORCHESTRATION

Applications are increasingly built as discrete functional parts, each of which can be delivered in a container. That means for every application, there are more parts to manage. Furthermore, containers have shorter life spans than traditional, VM-only deployments.

The complexity of managing applications with more objects and greater churn introduces new challenges: configuration, service discovery, load balancing, resource scaling, and discovering and fixing failures. Managing this complexity manually is impossible. Clusters commonly run more than 1,000 containers; updating these large clusters is infeasible without automation.

Kubernetes delivers production-grade container orchestration by automating container configuration, simplifying scaling, and managing resource allocation. Kubernetes can run anywhere. Whether you want your infrastructure to run on-site, on a public cloud, or on a hybrid configuration of both, Kubernetes delivers at a massive scale.

WHAT IS KUBERNETES?

Kubernetes is an open-source container orchestration platform that helps manage distributed, containerized applications at a massive scale.

You tell Kubernetes where you want your software to run, and the platform takes care of virtually everything else.

Kubernetes provides a unified application programming interface (API) to deploy web applications, batch jobs, and databases. Applications in Kubernetes are packaged in containers and are cleanly decoupled from their environment. Kubernetes automates the configuration of your applications and maintains and tracks resource allocation.
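As a sketch of that unified API, the manifest below declares a small web application as a Deployment. The resource kinds and fields are standard Kubernetes; the names and image reference are placeholders invented for this example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                        # placeholder name
spec:
  replicas: 3                          # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: example.com/web-app:1.0   # placeholder image reference
        ports:
        - containerPort: 8080
```

Submitting this with `kubectl apply -f deployment.yaml` hands the desired state to the API server, and the platform takes care of scheduling and maintaining the three replicas.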

BORG, OMEGA, AND THE ORIGIN OF KUBERNETES

Google was one of the first organizations to run containers at a massive scale, starting well before it made Kubernetes open source in 2014. Borg, an internal Google project, was built to manage long-running services and batch jobs, replacing two separate systems.

Omega focused on a more intelligent scheduler that was needed to handle an increasing number of diverse jobs. Google needed a system to schedule workloads, an increasingly complex task given the volume of applications and long-running services.

With Omega, the scheduler was divided into two separate schedulers with a central shared-state repository to mitigate conflicts. This solution worked but was complex; a better system was needed.

Kubernetes is Google’s third container management system. It is a combination of the approaches used in the monolithic Borg controller and the more flexible Omega controller. Kubernetes was designed to remove complexity and simplify the management of massive infrastructures.

So, Kubernetes is the third generation of Google's container management systems, with Borg as the first generation and Omega as the second.

Google created a broad and intelligent ecosystem of tools and services for:

  • Autoscaling.
  • Self-healing infrastructure.
  • Configuration and updating of batch jobs.
  • Service discovery and load balancing.
  • Application life-cycle management.
  • Quota management.

KUBERNETES AS CONTAINER ORCHESTRATION

Kubernetes is a container orchestrator: it figures out where and how to run your containers. More explicitly, it provides three primary functions:

  • Schedulers and scheduling
  • Service discovery and load balancing
  • Resource management

Schedulers and scheduling

Schedulers intelligently compare the needs of a container with the health of your cluster, and they suggest where new containers might fit. A controller consults the scheduler and then assigns the work and monitors the container.
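The filter-then-score idea behind scheduling can be sketched in a few lines of Python. This is a toy model for illustration only, not the real Kubernetes scheduler; the node names and resource figures are invented:

```python
def schedule(pod, nodes):
    """Pick a node for `pod`, a dict of resource requests ('cpu', 'mem').

    `nodes` maps node name -> free resources. Filter out nodes that cannot
    fit the pod, then score the rest by picking the one with the most free
    CPU. Returns None if no node fits (the pod would stay pending).
    """
    fits = {name: free for name, free in nodes.items()
            if free["cpu"] >= pod["cpu"] and free["mem"] >= pod["mem"]}
    if not fits:
        return None  # no node can host the pod right now
    return max(fits, key=lambda name: fits[name]["cpu"])

# Hypothetical cluster state: free CPU cores and free memory (MiB) per node.
nodes = {
    "node-a": {"cpu": 2.0, "mem": 4096},
    "node-b": {"cpu": 0.5, "mem": 8192},
}
print(schedule({"cpu": 1.0, "mem": 1024}, nodes))  # node-a
```

The real scheduler applies many more filters (taints, affinity, topology) and scoring plug-ins, but the two-phase shape is the same.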

Service discovery and load balancing

Kubernetes automatically manages service discovery. We might ask Kubernetes to run a service like a database or a RESTful API. Kubernetes keeps track of these services and can return a list of them if asked later. Kubernetes also checks the health of individual services: if it detects that a service has crashed, it will automatically attempt to restart it.

In addition to these basic checks, Kubernetes allows you to add more subtle health checks. For example, perhaps your database has not crashed. But what if it is very slow? Kubernetes can track this and direct traffic to a backup if it detects slowness.
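In Kubernetes, these checks are declared as probes on the container spec: a liveness probe triggers a restart when it fails, while a readiness probe removes a slow or unready replica from load balancing without killing it. The endpoint paths and port below are placeholders:

```yaml
livenessProbe:              # restart the container if this check keeps failing
  httpGet:
    path: /healthz          # placeholder health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
readinessProbe:             # stop routing traffic to the pod while this fails
  httpGet:
    path: /ready            # placeholder readiness endpoint
    port: 8080
  periodSeconds: 5
```

A database that is alive but responding slowly would fail its readiness probe and be taken out of rotation, which is how traffic ends up directed elsewhere.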

Kubernetes also incorporates load balancing. Modern services scale horizontally, by running duplicates of the service, and the load balancer is the key piece that distributes and coordinates traffic across these duplicates. Kubernetes makes it easy to plug in a custom load-balancing solution like HAProxy, or a cloud-provided load balancer from Amazon Web Services, Microsoft Azure, Google Cloud Platform, or OpenStack.
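The core mechanism of spreading traffic across duplicates can be sketched as a toy round-robin dispatcher in Python. This is an illustration of the concept, not how any real load balancer is implemented; the replica addresses are invented:

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: hands out healthy replicas in rotation."""

    def __init__(self, replicas):
        self.replicas = list(replicas)
        self._cycle = itertools.cycle(self.replicas)  # endless rotation

    def next_backend(self):
        """Return the replica that should receive the next request."""
        return next(self._cycle)

# Hypothetical pod IPs for three duplicates of one service.
lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.next_backend() for _ in range(4)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```

Real balancers add health-aware replica sets, weighting, and connection draining on top of this basic rotation.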

Resource management

Every computer has a limited amount of central processing unit (CPU) power and memory. Proper resource management is the result of intelligent scheduling. Kubernetes schedules applications to appropriately use resources like CPU power and memory while staying cautious about overutilization that leads to system instability.
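In a pod spec, these needs are declared as requests (what the scheduler reserves for the container) and limits (a hard cap at runtime). The container name, image, and values below are placeholders for illustration:

```yaml
containers:
- name: web                        # placeholder container name
  image: example.com/web-app:1.0   # placeholder image
  resources:
    requests:
      cpu: "250m"                  # a quarter of one core; used for scheduling
      memory: "128Mi"
    limits:
      cpu: "500m"                  # CPU use is throttled beyond this cap
      memory: "256Mi"              # exceeding this can get the container OOM-killed
```

Keeping requests honest is what lets the scheduler pack nodes efficiently without the overutilization that leads to instability.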

THE BENEFITS OF KUBERNETES

Scalability: Kubernetes automatically scales your cluster based on your needs, saving you resources and money.

Portability: Kubernetes can run anywhere. It runs on site in your own datacenter or in a public cloud, and it also runs in a hybrid configuration of both public and private instances. With Kubernetes, the same commands can be used anywhere.

Consistent deployments: Kubernetes deployments are consistent across the infrastructure. Containers embody the concept of immutable infrastructure: all the dependencies and setup instructions required to run an application are bundled with the container.

Separated and automated operations and development: it is common for operations and development teams to be in contention. Operations teams value stability and are conservative about change; development teams prize innovation and a high change velocity. Kubernetes eases this conflict by letting developers ship immutable container images while operations teams manage the platform that runs them.

BASIC KUBERNETES ARCHITECTURE

Kubernetes is a mature platform consisting of hundreds of components.

Kubernetes is software for managing containerized applications on a cluster of servers. Each server acts as either a control-plane (master) node or a worker node; together they run applications and services.

Control Plane:

The control plane acts as the brain of any Kubernetes cluster. Scheduling, service discovery, load balancing, and resource management capabilities are all provided by the control plane.

API server:

Kubernetes’ API server is the point of contact for any application or service. Any internal or external request goes to the API server. The server determines if the request is valid and forwards the request if the requester has the right level of access.

etcd:

If the control plane is the brain, then etcd is where memories are stored. A Kubernetes cluster without etcd is like a brain that cannot make memories. As a fault-tolerant, inherently distributed key-value store, etcd is a critical component of Kubernetes. It acts as the ultimate source of truth for any cluster, storing cluster state and configuration.

Worker nodes:

A worker node in Kubernetes runs an application or service. There are many worker nodes in a cluster, and adding new nodes is how you scale Kubernetes.

Kubelet:

A kubelet is a tiny application that lives on every worker node. The kubelet communicates with the control plane and then performs requested actions on the worker node.

If the control plane is like a brain, a kubelet is like an arm. The control plane sends the command, and the kubelet executes the action.

Container runtime engine:

The container runtime, which complies with the standards managed by the Open Container Initiative (OCI), runs containerized applications. It is the conduit between a portable container and the underlying Linux kernel.

Missing from Kubernetes

While Kubernetes offers portability, scalability, and automated, policy-driven management to its users, it is an incomplete solution. It does not include all of the components needed to build, run, and scale containers in production, such as the operating system, continuous integration/continuous delivery (CI/CD) tooling, application services, or storage. A large amount of work also needs to be done to set roles, access control, multitenancy, and secure default settings. Kubernetes does provide pluggable interfaces for many of these components and services, offering flexibility and choice for users.

SECURE, SIMPLIFY, AND SCALE KUBERNETES APPLICATIONS WITH RED HAT OPENSHIFT

Red Hat OpenShift is an enterprise Kubernetes application platform. With Red Hat OpenShift, teams gain:

  • An enterprise-grade Kubernetes distribution with hundreds of security, defect, and performance fixes in each release.
  • A single, integrated platform for operations and development teams. Red Hat validates popular storage and networking plug-ins for Kubernetes and includes built-in monitoring, logging, and analytics solutions for IT teams.
  • Developer choice of languages, frameworks, middleware, and databases, along with build and deploy automation through CI/CD to supercharge productivity.

Thank you, Mr. Amel Mathai and Mr. Daleep Bais, for clearing up all the basic concepts and helping us understand the industrial use cases of Kubernetes and OpenShift.

Thank you, Vimal sir and Preeti ma'am, for organising such an inspiring and erudite session.
