A Comprehensive Guide to Cloud Native Technologies

Cloud native technologies come in a variety of shapes and sizes. They all share certain key traits, such as the use of loosely coupled architectures and the ability to run in distributed environments. Beyond that, however, cloud native technologies work in different ways and solve different challenges.

For that reason, mastering cloud native computing requires familiarizing yourself with a number of platforms and tools. This article walks through five such cloud native DevOps technologies, explaining how they work and why you may or may not want to include them in your cloud native strategy.

1. Container Runtimes

A container runtime is the software that executes containers. As such, container runtimes are one of the fundamental components of any cloud native environment that includes containers.

That said, it’s important not to confuse container runtimes with other parts of a containerized software stack. Container runtimes are only one of several technologies or resources required to run containers. Others include container images (which contain the code that is executed by a container runtime), container registries (which store container images), and container orchestrators (which manage containers).
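
For instance, with Docker’s command-line tooling you can see several of these pieces working together, pulling an image from a registry and then asking the runtime to execute it (a quick illustrative sketch, using the public nginx image as an example):

docker pull nginx:latest    # fetch a container image from a registry
docker run -d nginx:latest  # ask the runtime to execute that image as a container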

There are a number of container runtimes available today. Popular examples include:

  • Docker Engine
  • containerd
  • LXC
  • runc

Any runtime that is compatible with the runtime specification of the Open Container Initiative (OCI), a community group that defines standards for container technology, will work with any container images that also comply with the OCI standards. This means that, in general, the container runtime you use will not affect which applications you can run.

On the other hand, the tooling for runtimes varies. As a result, the way you actually execute a container will depend on which runtime you are using. It may also be impacted by which orchestrator you are running, if you are using an orchestrator.

For example, if you use Docker as your container runtime, and you are not using a separate orchestration system, you can start a container that has already been created with a simple docker start command:
docker start container_name

Alternatively, if you are using runc, you could start a container using the runc CLI tool:

runc run containerid

Or, if you are using LXC:

lxc-start container

If you deploy containers with an orchestration system like Kubernetes, you typically would not start containers directly from the command line by interacting with the runtime. Instead, you would configure a container RuntimeClass in Kubernetes, which tells Kubernetes which runtime to use when it starts Pods. For example:

# RuntimeClass is defined in the node.k8s.io API group
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  # The name the RuntimeClass will be referenced by.
  # RuntimeClass is a non-namespaced resource.
  name: myclass
# The name of the corresponding CRI configuration
handler: myconfiguration

You can then specify a RuntimeClass for each Pod:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: myclass
  # …

When you configure a RuntimeClass for a Pod, Kubernetes will automatically execute that Pod using the runtime you specified (assuming the runtime is installed on your nodes). You don’t need to start the container manually, or even think much about the runtime, once you configure a RuntimeClass. Kubernetes manages the runtime “under the hood,” so to speak.
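
Assuming the two manifests above are saved to files (the file names runtimeclass.yaml and pod.yaml below are just illustrative), you would apply them like any other Kubernetes resources:

kubectl apply -f runtimeclass.yaml
kubectl apply -f pod.yaml
kubectl get runtimeclass    # confirm that the RuntimeClass exists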

2. Kubernetes

Like container runtimes, Kubernetes goes hand-in-hand with cloud native computing. Kubernetes is an open source container orchestration platform: it manages where containers run within a cluster of servers, restarts failed containers, moves containers between servers to keep loads balanced, and so on.


Kubernetes is not the only orchestration tool available. There are alternatives, such as Docker Swarm and HashiCorp Nomad. Kubernetes, however, is by far the most popular container orchestration solution. If you use containers to build a cloud native environment, you’ll likely be using Kubernetes to manage them.

The way you use Kubernetes will depend, in part, on which distribution you use. In general, there are three types of Kubernetes distributions to choose from:

  • Managed Kubernetes services that run in the cloud, such as AWS EKS and Azure AKS.
  • Kubernetes distributions that can be installed anywhere, but that require more management effort on the part of users, such as Rancher or OpenShift.
  • “Lightweight” Kubernetes distributions that are designed to run well on small devices, like laptops.

Using a managed service is the easiest way to get started with Kubernetes because you don’t need to provide your own hardware, and the software is preinstalled for you within a SaaS environment. That said, managed Kubernetes services provide less control than self-managed distributions.
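
For example, with Amazon EKS you can stand up a managed cluster using the eksctl command-line tool (a minimal sketch, assuming eksctl is installed and AWS credentials are configured; the cluster name, region, and node count are placeholders):

eksctl create cluster --name my-cluster --region us-east-1 --nodes 2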

As for lightweight distributions, those are good solutions if you’re looking for a simple way to test out Kubernetes on your laptop or PC. For example, you can run Minikube, a lightweight distribution developed by the Kubernetes project, with a few simple commands:

wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start

At this point, you’ll have a single-node Kubernetes cluster running directly on your local system. You can interact with it using kubectl. Minikube isn’t designed for production deployments, but it’s a handy tool if you just want to play around with Kubernetes by interacting with nodes or deploying applications.
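
For instance, once minikube start completes, you can run standard kubectl commands against the local cluster (the deployment name and image below are just examples):

kubectl get nodes                                     # list the single minikube node
kubectl create deployment hello --image=nginx:latest  # deploy a sample application
kubectl get pods                                      # watch the Pod come up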

3. Serverless Computing

Serverless computing is a cloud native technology that lets you execute applications on demand, without having to install them on a server in the traditional sense.

Serverless computing is beneficial for two main reasons:

  • It saves time and effort because it eliminates the need for engineers to install applications or configure application hosting environments.
  • It can save money because you only pay for the time your code is actually running. With conventional cloud computing services, you have to pay for the total time your server is up, whether or not you have any applications handling requests during all of that time.

There are a variety of serverless computing services available today. Most run as managed services in public cloud environments, but you can also set up a serverless environment either on-premises or on self-managed cloud infrastructure using a platform like OpenWhisk or OpenFaaS.

Probably the most popular serverless technology today is AWS Lambda, which is part of the Amazon cloud. Because Lambda includes both host hardware and the serverless execution software, using the service is as simple as following a few steps:

  1. Upload your application code (which is known as a serverless function) in the AWS Console.
  2. In the Console, create a function based on the code you uploaded.
  3. Configure triggers, which define when the function should be run.
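
The same workflow can also be scripted with the AWS CLI rather than the Console. Here is a rough sketch (the function name, role ARN, and zip file are hypothetical placeholders you would replace with your own):

aws lambda create-function \
  --function-name my-function \
  --runtime python3.9 \
  --handler lambda_function.lambda_handler \
  --role arn:aws:iam::123456789012:role/my-lambda-role \
  --zip-file fileb://function.zip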


A major limitation of serverless functions compared to other cloud native technologies is that it is often difficult to migrate functions from one serverless computing platform (like AWS Lambda) to another (like Azure Functions). One way to address this challenge is to use a common packaging format on a platform like Artifactory, then deploy functions from there to different serverless compute engines as needed.

4. Load Balancing

Most cloud native hosting environments include multiple servers, with workloads spread across them. In a distributed environment like this, you need a way to determine which application requests should be directed at which servers. You want to balance the application load in such a way that it is distributed evenly across your servers, and avoid having one server become overwhelmed while other servers remain under-utilized.

Load balancers address this need by automatically determining how to distribute application requests across a cluster of servers.

There are many types of load balancers available. Some are designed for specific types of load balancing, like load balancing for websites. Others are general-purpose load balancers. In addition, some load balancers, like AWS ELB, are built into public cloud environments, while others are standalone solutions.

A popular general-purpose, infrastructure-agnostic load balancer is NGINX. To use it, you first need to install NGINX on a device that will serve as the interface between your server cluster and external application requests.

Then, create a server group definition that corresponds to the cluster of servers for which you want NGINX to provide load balancing. For example:

http {
    upstream backend {
        server backend1.example.com weight=5;
        server backend2.example.com;
        server 192.0.0.1 backup;
    }
}

You then need to tell NGINX how to balance the load. In this example, we use the “least connections” technique, which tells NGINX to direct requests to servers that currently have the fewest active connections:

upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
}

With this configuration in place, NGINX will continuously track the number of open connections on each server in the cluster and send each incoming request to the server that has the lowest number of connections at the time the request appears.
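
Note that an upstream group by itself does not receive any traffic. To actually route requests through it, you also point a virtual server at the group with proxy_pass (a minimal sketch that reuses the backend group defined above):

server {
    location / {
        proxy_pass http://backend;
    }
}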

5. Event Streaming

Event streaming is the process of ingesting sets of continuously updated data and taking actions based on it. In the context of cloud native computing, event streams can be useful for a variety of tasks, such as scaling server infrastructure up or down or invoking serverless functions.

Apache Kafka is a widely used open source event streaming platform. Kafka lets you expose events as a data stream. To use it, first install it using a package manager, or download and extract a release archive:

wget https://dlcdn.apache.org/kafka/3.1.0/kafka_2.13-3.1.0.tgz
tar -xzf kafka_2.13-3.1.0.tgz
cd kafka_2.13-3.1.0

Then, run it with:

bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties #run this command in a separate terminal.

You next have to define a “topic,” which is a set of events that you want Kafka to record. For example:

bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092

Then, populate your topic with events. You can enter events manually by running:

bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
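
To confirm that the events were recorded, you can read them back with Kafka’s console consumer (as in the standard Kafka quickstart):

bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092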

Of course, in most production environments you wouldn’t enter event data manually. You’d instead use a tool like Kafka Connect, which automatically ingests events from sources like databases or application metrics.

The Multiple Approaches to Cloud Native Computing

There is no specific set of technologies that you need to use to build a cloud native environment. You could run cloud native apps using containers, serverless functions, event streams, or a combination of all of these. You could use an orchestrator like Kubernetes, or you could choose to orchestrate your applications manually (although that is not practiced in large-scale environments). And you could deploy applications across a cluster of servers, using a load balancer like NGINX to manage incoming requests, but load balancing is not strictly necessary for cloud native computing.

All of the above is to say that there are many approaches to cloud native computing. The best tools and technologies for you depend on factors like the types of applications you are deploying, whether you want to manage your own infrastructure, and how much control you want over your cloud native environment.