Introduction
Getting started with containerization and Kubernetes can be daunting; however, understanding how to manage container lifecycles is critical for sustaining long-running services. This guide will help you ensure that your containers keep running perpetually on Kubernetes.
Understanding Kubernetes Objects
Before we dive into running containers forever, let’s understand the Kubernetes objects that allow us to define the lifecycle of containers. There are primarily two objects you’ll work with:
- Pods: The smallest deployable units in Kubernetes that can contain one or more containers.
- Deployments: Declarative specifications for running Pods that ensure a specified number of replicas are running and self-heal in case of failures.
Using the --restart=Never flag
Containers are designed to run until they complete their tasks or encounter an error. However, sometimes you may want to run a container indefinitely for troubleshooting or testing purposes.
One way to run a container forever in Kubernetes is to use the kubectl run command with the --restart=Never flag and a command that loops indefinitely. For example, you can run the following command to create a pod named test with an Ubuntu image that executes an infinite loop:
kubectl run test --image=ubuntu:latest --restart=Never -- /bin/bash -c "while true; do sleep 30; done"
This will create a pod that will not exit unless you delete it manually. You can also use the -it flag to attach to the pod interactively and run commands inside the container. For example:
kubectl run test --image=ubuntu:latest --restart=Never -it -- /bin/bash
This will create a pod and open a bash shell inside the container. You can exit the shell by typing exit or pressing Ctrl+D.
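A pod created with --restart=Never is not recreated by anything, but it is not cleaned up automatically either. Two cleanup options, sketched under the assumption that the pod is named test as above and that you have access to a running cluster:

```shell
# Delete the pod explicitly; with --restart=Never no controller will recreate it
kubectl delete pod test

# Or pass --rm so kubectl deletes the pod automatically when the attached shell exits
kubectl run test --image=ubuntu:latest --restart=Never --rm -it -- /bin/bash
```

The --rm flag only takes effect when the session is attached, which is why it is combined with -it here.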
Setting restartPolicy to Always
Another way to keep a container running relentlessly in Kubernetes is to use a Deployment or StatefulSet with a replicas value of 1. The Pod template's restartPolicy is Always (the only value these controllers permit), so the container is restarted whenever it crashes or stops. You can also use a liveness probe to check the health of the container and restart it if the probe fails. For example, you can create a Deployment named test with an Ubuntu image that executes an infinite loop and a liveness probe that runs every 10 seconds:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: ubuntu:latest
        command: ["/bin/bash", "-c", "while true; do sleep 30; done"]
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - echo "I am alive"
          initialDelaySeconds: 5
          periodSeconds: 10
This will create a deployment that will keep the container running and restart it if the liveness probe fails. You can use the kubectl exec command to run commands inside the container. For example:
kubectl exec -it test-<pod-name> -- /bin/bash
This will open a bash shell inside the container. You can exit the shell by typing exit or pressing Ctrl+D.
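For reference, restartPolicy is set on the Pod spec, not on an individual container. A minimal sketch of a standalone Pod showing where the field lives (the name restart-demo is illustrative); remember that Deployment and StatefulSet pod templates only accept Always:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  restartPolicy: Always   # default for Pods; OnFailure and Never are the alternatives
  containers:
  - name: demo
    image: ubuntu:latest
    command: ["/bin/bash", "-c", "while true; do sleep 30; done"]
```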
Keep a Pod Running Forever
Running a single pod forever is straightforward, as Kubernetes inherently tries to keep the Pods running. Here’s the simplest Pod specification to achieve that:
apiVersion: v1
kind: Pod
metadata:
  name: forever-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
Run this with kubectl apply -f pod.yml. This command creates a pod with an Nginx container which, by default, K8s tries to keep alive indefinitely unless manually stopped or deleted.
Utilize Deployments for Self Healing
While a single pod can run indefinitely, using a Deployment ensures even better uptime. Here’s a basic example of a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forever-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
Apply it using kubectl apply -f deployment.yml. This Deployment keeps a single replica of the pod with Nginx always running. If the pod fails, the Deployment ensures it is replaced.
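You can watch this self-healing behavior directly (requires a running cluster; pod names are generated, so the exact name will differ):

```shell
# List the pods managed by the Deployment
kubectl get pods -l app=nginx

# Delete one; the Deployment's ReplicaSet notices the missing replica
kubectl delete pod <pod-name>

# Shortly afterwards, a replacement pod with a fresh name appears
kubectl get pods -l app=nginx
```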
Liveness and Readiness Probes
Understanding and using liveness and readiness probes is crucial for long-running workloads. A liveness probe tells Kubernetes whether your application is still alive (a failing probe causes the container to be restarted), while a readiness probe indicates whether it is ready to serve traffic.
Here’s how you can add liveness and readiness probes:
spec:
  containers:
  - name: app-container
    image: myapp:latest
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
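Not every service speaks HTTP. If your container only exposes a TCP port, a tcpSocket probe is an alternative to httpGet; a sketch, reusing port 8080 from the example above as an assumption:

```yaml
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
```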
Secret of Replicas
Utilizing the power of replicas in Deployments is a fundamental step towards running containers without any downtime. By specifying more than one replica in a Deployment, you ensure high availability:
spec:
  replicas: 3
This ensures that at any given time, even if a pod fails, you have others to take over without interruption.
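Replicas also help during updates. The Deployment's rollout strategy can be tuned so that no replica is taken down before its replacement is Ready; a sketch with illustrative values:

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never reduce capacity while updating
      maxSurge: 1         # add one extra pod at a time during the rollout
```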
Advanced Scheduling and Affinity
Advanced use cases may require scheduling and affinity rules that keep pods on the appropriate worker nodes for maximum uptime. An example of inter-pod affinity:
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - web-app
      topologyKey: kubernetes.io/hostname
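The example above co-locates pods. For uptime, the complementary pattern is often more useful: podAntiAffinity spreads replicas across nodes so that a single node failure cannot take them all down at once. A sketch using the same app=web-app label:

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web-app
        topologyKey: kubernetes.io/hostname
```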
Autoscaling for Load Handling
Lastly, addressing fluctuating workloads and avoiding downtime means configuring autoscaling.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-scaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
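The same autoscaler can also be created imperatively; this one-liner is equivalent to the manifest above (assuming a Deployment named webapp already exists in the cluster):

```shell
kubectl autoscale deployment webapp --min=1 --max=10 --cpu-percent=80
```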
Conclusion
To sum up, running containers forever in Kubernetes demands attention to definitions and an inherent understanding of Pods, Deployments, probes, replicas, and autoscalers. Through thoughtful configuration, reliability and resilience are achievable, underscoring the robustness of Kubernetes as a container orchestration platform.