Kubernetes: How to restart a pod periodically (with examples)

Updated: January 31, 2024 By: Guest Contributor

Introduction

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. A common question for Kubernetes service maintainers is “How can I restart a pod periodically?”. This may be necessary to refresh applications, clear temporary data, or apply new configurations. In this tutorial, we’ll cover various ways to restart Kubernetes pods periodically using simple to advanced examples.

Understanding Pod Lifecycle

Before proceeding, it’s essential to understand the pod lifecycle in Kubernetes. A pod is a group of one or more containers, with shared storage/network resources, and a specification for how to run the containers. Pods can be in various states like Pending, Running, Succeeded, Failed, or Unknown. Restarts typically involve stopping a running pod and allowing Kubernetes to recreate it.
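
For reference, you can inspect a pod's current phase and how many times its containers have been restarted with kubectl (the pod name my-app-pod below is just a placeholder):

kubectl get pod my-app-pod -o jsonpath='{.status.phase}'
kubectl get pod my-app-pod -o jsonpath='{.status.containerStatuses[0].restartCount}'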

CronJobs in Kubernetes

Kubernetes offers a native way to schedule tasks with a time-based job scheduler called a CronJob. We can use a CronJob to delete a pod at regular intervals, thereby triggering a restart if the pod is managed by a controller such as a Deployment or StatefulSet.

Basic CronJob Example

apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-my-pod
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          # Assumes a ServiceAccount bound to a Role that can list and delete pods (see the RBAC sketch below)
          serviceAccountName: pod-restarter
          containers:
          - name: kubectl-container
            image: bitnami/kubectl:latest
            command:
            - "/bin/bash"
            - "-c"
            - "kubectl delete pod $(kubectl get pods -l app=my-app -o jsonpath='{.items[0].metadata.name}')"
          restartPolicy: OnFailure

In the above YAML configuration, we define a CronJob called restart-my-pod that runs every day at 02:00. Its Job starts a container with the kubectl utility and deletes the first pod matching the label app=my-app, which prompts the owning controller (for example, a Deployment) to create a fresh replacement. Note that the Job's pod must be allowed to list and delete pods, which is why it references a ServiceAccount.
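
The exact RBAC setup depends on your cluster, but a minimal sketch might look like the following (the name pod-restarter and the default namespace are assumptions for illustration):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-restarter
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-restarter
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-restarter
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-restarter
subjects:
- kind: ServiceAccount
  name: pod-restarter
  namespace: default

If the pods belong to a Deployment, running kubectl rollout restart deployment/my-app from the CronJob is a gentler alternative to deleting pods directly, since it replaces them in a rolling fashion; it needs patch permission on deployments instead of delete on pods.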

Using Init Containers for Periodic Restarts

Init containers are sometimes suggested as a restart mechanism: the init container sleeps for a specified duration and then exits. It is worth being precise about what this actually does. An init container runs to completion before the main containers start, and it is executed again only when the pod itself is restarted or recreated, so a sleeping init container delays startup rather than triggering a restart on its own. To drive a recurring restart from inside the pod, the main container itself has to exit after a fixed duration; with the default restartPolicy of Always, the kubelet then starts it again.

Init Container Example

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod-with-init-restart
spec:
  initContainers:
  - name: init-restart
    image: busybox
    command: ['sh', '-c', 'sleep 86400']
  containers:
  - name: my-app
    image: my-app:latest
    ports:
    - containerPort: 80

The example above adds an init container that sleeps for 86400 seconds (24 hours). Each time the pod is created, the main container's start is delayed by that day-long sleep; once the main container is running, the init container does not run again, so this manifest by itself does not restart anything every 24 hours. To get the periodic behavior, move the time limit into the main container instead, as sketched below.
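
A minimal sketch of that variant, assuming the application image ships a shell and the timeout utility (the image name and command path are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod-with-timed-exit
spec:
  restartPolicy: Always  # the default; the kubelet restarts the container whenever it exits
  containers:
  - name: my-app
    image: my-app:latest
    # Run the application for at most 24 hours, then exit so the kubelet restarts the container
    command: ['sh', '-c', 'timeout 86400 /usr/local/bin/my-app']
    ports:
    - containerPort: 80

Keep in mind that this restarts the container within the same pod (the pod's RESTARTS count increases) rather than creating a new pod object; if you need a completely fresh pod, the CronJob deletion approach above is more appropriate.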

Advanced Scenario: Kubelet Restart Approach

In an advanced scenario where we have more control over the Kubernetes nodes (e.g., in a self-managed cluster), restarts can be driven from the node itself by restarting the kubelet service on a schedule. This is not an encouraged approach, as it bypasses the Kubernetes API and manages pods from the host rather than declaratively, but it is occasionally considered when other methods are not satisfactory.

Kubelet Restart Example

# Typically done via configuration management scripts
echo "*/50 * * * * root systemctl restart kubelet" >> /etc/crontab

Here we're adding a cron entry on the node's host system (outside of Kubernetes itself) that restarts the kubelet service at minutes 0 and 50 of every hour. Be aware that on most modern setups restarting the kubelet does not actually restart running containers: they are managed by the container runtime (e.g., containerd) and keep running across kubelet restarts, so this is unreliable as a restart mechanism in addition to being heavy-handed. It also touches every pod on the node rather than a specific workload, so deleting pods or draining the node through the Kubernetes API is almost always preferable.
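
If you do experiment with this, you can check whether the pods on a node were actually affected by comparing their restart counts before and after (the node name is a placeholder):

kubectl get pods -A -o wide --field-selector spec.nodeName=<node-name>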

Health Checks for Automatic Restarts

In Kubernetes, we can define liveness and readiness probes for a pod's containers. When a liveness probe fails repeatedly (three consecutive failures by default), the kubelet restarts the container. We can exploit this by writing a liveness probe that is designed to fail at a certain time of day, thus forcing a restart.

Liveness Probe Example

apiVersion: v1
kind: Pod
metadata:
  name: liveness-restart-pod
spec:
  containers:
  - name: liveness
    image: myapp:latest
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        # Fails only during the first five minutes after midnight UTC; succeeds for the rest of the day
        - 'test $(expr $(date +%s) % 86400) -ge 300'
      initialDelaySeconds: 10
      periodSeconds: 60

The probe above runs a shell command every 60 seconds (after an initial delay of 10 seconds) that checks whether at least 300 seconds of the current UTC day have elapsed. For most of the day the command succeeds and the container is left alone. During the first five minutes after midnight UTC it fails; once three consecutive probes have failed, the kubelet restarts the container, giving roughly one restart per day. As with the previous approach, this restarts the container in place rather than creating a new pod.
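
You can confirm that the daily restarts are happening by watching the pod's restart count and events:

kubectl get pod liveness-restart-pod
kubectl describe pod liveness-restart-pod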

Conclusion

In this tutorial, we have explored various ways to restart Kubernetes pods periodically, ranging from straightforward approaches like CronJobs and time-limited containers, through liveness-probe tricks, to heavy-handed node-level measures such as restarting the kubelet. There are strategies to suit different needs and constraints, but it is vital to approach these tasks with an understanding of the trade-offs they entail.

Whatever the chosen method, it is crucial to always consider the impacts on your cluster’s stability and application availability. Ideally, choose strategies that align with Kubernetes principles and your operational requirements.