What is kubectl restart pod?
kubectl is the Kubernetes command-line tool that lets you run commands against Kubernetes clusters to deploy applications and inspect and modify cluster resources.
Containers and pods do not always terminate cleanly when an application fails. In such cases, you must explicitly restart the Kubernetes pods. There is no “kubectl restart pod” command, but there are a few ways to accomplish a restart using other kubectl commands.
We’ll describe the pod restart policy, which is part of a Kubernetes pod template, and then show how to manually restart a pod with kubectl.
This is part of a series of articles about the Kubectl cheat sheet.
Kubernetes pod restart policy
Each Kubernetes pod follows a defined lifecycle. It starts in the Pending phase and moves to Running if at least one of its primary containers starts successfully. It then moves to the Succeeded or Failed phase based on the success or failure of the containers in the pod.
While the pod is running, the kubelet can restart each container to handle certain errors. Within the pod, Kubernetes tracks the state of the various containers and determines the actions needed to return the pod to a healthy state.
A pod cannot repair itself: if the node where the pod is scheduled fails, Kubernetes will delete the pod. Similarly, pods cannot survive evictions caused by a lack of resources or by node maintenance. Kubernetes uses controllers, which provide a high-level abstraction, to manage pod instances.
You can control the restart policy of a container through the restartPolicy field of the pod specification, at the same level that defines the containers:

apiVersion: batch/v1
kind: Job
metadata:
  name: demo-restartPolicy-job
spec:
  backoffLimit: 2
  template:
    metadata:
      name: demo-restartPolicy-pod
    spec:
      containers:
      - name: demo
        image: sonarsource/sonar-scanner-cli
      restartPolicy: Never
The restart policy is defined at the same level as containers and applied at the pod level. You can set the policy to one of three options:

- Always: The pod must always be running, so Kubernetes creates a new container every time an existing one ends.
- OnFailure: The container only restarts if it exits with a return code other than 0. Containers that return 0 (OK) do not require a restart.
- Never: The container does not restart.
If you do not explicitly set a value, the kubelet uses the default setting (Always). Remember that the restart policy only covers container restarts by the kubelet on a specific node.
If a container continues to fail, the kubelet will delay the restarts with an exponential backoff, i.e. a delay of 10 seconds, 20 seconds, 40 seconds, and so on, up to a cap of 5 minutes. Once a container has run for ten minutes without failing, the kubelet resets the backoff timer for that container.
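As an illustration, you can observe this backoff loop with kubectl; demo-pod is a hypothetical pod name, and the output shown is indicative:

kubectl get pod demo-pod --watch

# A container stuck in the backoff loop typically reports a status like:
# NAME       READY   STATUS             RESTARTS   AGE
# demo-pod   0/1     CrashLoopBackOff   4          3m10s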
You may need to restart a pod for the following reasons:
- Unexpected resource usage or software behavior: for example, if a container with a 600Mi memory limit tries to allocate more memory, the pod will end with an out-of-memory (OOM) error. In this case, you must restart the pod after changing the resource specification (see the sketch after this list).
- A pod stuck in a shutdown state: This issue occurs when a pod is still shown as running although all of its containers have terminated. It is usually the result of a cluster node shutting down unexpectedly, leaving the cluster controller or scheduler unable to clean up the pods on the node.
- Errors: You may need to terminate pods with irreparable errors.
- Timeouts: The pod has exceeded the scheduled time.
- Requested persistent volume unavailable: Without the volume it requested, the pod may not function as expected.
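As referenced in the first item above, a pod that hits an OOM error usually needs its memory limits raised before the restart. A minimal sketch of the relevant resource specification, using hypothetical names and values:

apiVersion: v1
kind: Pod
metadata:
  name: demo-oom-pod          # hypothetical pod name
spec:
  containers:
  - name: demo
    image: nginx              # illustrative image
    resources:
      requests:
        memory: "600Mi"
      limits:
        memory: "1Gi"         # raised limit so allocations no longer trigger an OOM kill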
4 ways to restart Kubernetes pods using kubectl

It is possible to restart Docker containers with the following command:

docker restart container_id

However, there is no equivalent command to restart pods in Kubernetes, especially if there is no designated YAML file. The alternative is to use kubectl commands to restart Kubernetes pods.
The Kubectl Scale Replicas command

One way is to change the number of replicas of the pod that needs to be restarted through the kubectl scale command. To restart a Kubernetes pod using the scale command:

- Use the following command to set the number of pod replicas to 0:

kubectl scale deployment demo-deployment --replicas=0

The command will disable the Kubernetes pods.

- Use the following command to set the number of replicas to a number greater than zero and enable them again:

kubectl scale deployment demo-deployment --replicas=1

The command creates new replicas of the pods that the previous command destroyed. However, the new replicas will have different names.

- Use the following command to check the status and new names of the replicas:

kubectl get pods
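As a quick sanity check, you can also inspect the deployment’s replica count directly; demo-deployment is the illustrative name used above, and the output shown is indicative:

kubectl get deployment demo-deployment

# NAME              READY   UP-TO-DATE   AVAILABLE   AGE
# demo-deployment   1/1     1            1           10m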
The Kubectl Rollout Restart command

To restart Kubernetes pods with the rollout restart command, use the following command to restart the pods:

kubectl rollout restart deployment demo-deployment -n demo-namespace

The command instructs the controller to kill the pods one by one. It then uses the ReplicaSet to scale up new pods. This process continues until all of the deployment’s pods have been replaced with new ones.
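If you want to follow the restart as it progresses, kubectl can block until the rollout completes, using the same illustrative deployment and namespace as above:

# Waits until every old pod has been replaced, then exits
kubectl rollout status deployment demo-deployment -n demo-namespace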
The Kubectl Delete Pod command

To restart Kubernetes pods with the delete command, use the following command to delete the pod API object:

kubectl delete pod demo_pod -n demo_namespace

Because the Kubernetes API is declarative, deleting the pod object contradicts the desired state. Therefore, the pod is recreated to maintain consistency with the desired state.
The above command can restart only one pod at a time. To restart multiple pods, use the following command:
kubectl delete replicaset demo_replicaset -n demo_namespace
The above command deletes the entire ReplicaSet of pods and recreates them, effectively restarting each one.
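If the pods share a label, a label selector is another way to delete several pods at once without naming the ReplicaSet. A minimal sketch, assuming the pods are labeled app=demo:

# Deletes every pod matching the label selector; the controller recreates them
kubectl delete pod -l app=demo -n demo_namespace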
The Kubectl Set Env command
A different approach to restarting Kubernetes pods is to update their environment variables. The pods then restart automatically once the change is applied.
To restart Kubernetes pods using the set env command:

- Use the following command to set the environment variable:

kubectl set env deployment nginx-deployment DATE=$()

The above command sets the DATE environment variable to a null value. The pods restart as soon as the deployment is updated.

- Use the following command to retrieve information about the pods and make sure they are running:

kubectl get pods

The command will show that the old pods now have a status of Terminating while the new ones show Running.

- Run the following command to verify that the DATE environment variable was updated:

kubectl describe pod
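To confirm the change at the deployment level, kubectl can also list the environment variables defined on the deployment’s containers; nginx-deployment is the same deployment used above:

# Lists the environment variables set on the deployment's containers
kubectl set env deployment/nginx-deployment --list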
Troubleshooting Kubernetes with Komodor
The process of troubleshooting in Kubernetes is complex, and without the right tools, it can be stressful, inefficient, and time-consuming. Some best practices can help minimize the chances of things breaking, but eventually something will go wrong, simply because it can.
This is why we created Komodor, a tool that helps development and operations teams stop wasting their precious time looking for needles in haystacks whenever things go wrong.
Acting as a single source of truth (SSOT) for all of your K8s troubleshooting needs, Komodor offers:
- Change intelligence: Every problem is the result of change. In a matter of seconds we can help you understand exactly who did what and when.
- In-depth visibility: A complete activity timeline showing all code and configuration changes, deployments, alerts, code diffs, pod logs, and more, all in a single pane of glass with easy drill-down options.
- Insights into service dependencies: An easy way to understand cross-service changes and visualize their ripple effects across the entire system.
- Seamless notifications: Direct integration with your existing communication channels (e.g. Slack) so you have all the information you need, when you need it.
If you’re interested in checking out Komodor, use this link to sign up for a free trial.