docs: add a kube cmd to delete all pods in failed/evicted status #43
👨‍💻 Changes proposed
Added a Kubernetes command to delete evicted and failed pods!

Eviction in Kubernetes: Understanding the Process
As Kubernetes continues to grow in popularity, it's important to have a clear understanding of how it handles resource management. One aspect of this is the eviction process, which occurs when a Pod assigned to a Node needs to be terminated.
What is Eviction in Kubernetes?
Eviction is a process that occurs when a Pod assigned to a Node is asked to terminate. There are several reasons why this might happen, but the most common is preemption. Preemption occurs when Kubernetes needs to schedule a new Pod on a Node that has limited resources. In order to free up resources for the new Pod, another Pod needs to be terminated.
Other reasons why a Pod might be evicted include Node maintenance, resource usage violations, and Pod lifecycle events such as Job or DaemonSet completion. Regardless of the reason, eviction is a necessary aspect of resource management in Kubernetes.
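A quick way to see whether any Pods on a cluster have been evicted (assuming `kubectl` is configured against a cluster) is to filter the default listing, since evicted Pods report `Evicted` in their status column until they are deleted:

```shell
# List Pods across all namespaces whose status column mentions "Evicted".
# Evicted Pods stay visible in listings until explicitly deleted.
kubectl get pods -A | grep -i evicted
```
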
Understanding Preemption in Kubernetes
Preemption is the most common reason for eviction in Kubernetes.
In Kubernetes, each Node has a set of resources, including CPU and memory, that can be allocated to Pods. When a new Pod is scheduled on a Node, Kubernetes needs to ensure that there are enough resources available to meet the Pod's requirements. If there are not enough resources, Kubernetes will try to free up resources by evicting one or more existing Pods.

When a Pod is selected for eviction, Kubernetes follows a set of rules to determine which Pod to terminate. These rules are based on the Pod's priority and the resources it's using. Pods with lower priority and higher resource usage are more likely to be selected for eviction.
Once a Pod has been selected for eviction, Kubernetes sends a termination signal to the Pod's containers. The containers then have a configurable amount of time to shut down gracefully before they are forcefully terminated. If the eviction is part of a Node drain, Kubernetes will also cordon the Node (mark it as unschedulable) to prevent new Pods from being scheduled on it in the meantime.
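The graceful-shutdown window mentioned above is the Pod's `terminationGracePeriodSeconds` (it defaults to 30 seconds). One way to inspect it, assuming `kubectl` is configured, where `<pod-name>` is a placeholder for an actual Pod:

```shell
# Print the configured grace period for a single Pod.
# The field path comes from the core Pod spec.
kubectl get pod <pod-name> -o jsonpath='{.spec.terminationGracePeriodSeconds}'
```
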
I was managing a cluster where my Pods kept being evicted due to a wrong configuration, and once a Pod is evicted, it remains in the Pod list when you run `kubectl get pods`. So I find this command helpful to clean up my Nodes when that happens. :)

What are Field Selectors in Kubernetes?
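As a sketch of the cleanup this PR documents: evicted Pods end up with `status.phase=Failed`, so a field selector can both list and delete them (this assumes `kubectl` is configured; the exact command in the PR may differ):

```shell
# List Pods in the Failed phase (evicted Pods have phase=Failed, reason=Evicted).
kubectl get pods --field-selector=status.phase=Failed

# Delete all Failed Pods in the current namespace.
kubectl delete pods --field-selector=status.phase=Failed
```

Add `-A`/`--all-namespaces` to the `get` (or loop over namespaces for the `delete`) to clean the whole cluster.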
From the K8S documentation:

✔️ Check List (Check all the applicable boxes)
📝 Note to reviewers
📷 Screenshots