Avoid using "kubectl rollout restart"

Goal

Extend the disk size of a PVC (PersistentVolumeClaim) without restarting the Pod.

Issue

The Pod was restarted anyway.

Story

I manage my Kubernetes workload using code, primarily Helmfile.

I was in the process of extending the disk size for one of my StatefulSet applications. I modified the PersistentVolumeClaim size and waited for the GKE provisioner to extend the disk. Everything was going well.

Next, I wanted to incorporate the change into the StatefulSet code. I deleted the current StatefulSet with the --cascade=orphan option (leaving the Pods untouched), made the code change, ran helmfile apply, and... my Pod was restarted!
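The sequence above, sketched with hypothetical resource names (a StatefulSet called my-app whose PVC is data-my-app-0), looks roughly like this:

```shell
# 1. Grow the volume in place; the provisioner expands the disk online
#    (requires a StorageClass with allowVolumeExpansion: true).
kubectl patch pvc data-my-app-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'

# 2. Delete only the StatefulSet object, keeping its Pods running.
kubectl delete statefulset my-app --cascade=orphan

# 3. Re-create the StatefulSet from the updated code.
helmfile apply
```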

Why did this happen? It shouldn't have, since nothing else had changed.

Hmm. Then I realized that a few days earlier I had restarted my StatefulSet using kubectl rollout restart statefulset ....

It turns out that this command simply adds the annotation

kubectl.kubernetes.io/restartedAt: <current date & time>

to the Pod template (spec.template.metadata.annotations). Kubernetes detects the template change and restarts the Pod. The annotation, however, stays in place.
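You can see the annotation that kubectl rollout restart leaves behind by inspecting the Pod template afterwards (the StatefulSet name is hypothetical):

```shell
# Restart the StatefulSet; this stamps the Pod template with an annotation.
kubectl rollout restart statefulset my-app

# Inspect the Pod template's annotations: restartedAt stays there for good.
kubectl get statefulset my-app \
  -o jsonpath='{.spec.template.metadata.annotations}'
# e.g. {"kubectl.kubernetes.io/restartedAt":"2024-01-15T10:30:00+01:00"}
```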

The same mechanism kicked in when I applied the freshly rendered StatefulSet over the orphaned Pod: the new StatefulSet's Pod template no longer carried the annotation the running Pod had, so Kubernetes saw a template difference and restarted the Pod.

Conclusion

Instead of using kubectl rollout restart, it is better to restart Pods with kubectl delete pod .... This method is safe (refer to https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#delete-pods) as long as you let the Pod terminate gracefully and do not force-delete it (i.e., do not set --grace-period=0 --force).
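A graceful per-Pod restart, sketched with a hypothetical Pod name:

```shell
# Delete a single Pod; the StatefulSet controller recreates it with the
# same name and the same PVCs. This does NOT touch the Pod template,
# so no restartedAt annotation is left behind.
kubectl delete pod my-app-0

# Avoid force deletion (--grace-period=0 --force): it skips graceful
# termination and is unsafe for StatefulSets.
```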