Kubernetes: Suspend CronJobs and terminate the current pod without restarting

It’s easy to find information about how to suspend a CronJob on a Kubernetes cluster: you just run the "kubectl patch" command. However, if a job is running at that moment, it is not stopped, and when restartPolicy is set to "OnFailure" you cannot terminate it with "kubectl delete pod" either, because even after the suspend flag has been set to "true", a new pod is created right after the old one is deleted. There’s no kubectl command or option to do this directly, but here is the workaround I came across.
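For reference, here is the naive approach that falls short. The CronJob name and namespace match the test manifest below, and <pod-name> stands for whatever pod the job has spawned:

# Suspending stops future runs...
kubectl -n app patch cronjobs cronjob-test -p '{"spec":{"suspend":true}}'

# ...but it does not stop the run already in progress, and with
# restartPolicy "OnFailure" the Job controller recreates a deleted pod immediately.
kubectl -n app delete pod <pod-name>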

First, I apply the following test CronJob.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-test
  namespace: app
spec:
  schedule: "15 * * * *"
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 60
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 5
  jobTemplate:
    metadata:
      name: cronjob-test
    spec:
      template:
        spec:
          restartPolicy: "OnFailure"
          containers:
          - name: cronjob-test
            image: alpine:latest
            imagePullPolicy: Always
            command: ["tail", "-f", "/dev/null"]

And I deploy it. Since I don’t want to edit the schedule manually or wait long, I use the following script, which automatically rewrites the schedule to the next minute before applying.

#!/bin/bash

# Get current hour and minute (no leading zeros)
hour=$(date '+%-H')
min=$(date '+%-M')

# Advance to the next minute, wrapping at the end of the hour/day
if [ $min -eq 59 ]; then
    let hour++
    [ $hour -eq 24 ] && hour=0
    min=0
else
    let min++
fi

# Deploy with the schedule rewritten to the next minute
cat cronjob.yml \
| sed "s/\(schedule: \).*/\1\"${min} * * * *\"/" \
| kubectl apply -f - || exit 1

printf "Next schedule is %02d:%02d\n" $hour $min
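To confirm the job has fired, you can watch for the pod (the "app" namespace comes from the manifest above):

kubectl -n app get pods -w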

Once the schedule fires, a new pod is created and runs continuously (it just tails /dev/null). After that, modify the restartPolicy to "Never".

kubectl get cronjobs.batch cronjob-test -o yaml \
| sed 's/\(restartPolicy: \).*/\1Never/g' \
| kubectl apply -f -
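To check that the change took effect, a jsonpath query against the same resource works:

kubectl -n app get cronjobs.batch cronjob-test \
  -o jsonpath='{.spec.jobTemplate.spec.template.spec.restartPolicy}'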

Then, modify the suspend flag. (In the snippets below, $job_name holds the CronJob name, which is "cronjob-test" in this example.)

kubectl patch cronjobs $job_name -p '
{
    "spec":{
        "suspend": true
    }
}'
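A plain "kubectl get" confirms the flag; the SUSPEND column should now read True:

kubectl -n app get cronjobs $job_name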

However, the pod that was already running keeps running, so it still has to be terminated. The thing to delete, though, is the Job, not the pod.

# Get the name of the Job that the CronJob spawned
job_batch_id=$(kubectl get cronjobs.batch $job_name -o json | jq -r '.status.active[0].name')

# Delete the Job by that name
kubectl delete "job.batch/${job_batch_id}"

After the above, the running pod disappears and is not recreated automatically. And when the next scheduled time arrives, no new pod is created either, because the suspend flag is now set to "true".
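Listing the jobs and pods should now come back empty (again in the "app" namespace):

kubectl -n app get jobs,pods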

To roll back the settings, do the following.

## suspend flag
kubectl patch cronjobs $job_name -p '
{
    "spec":{
        "suspend": false
    }
}'

## restartPolicy
kubectl get cronjobs.batch cronjob-test -o yaml \
| sed 's/\(restartPolicy: \).*/\1OnFailure/g' \
| kubectl apply -f -

And then, when the next scheduled time comes, a new pod will be created again.
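Putting it all together, here is a condensed sketch of the suspend-and-terminate sequence as a single script. It skips the restartPolicy edit from the walkthrough and just suspends the CronJob and deletes its active Job, which takes the pod with it; the name and namespace are the ones from the example above:

#!/bin/bash

job_name=cronjob-test
ns=app

# 1. Stop future runs
kubectl -n $ns patch cronjobs $job_name -p '{"spec":{"suspend":true}}'

# 2. Kill the current run by deleting its Job; the pod goes with it
#    and is not recreated because its owner no longer exists
job_batch_id=$(kubectl -n $ns get cronjobs.batch $job_name -o json | jq -r '.status.active[0].name')
if [ "$job_batch_id" != "null" ]; then
    kubectl -n $ns delete "job.batch/${job_batch_id}"
fi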