
Monitoring a Kubernetes job

Posted 2019-07-08 10:50

Question:

I have Kubernetes jobs that take a variable amount of time to complete, between 4 and 8 minutes. Is there any way I can know when a job has completed, rather than always waiting for the worst-case 8 minutes? I have a test case that does the following:

1) Submits the Kubernetes job.
2) Waits for its completion.
3) Checks whether the job has had the expected effect.

The problem is that in my Java test, which submits the deployment job to Kubernetes, I wait the full 8 minutes even when the job finishes sooner, because I don't have a way to monitor the status of the job from the Java test.

Answer 1:

<kube master>/apis/batch/v1/namespaces/default/jobs 

endpoint lists the status of the jobs. I parsed this JSON and retrieved the name of the latest running job that starts with "deploy...".

Then we can hit

<kube master>/apis/batch/v1/namespaces/default/jobs/<job name retrieved above>

and monitor the status field, whose value looks like this when the job succeeds:

"status": {
    "conditions": [
      {
        "type": "Complete",
        "status": "True",
        "lastProbeTime": "2016-09-22T13:59:03Z",
        "lastTransitionTime": "2016-09-22T13:59:03Z"
      }
    ],
    "startTime": "2016-09-22T13:56:42Z",
    "completionTime": "2016-09-22T13:59:03Z",
    "succeeded": 1
  }

So we keep polling this endpoint until the job completes. Hope this helps someone.
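
From a Java test this could look like the following. This is only a minimal sketch using java.net.http (Java 11+): the API server address, job name, and 10-second poll interval are placeholders, authentication is omitted, and a real test would parse the JSON (e.g. with Jackson) and inspect status.conditions instead of doing a crude string match:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class JobPoller {
        public static void main(String[] args) throws Exception {
            // Hypothetical API server address and job name -- replace with your own.
            String jobUrl = "https://kube-master/apis/batch/v1/namespaces/default/jobs/deploy-job";
            HttpClient http = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(jobUrl))
                    // .header("Authorization", "Bearer <token>") // if the API server requires auth
                    .GET()
                    .build();

            while (true) {
                String body = http.send(request, HttpResponse.BodyHandlers.ofString()).body();
                // Crude check: the "succeeded" field only appears once at least one pod
                // has finished successfully; parse status.conditions properly in real code.
                if (body.contains("\"succeeded\"")) {
                    System.out.println("Job completed");
                    break;
                }
                Thread.sleep(10_000); // poll every 10 seconds instead of waiting the full 8 minutes
            }
        }
    }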



Answer 2:

Since you mentioned Java, you can use the fabric8 Kubernetes Java client to start the job and add a watcher:

KubernetesClient k = ...
k.extensions().jobs().load(yaml).watch(new Watcher<Job>() {

  @Override
  public void onClose(KubernetesClientException e) {}

  @Override
  public void eventReceived(Action a, Job j) {
    // getSucceeded()/getFailed() return Integer and may be null before any pod has finished
    Integer succeeded = j.getStatus().getSucceeded();
    Integer failed = j.getStatus().getFailed();
    if (succeeded != null && succeeded > 0)
      System.out.println("At least one job attempt succeeded");
    if (failed != null && failed > 0)
      System.out.println("At least one job attempt failed");
  }
});
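
Note that the watcher fires asynchronously, so the test itself still needs something to block on until success is reported. A minimal sketch, assuming the eventReceived callback above is changed to count down a java.util.concurrent.CountDownLatch when it sees a success:

    CountDownLatch done = new CountDownLatch(1);
    // ... inside eventReceived: if (succeeded != null && succeeded > 0) done.countDown();
    boolean finished = done.await(8, TimeUnit.MINUTES); // worst-case runtime as the upper bound
    if (!finished) {
        throw new AssertionError("Job did not complete within 8 minutes");
    }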


Answer 3:

I found that the JobStatus does not get updated when polling with job.getStatus(), even though the status visibly changes when checking from the command line with kubectl.

To get around this, I re-fetch the Job object:

    client.extensions().jobs()
                       .inNamespace(myJob.getMetadata().getNamespace())
                       .withName(myJob.getMetadata().getName())
                       .get();

My loop to check the job status looks like this:

    KubernetesClient client = new DefaultKubernetesClient(config);
    Job myJob = client.extensions().jobs()
                      .load(new FileInputStream("/path/x.yaml"))
                      .create();
    boolean jobActive = true;
    while(jobActive){
        // Re-fetch the Job so its status reflects the current state in the cluster
        myJob = client.extensions().jobs()
                .inNamespace(myJob.getMetadata().getNamespace())
                .withName(myJob.getMetadata().getName())
                .get();
        JobStatus myJobStatus = myJob.getStatus();
        System.out.println("==================");
        System.out.println(myJobStatus.toString());

        // status.active becomes null once no pods of the job are running any more
        if(myJob.getStatus().getActive()==null){
            jobActive = false;
        }
        else {
            System.out.println(myJob.getStatus().getActive());
            System.out.println("Sleeping for a minute before polling again!!");
            Thread.sleep(60000); // InterruptedException must be handled or declared
        }
    }

    System.out.println(myJob.getStatus().toString());
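
In the test itself you would then assert on that final status rather than just print it. A minimal sketch, reusing the myJob variable from the loop above (getSucceeded() returns an Integer that stays null until a pod has succeeded):

    Integer succeeded = myJob.getStatus().getSucceeded();
    if (succeeded == null || succeeded == 0) {
        throw new AssertionError("Job did not succeed: " + myJob.getStatus());
    }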

Hope this helps



Answer 4:

You did not mention what actually checks for job completion, but instead of waiting blindly and hoping for the best, you should poll the job status in a loop until its condition becomes "Complete".
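
If the check can live in a shell script rather than in Java, reasonably recent kubectl versions (1.11+) can do this blocking wait for you; a minimal example, with deploy-job standing in for your job name:

    kubectl wait --for=condition=complete job/deploy-job --timeout=480s

The command returns as soon as the job's Complete condition becomes true, or exits with an error once the timeout is reached.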



Answer 5:

I don't know what kind of tasks you are talking about, but let's assume you are running some pods.

You can do

watch 'kubectl get pods | grep <name of the pod>'

or

kubectl get pods -w

It will not be the full name, of course, since pods usually get random suffixes: if you are running an nginx ReplicationController or Deployment, your pods will end up with names like nginx-1696122428-ftjvy, so you will want to do

watch 'kubectl get pods | grep nginx'

You can replace pods with whatever resource you are working with, e.g. jobs, rc, svc, deployments, and so on; see the example below.
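
Since the question is specifically about jobs, the same pattern works on the job resource itself (here "deploy" is assumed as the job-name prefix from the question):

    kubectl get jobs -w

or

    watch 'kubectl get jobs | grep deploy'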



Answer 6:

You can use the NewSharedInformer method to watch job statuses. I am not sure how to write it in Java; here is a Go example that keeps the job list up to date:

    import (
        "context"
        "sync"
        "time"

        batchv1 "k8s.io/api/batch/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/labels"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
    )

    type ClientImpl struct {
        clients *kubernetes.Clientset
    }

    type JobListFunc func() ([]batchv1.Job, error)

    var (
        jobsSelector = labels.SelectorFromSet(labels.Set(map[string]string{"job_label": "my_label"})).String()
    )

    func (c *ClientImpl) NewJobSharedInformer(resyncPeriod time.Duration) JobListFunc {
        var once sync.Once
        var jobListFunc JobListFunc

        once.Do(
            func() {
                restClient := c.clients.BatchV1().RESTClient()
                optionsModifier := func(options *metav1.ListOptions) {
                    options.LabelSelector = jobsSelector
                }
                // List/watch jobs in all namespaces that match the label selector
                watchList := cache.NewFilteredListWatchFromClient(restClient, "jobs", metav1.NamespaceAll, optionsModifier)
                informer := cache.NewSharedInformer(watchList, &batchv1.Job{}, resyncPeriod)

                // Run the informer in the background; it keeps its store in sync with the cluster
                go informer.Run(context.Background().Done())

                jobListFunc = JobListFunc(func() (jobs []batchv1.Job, err error) {
                    for _, c := range informer.GetStore().List() {
                        jobs = append(jobs, *(c.(*batchv1.Job)))
                    }
                    return jobs, nil
                })
            })

        return jobListFunc
    }

Then, in your monitor, you can check the status by ranging over the job list:

    func syncJobStatus() {
        jobs, err := jobListFunc()
        if err != nil {
            log.Errorf("Failed to list jobs: %v", err)
            return
        }

        // TODO: other code

        for _, job := range jobs {
            name := job.Name
            // check status...
        }
    }