Liveness and Readiness Probes with Laravel

April 09, 2018


Liveness and readiness probes are two of the most useful tools Kubernetes offers for orchestrating your containers. Kubernetes by itself is very good at rescheduling deployments, restarting pods and, in general, handling extreme situations.
What a default configuration lacks is a way of detecting problems with the state of the applications running inside its pods. Suppose an application is running in a pod and, due to a deadlock, can no longer serve responses: out of the box, Kubernetes has no way of detecting such a problem, so if this is a situation that might happen to one of your applications, you will need to set up some tooling to handle it.


Luckily for you, Liveness Probes are designed to solve exactly this problem. Let's see how to implement a simple Liveness Probe and a Readiness Probe in your Laravel application.

First, you will need to set up a health route: a simple route that does nothing but return a 200 OK status code. You can achieve this by adding the following snippet to your routes file (routes/web.php in recent versions of Laravel):

Route::get('/healthz', function () {
    return 'ok';
});
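If you want to guard this endpoint against regressions, a quick feature test does the job. Here is a minimal sketch; the test class and file name are my own, only the /healthz route comes from above:

<?php

// tests/Feature/HealthTest.php — hypothetical test guarding the probe endpoint.

namespace Tests\Feature;

use Tests\TestCase;

class HealthTest extends TestCase
{
    // The probe endpoint should always answer with a plain 200.
    public function testHealthzReturnsOk()
    {
        $this->get('/healthz')->assertStatus(200);
    }
}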

Once you have added this route, you will need to add the following snippet to your deployment.yml:

livenessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 15
  failureThreshold: 3
  periodSeconds: 3

What the above snippet does is tell Kubernetes to perform a request to the specified path every periodSeconds seconds. If the response has a status code greater than or equal to 200 and lower than 400, the check is registered as successful; any other status code is registered as a failure. After failureThreshold consecutive failed checks, Kubernetes kills the container and restarts it. Finally, initialDelaySeconds indicates how many seconds Kubernetes waits before it starts sending the liveness check requests, giving the application time to boot and be ready to serve requests.
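For orientation, the probe block belongs inside the container spec of your Deployment manifest. Here is a minimal sketch of the surrounding structure; the deployment name, labels and image are placeholders, not something prescribed by Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: laravel-app
  template:
    metadata:
      labels:
        app: laravel-app
    spec:
      containers:
        - name: app
          image: registry.example.com/laravel-app:latest  # placeholder image
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /healthz
              port: 80
            initialDelaySeconds: 15
            failureThreshold: 3
            periodSeconds: 3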

Readiness probes, on the other hand, are useful to take a pod out of rotation when it is not able to handle traffic correctly, even though its HTTP server is still answering requests. Suppose an application needs to access a database to return data: if the application cannot reach the database, it is not ready to serve traffic.

This second type of probe checks exactly that: if the check fails, Kubernetes marks the pod as not ready, and the pod will not receive traffic, either from outside or from inside the cluster.

Let's see how to implement a readiness probe, starting from the requirement that our application needs to access a database in order to be considered "ready". First, let's create a /readiness HTTP endpoint. This route performs a simple check on the database: if it is able to count the rows of a check table, it returns a 200 OK status code; otherwise it returns a 500 status code.

Route::get('/readiness', function () {
    $check = DB::table('check')->count();

    if ($check > 0) {
        return 'ok';
    }

    return response('', 500);
});
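One caveat worth noting: if the database is unreachable, the count() call throws an exception, and Laravel's exception handler will still answer with a 500, so the probe behaves correctly either way. If you prefer to make the failure path explicit rather than relying on the exception handler, a sketch like the following works, using the same hypothetical check table as above:

use Illuminate\Support\Facades\DB;

Route::get('/readiness', function () {
    try {
        // Any exception here (connection refused, missing table, ...)
        // means the application is not ready to serve traffic.
        $check = DB::table('check')->count();
    } catch (\Exception $e) {
        return response('', 500);
    }

    return $check > 0 ? 'ok' : response('', 500);
});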

Once we have a working route, we will need to add the following snippet to our deployment.yml and redeploy our application.

readinessProbe:
  httpGet:
    path: /readiness
    port: 80
  initialDelaySeconds: 10
  successThreshold: 1
  failureThreshold: 3
  timeoutSeconds: 1
  periodSeconds: 15

As you can see, the syntax for liveness and readiness probes is quite similar. The two extra fields here are successThreshold, which indicates how many consecutive successful checks are necessary for the pod to be considered "ready" (in the example above, a single successful check marks the application ready to receive traffic), and timeoutSeconds, the number of seconds after which an unanswered check counts as a failure.

As you can see, implementing liveness and readiness probes for your application, Laravel or otherwise, is very straightforward: add a route that returns a status code between 200 and 399 for a successful check and any other status code for a failure, add the snippets above to your deployment files, and you will be up and running. Like any other Kubernetes feature, liveness and readiness probes can be configured much more deeply; to explore how, I suggest checking the official documentation, which as always is very thorough.

