Keycloak service level indicators (SLIs)

Learn about Service Level Indicators for monitoring your Keycloak deployment's performance

Service Level Indicators (SLIs) and Service Level Objectives (SLOs) are essential components in monitoring and maintaining the performance and reliability of Keycloak in production environments.

The Google Site Reliability Engineering book defines these terms as follows:

  • A Service Level Indicator (SLI) is a carefully defined quantitative measure of some aspect of the level of service that is provided.

  • A Service Level Objective (SLO) is a target value or range of values for a service level that is measured by an SLI.

By agreeing on these with the stakeholders and tracking them, service owners can ensure that deployments are aligned with users' expectations and that they neither over- nor under-deliver on the service they provide.

Prerequisites

  • Metrics need to be enabled for Keycloak, and the http-metrics-slos option needs to be set to the latency to be measured for the SLO defined below (250 ms in the examples below). Follow the Enabling Keycloak Metrics guide for more details.

  • A monitoring system collecting the metrics. The following paragraphs assume Prometheus or a similar system that supports the PromQL query language is used; a quick sanity-check query is shown below.
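
Before building the queries in this guide, you can verify that Keycloak metrics are actually being collected with a simple instant query. This is only a sanity check sketch; the container="keycloak" label is an assumption from a Kubernetes setup and may differ in your environment.

# Should return a non-empty result if Keycloak metrics are being scraped;
# adjust the label selector to match your environment.
http_server_requests_seconds_count{container="keycloak"}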

Definition of the service delivered

The following service definition is used in the next steps to identify the appropriate SLIs and SLOs. It should capture the behavior observed by its users.

As a Keycloak user,

  • I want to be able to log in,

  • refresh my token and

  • log out,

so that I can use the applications that use Keycloak for authentication.

Definition of SLI and SLO

The following provides example SLIs and SLOs based on the service description above and the metrics available in Keycloak.

While these SLOs are independent of the actual load of the system, this is intentional: a single user does not care about the system load if they receive slow responses.

At the same time, if you enter a Service Level Agreement (SLA) with stakeholders, you as the one running Keycloak have an interest in defining limits for the traffic Keycloak receives, as response times will be prolonged and error rates might increase when the load of the system grows and scaling thresholds are reached.
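
If you agree on such traffic limits, it helps to also track the actual authentication request rate Keycloak receives. The following is a minimal sketch that reuses the endpoint filter from the queries later in this guide; the label selectors are assumptions from a Kubernetes setup and need to be adapted to your environment.

# Authentication related requests per second, summed across all Keycloak
# instances; compare this against any traffic limits agreed in the SLA.
sum(
  rate(
    http_server_requests_seconds_count{
      uri=~"/realms/{realm}/protocol/{protocol}.*|/realms/{realm}/login-actions.*",
      container="keycloak",
      namespace="$namespace"}
    [5m]
  )
)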

Availability

  • Service Level Indicator: Percentage of the time Keycloak is able to answer requests, as measured by the monitoring system.

  • Service Level Objective*: Keycloak should be available 99.9% of the time within a month (44 minutes of unavailability per month).

  • Metric Source: The Prometheus up metric, which indicates whether the Prometheus server is able to scrape metrics from the Keycloak instances.

Latency

  • Service Level Indicator: Response time for authentication related HTTP requests, as measured by the server.

  • Service Level Objective*: 95% of all authentication related requests should be faster than 250 ms within a 5-minute range.

  • Metric Source: Keycloak server-side metrics that track latency for specific endpoints, along with the response time distribution, using http_server_requests_seconds_bucket and http_server_requests_seconds_count.

Errors

  • Service Level Indicator: Failed authentication requests due to server problems, as measured by the server.

  • Service Level Objective*: The rate of errors due to server problems for authentication requests should be less than 0.1% within a 5-minute range.

  • Metric Source: Server-side errors, identified by filtering the metric http_server_requests_seconds_count on the tag outcome for the value SERVER_ERROR.

* These SLO target values are an example and should be tailored to fit your use case and deployment.

PromQL queries

These are example queries created in a Kubernetes environment and used with Prometheus as the monitoring tool. They are provided as blueprints, and you will need to adapt them if you use a different runtime or monitoring environment.

For a production environment, you might want to replace these queries or subqueries with recording rules to make sure they do not use too many resources when you use them for alerting or live dashboards.

Availability

The up metric has a value of 1 for a Keycloak instance that is available and responding to Prometheus scrape requests, and 0 if the instance is down or unreachable.

Then use a tool like Grafana to show a 30-day time range and let it calculate the average of the metric in that time window.

count_over_time(
  sum (up{
    container="keycloak", (1)
    namespace="$namespace"
  } > 0)[30d:15s]
) (2)
/
count_over_time(vector(1)[30d:15s]) (3)
1 Filter by additional tags to identify Keycloak nodes
2 Count all data points in the given range and interval when at least one Keycloak node was available
3 Divide by the number of all data points in the same range and interval
In Grafana, you can replace the value 30d:15s with $range:$interval to compute the availability SLI for the time range selected in the dashboard.
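
To relate the measured ratio to the 99.9% objective, you can store the result of the query above in a recording rule, as suggested earlier, and compare it against the target. The rule name below is a hypothetical example, not something Keycloak or Prometheus provides out of the box; treat this as a sketch.

# Hypothetical SLO check: returns a sample only while the 30-day availability
# falls below the 99.9 % objective. "keycloak:availability:ratio_30d" is an
# example recording rule name assumed to store the ratio computed above.
keycloak:availability:ratio_30d < 0.999

The objective itself translates into an error budget of (1 - 0.999) * 30 * 24 * 60 ≈ 43 minutes per 30-day month, in line with the roughly 44 minutes of unavailability stated in the SLO definition above.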

Latency of authentication requests

This Prometheus query calculates the percentage of authentication requests that completed within 0.25 seconds, relative to all authentication requests for specific Keycloak endpoints, filtered to a particular container and namespace, over the past 5 minutes.

This example requires the Keycloak configuration http-metrics-slos to contain the value 250, indicating that buckets for requests faster and slower than 250 ms should be recorded. Setting http-metrics-histograms-enabled to true captures additional buckets, which can help with performance troubleshooting; a percentile query based on those buckets is sketched after this example.

sum(
  rate(
    http_server_requests_seconds_bucket{
      uri=~"/realms/{realm}/protocol/{protocol}.*|/realms/{realm}/login-actions.*", (1)
      le="0.25", (2)
      container="keycloak", (3)
      namespace="$namespace"}
    [5m] (4)
  )
) without (le,uri,status,outcome,method,pod,instance) (5)
/
sum(
  rate(
    http_server_requests_seconds_count{
      uri=~"/realms/{realm}/protocol/{protocol}.*|/realms/{realm}/login-actions.*", (1)
      container="keycloak",
      namespace="$namespace"}
    [5m] (4)
  )
) without (le,uri,status,outcome,method,pod,instance) (5)
1 URLs related to logging in
2 Response time as defined by SLO
3 Filter by additional tags to identify Keycloak nodes
4 Time range as specified by the SLO
5 Ignore as many labels as necessary to create a single sum
In Grafana, you can replace the value 5m with $__range to compute the latency SLI for the time range selected in the dashboard.
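
If you set http-metrics-histograms-enabled to true, as mentioned above, the additional buckets also allow estimating percentiles directly. The following sketch approximates the 95th percentile latency for the same endpoints; its precision depends on the configured buckets, and the label selectors carry the same assumptions as the example above.

# Approximate p95 latency of authentication related requests over 5 minutes.
# Requires http-metrics-histograms-enabled=true so that enough buckets exist
# for a meaningful estimate.
histogram_quantile(
  0.95,
  sum(
    rate(
      http_server_requests_seconds_bucket{
        uri=~"/realms/{realm}/protocol/{protocol}.*|/realms/{realm}/login-actions.*",
        container="keycloak",
        namespace="$namespace"}
      [5m]
    )
  ) by (le)
)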

Errors for authentication requests

This Prometheus query calculates the percentage of authentication requests that returned a server-side error, relative to all authentication requests, targeting a particular namespace, over the past 5 minutes.

sum(
  rate(
    http_server_requests_seconds_count{
      uri=~"/realms/{realm}/protocol/{protocol}.*|/realms/{realm}/login-actions.*", (1)
      outcome="SERVER_ERROR", (2)
      container="keycloak", (3)
      namespace="$namespace"}
    [5m] (4)
  )
) without (le,uri,status,outcome,method,pod,instance) (5)
/
sum(
  rate(
    http_server_requests_seconds_count{
      uri=~"/realms/{realm}/protocol/{protocol}.*|/realms/{realm}/login-actions.*", (1)
      container="keycloak", (3)
      namespace="$namespace"}
    [5m] (4)
  )
) without (le,uri,status,outcome,method,pod,instance) (5)
1 URLs related to logging in
2 Filter for all requests that responded with a server error (HTTP status 5xx)
3 Filter by additional tags to identify Keycloak nodes
4 Time range as specified by the SLO
5 Ignore as many labels as necessary to create a single sum
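
When the error ratio rises, it often helps to see which endpoint is producing the failures. The following sketch keeps the uri label instead of dropping it, so the server-error rate is broken down per endpoint; as before, the label selectors are assumptions that need to be adapted to your environment.

# Server-side error rate per endpoint for authentication related requests,
# useful for narrowing down which endpoint violates the error SLO.
sum by (uri) (
  rate(
    http_server_requests_seconds_count{
      uri=~"/realms/{realm}/protocol/{protocol}.*|/realms/{realm}/login-actions.*",
      outcome="SERVER_ERROR",
      container="keycloak",
      namespace="$namespace"}
    [5m]
  )
)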