This is part of the Metrics for troubleshooting Keycloak deployment guide.
Infinispan exposes metrics on the /metrics endpoint.
By default, they are enabled.
We recommend enabling the name-as-tags attribute, as it makes the metric names independent of the cache name.
To configure metrics in the Infinispan server, enable them as shown in the XML below.
<infinispan>
    <cache-container statistics="true">
        <metrics gauges="true" histograms="false" name-as-tags="true" />
    </cache-container>
</infinispan>
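Once metrics are enabled, any Prometheus-compatible scraper can collect them from the /metrics endpoint. The following is a minimal sketch of a Prometheus scrape configuration, assuming the Infinispan server listens on its default port 11222; the job name and target host are placeholders to adapt to your deployment:
scrape_configs:
  - job_name: 'infinispan'                  # hypothetical job name
    metrics_path: '/metrics'
    static_configs:
      - targets: ['infinispan-host:11222']  # replace with your server address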
When using the Infinispan Operator in Kubernetes, metrics can be enabled with a ConfigMap
that holds a custom configuration.
An example is shown below.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-config
data:
  infinispan-config.yaml: >
    infinispan:
      cacheContainer:
        metrics:
          gauges: true
          namesAsTags: true
          histograms: false
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: infinispan
  annotations:
    infinispan.org/monitoring: 'true' (1)
spec:
  configMapName: "cluster-config" (2)
(1) Enables monitoring for the deployment.
(2) Sets the name of the ConfigMap with the custom configuration.
Additional information can be found in the Infinispan documentation and the Infinispan Operator documentation.
This section describes metrics that are useful for monitoring the communication between Infinispan nodes to identify possible network issues.
Global tags
cluster=<name>
The cluster name. If metrics from multiple clusters are being collected, this tag helps identify which cluster they belong to.
node=<node>
The name of the node reporting the metric.
The following metrics expose the response time for remote requests. The response time is measured between two nodes and includes the processing time. All requests are measured by these metrics, and the response time should remain stable throughout the cluster lifecycle.
In a healthy cluster, the response time will remain stable. An increase in response time may indicate a degraded cluster or a node under heavy load.
Tags
node=<node>
It identifies the sender node.
target_node=<node>
It identifies the receiver node.
Metric | Description |
---|---|
 | The number of synchronous requests to a receiver node. |
 | The total duration of synchronous requests to a receiver node. |
When histograms are enabled, the percentile buckets are available. Those are useful to create heat maps, but collecting and exposing the percentile buckets may have a negative impact on the deployment performance.
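With histograms enabled, a percentile series can be derived from the buckets, for example to feed a heat map. This is a sketch only; example_sync_requests_seconds_bucket is a hypothetical placeholder for the bucket series your deployment actually exposes:
# 99th percentile of the remote-request response time, per sender/receiver pair (placeholder name)
histogram_quantile(0.99, sum by (le, node, target_node) (rate(example_sync_requests_seconds_bucket[5m])))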
These metrics collect all the bytes received and sent by Infinispan. All internal messages, such as heartbeats, are counted too. They allow computing the bandwidth currently used by each node.
The metric name depends on the JGroups transport protocol in use.
Metric | Protocol | Description |
---|---|---|
 | | The total number of bytes received by a node. |
 | | The total number of bytes sent by a node. |
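To turn these counters into the bandwidth mentioned above, apply a rate over a time window. The metric names below are hypothetical placeholders for the protocol-specific counters in the table:
# bytes received per second, per node (placeholder metric name)
sum by (node) (rate(example_bytes_received_total[5m]))
# bytes sent per second, per node (placeholder metric name)
sum by (node) (rate(example_bytes_sent_total[5m]))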
Monitoring the thread pool size is a good way to detect a node under heavy load. All received requests are added to the thread pool for processing and, when the pool is full, requests are discarded. A retransmission mechanism ensures reliable communication, at the cost of increased resource usage.
In a healthy cluster, the thread pool should never come close to its maximum size (by default, 200 threads).
Thread pool metrics are not available with virtual threads. Virtual threads are enabled by default when running with OpenJDK 21.
The metric name depends on the JGroups transport protocol in use. The default transport protocol is TCP.
Metric | Protocol | Description |
---|---|---|
 | | Current number of threads in the thread pool. |
 | | The largest number of threads that have ever simultaneously been in the pool. |
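A saturation check can compare the current pool size against the default maximum of 200 threads. This is a sketch; example_thread_pool_size is a hypothetical placeholder for the metric in the table above:
# fires when a node's thread pool exceeds 80% of the default maximum (placeholder name)
max by (node) (example_thread_pool_size) > 160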
Flow control takes care of adjusting the rate of a message sender to the rate of the slowest receiver over time. This is implemented through a credit-based system, where each sender decrements its credits when sending. The sender blocks when the credits fall below 0, and only resumes sending messages when it receives a replenishment message from the receivers.
The metrics below show the number of blocked messages and the average blocking time. A non-zero value may signal that a receiver is overloaded, which may degrade the cluster performance.
Each node has two independent flow control protocols: UFC for unicast messages and MFC for multicast messages.
A healthy cluster shows a value of zero for all metrics.
Metric | Description |
---|---|
 | The number of times flow control blocks the sender for unicast messages. |
 | Average time blocked (in ms) in flow control when trying to send a unicast message. |
 | The number of times flow control blocks the sender for multicast messages. |
 | Average time blocked (in ms) in flow control when trying to send a multicast message. |
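Since a healthy cluster reports zero for all of these metrics, a simple check is whether the blocking counters are increasing. The names below are hypothetical placeholders:
# any increase signals a sender being blocked by flow control (placeholder names)
rate(example_ufc_blockings_total[5m]) > 0
rate(example_mfc_blockings_total[5m]) > 0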
JGroups provides reliable delivery of messages. When a message is dropped on the network, or the receiver cannot handle it, a retransmission is required. Retransmissions increase resource usage and are usually a signal of an overloaded system.
Random Early Drop (RED) monitors the sender queues. When a queue is almost full, messages are dropped and must be retransmitted later. This prevents threads from being blocked by a full sender queue.
A healthy cluster shows a value of zero for all metrics.
Metric | Description |
---|---|
 | The number of retransmitted messages. |
 | The total number of dropped messages by the sender. |
 | Percentage of all messages that were dropped by the sender. |
The cluster size metric reports the number of nodes present in the cluster. If it differs between nodes, it may signal that a node is joining or shutting down or, in the worst case, that a network partition is happening.
A healthy cluster shows the same value on all nodes.
Metric | Description |
---|---|
 | The number of nodes in the cluster. |
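Because all nodes should report the same value, a consistency check can compare the minimum and maximum across the cluster. This is a sketch; example_cluster_size is a hypothetical placeholder:
# fires when nodes disagree about the cluster size (placeholder name)
max(example_cluster_size) != min(example_cluster_size)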
The cross-site status reports the connection status to the other site.
It returns a value of 1 if the site is online or 0 if it is offline.
A value of 2 is used on nodes where the status is unknown, because not all nodes establish connections to the remote sites and therefore do not have this information.
A healthy cluster shows a value greater than zero.
Metric | Description |
---|---|
 | The single site status (1 if online). |
Tags
site=<name>
The name of the destination site.
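To alert when a site connection is down, nodes reporting the unknown status of 2 can be filtered out first. This is a sketch; example_xsite_status is a hypothetical placeholder:
# fires when any node that knows the status reports the site as offline (placeholder name)
min by (site) (example_xsite_status != 2) == 0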
Network partitions in a cluster can happen for various reasons. This metric does not help predict network splits, but it signals that one happened and that the cluster has since been merged.
A healthy cluster shows a value of zero for this metric.
Metric | Description |
---|---|
 | The number of times a network split was detected and healed. |
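Because this counter should stay at zero, any increase over a recent window indicates a split occurred and healed. This is a sketch; example_network_partitions_total is a hypothetical placeholder:
# fires when a network split was detected and merged in the last hour (placeholder name)
increase(example_network_partitions_total[1h]) > 0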
The metrics in this section help monitor the health of the Infinispan caches and the cluster replication.
Global tags
cache=<name>
The cache name.
Monitor the number of entries in your cache using these two metrics. If the cache is clustered, each entry has an owner node and zero or more backup copies on different nodes.
Sum the unique entries metric across all nodes to get the total number of entries in the cluster.
Metric | Description |
---|---|
 | The approximate number of entries stored by the node, including backup copies. |
 | The approximate number of entries stored by the node, excluding backup copies. |
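Following the tip above, summing the unique-entries metric over all nodes yields the cluster-wide total. This is a sketch; example_approximate_entries_unique is a hypothetical placeholder:
# total number of entries in the cluster, excluding backup copies (placeholder name)
sum(example_approximate_entries_unique)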
The following metrics monitor cache accesses, such as reads, writes, and their duration.
A store operation is a write operation that inserts or updates a value in the cache.
Metric | Description |
---|---|
 | The total number of store requests. |
 | The total duration of all store requests. |
When histograms are enabled, the percentile buckets are available. Those are useful to create heat maps, but collecting and exposing the percentile buckets may have a negative impact on the deployment performance.
A read operation reads a value from the cache. Reads divide into two groups: a hit if a value is found, and a miss if not.
Metric | Description |
---|---|
 | The total number of read hit requests. |
 | The total duration of all read hit requests. |
 | The total number of read miss requests. |
 | The total duration of all read miss requests. |
When histograms are enabled, the percentile buckets are available. Those are useful to create heat maps, but collecting and exposing the percentile buckets may have a negative impact on the deployment performance.
A remove operation removes a value from the cache. Removes divide into two groups: a hit if the value exists, and a miss if it does not.
Metric | Description |
---|---|
 | The total number of remove hit requests. |
 | The total duration of all remove hit requests. |
 | The total number of remove miss requests. |
 | The total duration of all remove miss requests. |
When histograms are enabled, the percentile buckets are available. Those are useful to create heat maps, but collecting and exposing the percentile buckets may have a negative impact on the deployment performance.
Write and remove operations hold the lock until the value is replicated in the local cluster and to the remote site.
On a healthy cluster, the number of locks held should remain constant, but deadlocks may create temporary spikes.
Metric | Description |
---|---|
 | The number of locks currently being held by this node. |
Transactional caches use both One-Phase-Commit and Two-Phase-Commit protocols to complete a transaction. These metrics keep track of the operation duration.
The PESSIMISTIC locking mode uses One-Phase-Commit and does not create commit requests.
In a healthy cluster, the number of rollbacks should remain zero. Deadlocks should be rare, but they increase the number of rollbacks.
Metric | Description |
---|---|
 | The total number of prepare requests. |
 | The total duration of all prepare requests. |
 | The total number of rollback requests. |
 | The total duration of all rollback requests. |
 | The total number of commit requests. |
 | The total duration of all commit requests. |
When histograms are enabled, the percentile buckets are available. Those are useful to create heat maps, but collecting and exposing the percentile buckets may have a negative impact on the deployment performance.
State transfer happens when a node joins or leaves the cluster. It is required to balance the stored data and guarantee the desired number of copies.
This operation increases resource usage and negatively affects the overall performance.
Metric | Description |
---|---|
 | The number of in-flight transactional segments the local node requested from other nodes. |
 | The number of in-flight segments the local node requested from other nodes. |
Cluster data replication can be the main source of failure. These metrics report not only the response time, that is, the time it takes to replicate an update, but also the failures.
On a healthy cluster, the average replication time will be stable or show little variance. The number of failures should not increase.
Metric | Description |
---|---|
vendor_rpc_manager_replication_count | The total number of successful replications. |
vendor_rpc_manager_replication_failures | The total number of failed replications. |
 | The average time spent, in milliseconds, replicating data in the cluster. |
Success ratio
An expression can be used to compute the replication success ratio:
(vendor_rpc_manager_replication_count) / (vendor_rpc_manager_replication_count + vendor_rpc_manager_replication_failures)
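Because both series are cumulative totals, a windowed variant of the same ratio avoids skew from old history. This is a sketch, assuming the series behave as counters:
sum(rate(vendor_rpc_manager_replication_count[5m]))
/
(sum(rate(vendor_rpc_manager_replication_count[5m])) + sum(rate(vendor_rpc_manager_replication_failures[5m])))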
Like cluster data replication, the metrics in this section measure the time it takes to replicate the data to the other sites.
On a healthy cluster, the average cross-site replication time will be stable or show little variance.
Tags
site=<name>
Indicates the receiving site.
Metric | Description |
---|---|
 | The total number of cross-site requests. |
 | The total duration of all cross-site requests. |
 | The total number of cross-site requests. This metric is more detailed, with a per-site counter. |
 | The total duration of all cross-site requests. This metric is more detailed, with a per-site duration. |
 | The total number of cross-site requests handled by this node. This metric is more detailed, with a per-site counter. |
 | The site status. A value of 1 indicates that the site is online. This value reacts to the Infinispan CLI commands. |
When histograms are enabled, the percentile buckets are available. Those are useful to create heat maps, but collecting and exposing the percentile buckets may have a negative impact on the deployment performance.
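As with local replication, an average cross-site response time can be derived from the per-site duration and count series. The metric names below are hypothetical placeholders:
# average cross-site request duration per destination site (placeholder names)
sum by (site) (rate(example_xsite_requests_seconds_sum[5m]))
/
sum by (site) (rate(example_xsite_requests_seconds_count[5m]))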
Return to the Metrics for troubleshooting Keycloak deployment guide.