Other information
More insights into the installation
This section shows the different steps with variants and explains them in more detail. It also shows the helm upgrade commands that can update parts of the stack incrementally, which helps with development and upgrades.
Start minikube in default mode with a VM. By default, it uses two CPUs, and this can be adjusted.
minikube start
Start with customized settings (the instance is deleted and recreated):
minikube stop
minikube delete
minikube start --memory 8192 --cpus 4
Depending on the driver, adjusting the settings might work for an already created minikube instance:
minikube stop
minikube config set memory 8192
minikube config set cpus 4
minikube start
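To verify the stored settings before the next start, minikube's configuration can be listed:
minikube config view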
Start minikube on Linux with the KVM2 driver. This allows faster startup times, less memory usage, and no limitation on the number of CPUs.
minikube start --driver=kvm2 --container-runtime=cri-o --docker-opt="default-ulimit=nofile=102400:102400"
This requires libvirtd to be running:
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
sudo usermod -a -G libvirt $USER
# now relogin, for usermod to become effective
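After logging in again, the group membership and the libvirtd service can be verified:
groups | grep libvirt
systemctl is-active libvirtd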
For a lightweight installation that today doesn’t scale beyond 3-5 Keycloak instances:
minikube start --driver=podman --container-runtime=cri-o
On Linux, allow the current user to run Podman and crictl via sudo without a password:
- run sudo visudo
- add the following lines to the sudoers file:
username ALL=(ALL) NOPASSWD: /usr/bin/podman
username ALL=(ALL) NOPASSWD: /usr/bin/crictl
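To check that the rules are effective, run Podman via sudo in non-interactive mode; the -n flag makes sudo fail instead of prompting for a password:
sudo -n podman version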
Adding ingress
minikube addons enable ingress
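To confirm the addon is active, list the addons and their state:
minikube addons list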
All other installations are scripted using task. It runs all tasks in the correct order and in parallel when possible. If a task definition changes, it runs it again. Use task -f to force running all tasks again, for example after you've reset minikube.
For more information, look into Automation using the tool task.
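To see which tasks are available in this setup, the task tool can list all tasks that carry a description:
task --list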
Customizing Keycloak
Keycloak is installed with monitoring enabled.
Add local customizations via keycloak/values.yaml:
- Set monitoring to false to install Keycloak without monitoring options.
- Set disableCaches to true to disable caches in Keycloak's store.
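With the two options named above, a minimal keycloak/values.yaml could look like the following sketch; the values shown are examples, not defaults:
# keycloak/values.yaml - local overrides
monitoring: false
disableCaches: true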
Pause/Resume setup
The setup can be paused and resumed without restarting or reinstalling all pods.
To stop, run the following command:
minikube stop
To resume, run the following command:
minikube start
After minikube has been restarted, it might have a different IP address for the ingress. Due to that, all ingresses need to be updated. To do this, run task.
Reset the system under test (Keycloak)
This clears the database and restarts the Keycloak instance. Once that is complete, it re-initializes the user for Gatling.
task reset-keycloak
Deploying providers to minikube
Keycloak can be extended by providers. This is also supported in this setup.
All provider JARs need to be placed in keycloak/providers.
After updating the files there, run task. Keycloak restarts and the providers are then available.
As this uses a ConfigMap to pass the information to Kubernetes, the combined size of all providers encoded as base64 is limited to 1 MiB.
The dataprovider module is deployed by default.
To test if the dataprovider module has been deployed, test the URL https://keycloak-keycloak.xx.xx.xx.xx.nip.io/realms/master/dataset/status.
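Assuming the IP address has been filled in for xx.xx.xx.xx, a quick check from the command line might look like this; -k skips certificate verification, as the setup uses untrusted TLS certificates (see below):
# -k is needed because the setup's TLS certificates are not trusted
curl -k https://keycloak-keycloak.xx.xx.xx.xx.nip.io/realms/master/dataset/status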
Use the ./isup.sh script to find out the IP address of Keycloak.
Running kcadm.sh with invalid TLS certificates
The minikube setup doesn’t contain trusted TLS certificates, and the certificates also do not match the hostnames.
To disable the TLS checks in Java, see the module provision/tlsdisableagent for details on how to run, for example, kcadm.sh.
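As a sketch of how a Java agent is typically attached: assuming the module builds an agent JAR (the name tlsdisableagent.jar is a placeholder) and that kcadm.sh picks up JAVA_OPTS, an invocation could look like the following; see the module's documentation for the actual steps:
# hypothetical invocation - the JAR name and the use of JAVA_OPTS are assumptions
JAVA_OPTS="-javaagent:tlsdisableagent.jar" ./kcadm.sh config credentials --server https://keycloak-keycloak.xx.xx.xx.xx.nip.io --realm master --user admin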
Running Gatling
To run the benchmarks using Gatling on your local machine and to forward the metrics to the Graphite exporter in minikube, you'll need to pass the IP address of minikube as an environment variable that is then used inside gatling.conf:
export GRAPHITE_TCP_ADDR=$(minikube ip)
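Inside gatling.conf, Gatling's standard graphite writer settings can then pick up that variable. The following is a sketch using HOCON's optional environment substitution, not the verbatim file from the repository:
# sketch of the graphite section in gatling.conf
gatling {
  data {
    writers = [console, graphite]    # enable the graphite writer
    graphite {
      host = "localhost"
      host = ${?GRAPHITE_TCP_ADDR}   # HOCON: overrides host when the variable is set
      port = 2003
      rootPathPrefix = "gatling"
    }
  }
}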
The mapping of Gatling's metrics to a Prometheus metric name and labels is configured in graphite_mapping.yaml.
Once the test runs, the metrics are available as gatling_users and gatling_requests.
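To illustrate the mapping format (graphite_exporter uses the statsd_exporter mapping syntax), an entry could look like the following; the authoritative rules are in graphite_mapping.yaml, and the metric path layout shown here is an assumption:
# illustrative mapping entry - see graphite_mapping.yaml for the real rules
mappings:
- match: gatling.*.users.*.*
  name: gatling_users
  labels:
    simulation: $1
    scenario: $2
    type: $3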
This setup assumes that only one load driver is running.
If more load drivers are running, the rootPathPrefix in Gatling's configuration and the gatling.conf setup need to change.
For now, this is considered out-of-scope as one Gatling instance can generate several orders of magnitude more load than needed.
The Prometheus Graphite exporter holds the metrics for 5 minutes and then forgets them. By that time, Prometheus has already scraped them and stored the values in its database.
Connecting to a remote host running minikube
When running minikube on a remote host, the ports are not accessible from outside the host. If they were, this would be a security concern due to the default passwords, and sometimes no password at all, used for the applications deployed on minikube and for the Kubernetes API itself.
To connect to Keycloak and other services remotely, one way is to use SSH port forwarding.
As Keycloak is quite specific about the configured port and IP address, the port forwarding needs to bind the same port as on minikube. As Keycloak runs on minikube with port 443, this requires running SSH as root so that it can bind port 443 locally.
Given the IP address of minikube on the remote host, retrieved by minikube ip (for example 192.168.39.19), the following steps work. Whenever the minikube instance on the remote host is re-created, it receives a different IP address and the commands need to be adjusted.
- Add an entry to the local hosts file that points the host names of minikube to the local machine:
127.0.0.1 kubebox.192.168.39.19.nip.io grafana.192.168.39.19.nip.io keycloak.192.168.39.19.nip.io
- Put the current user's SSH keys in place for the root user, so that sudo ssh has access to them.
- Run SSH with port forwarding:
sudo ssh -L 443:192.168.39.19:443 user@remotehost
Now point the browser to https://keycloak-keycloak.192.168.39.19.nip.io as usual to interact with the application. With the SSH tunnel in place, the response times are a bit slower, so users are not able to run a representative load test with Gatling on their local machine while minikube runs on the remote machine.
To optimize the server side of the connection, consider updating the MaxSessions parameter of sshd, as otherwise the number of sessions multiplexed over one SSH connection would be restricted to 10, and users might see a blocked browser. A recommended number would be 100.
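On the remote host, this is a one-line change in the SSH daemon's configuration, followed by a restart of the service:
# add to /etc/ssh/sshd_config on the remote host
MaxSessions 100
# apply the change
sudo systemctl restart sshd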