OpenShift 4 provides a great metrics monitoring suite. We can use it to monitor and build dashboards for both infrastructure and application-layer services; what's more, we can leverage it to manage external services and infrastructure as well.
The reasons I want to address this topic are:
- Establishing and maintaining a sophisticated monitoring solution can be very hard and time-consuming.
- The monitoring components come almost for free, embedded in OpenShift. They are managed by the OpenShift Cluster Monitoring Operator, which is stable, self-healing and well-structured.
- Managing and scaling infrastructure configuration such as monitoring components is easier in OpenShift than on VMs, because of its declarative nature; we can even manage them via GitOps.
- And of course, if you run both an OpenShift platform for the container world and VMs for other business, you don't have to maintain two complicated monitoring solutions, which saves time and cost.
In this article I'm going to cover the following topics to demonstrate how to monitor external services with the OpenShift monitoring components:
- Exporting Quarkus application metrics via MicroProfile Metrics
- Configuring OpenShift to enable user-defined monitoring components
- Proxying an external service in OpenShift via a Kubernetes service
- Creating my own Grafana instance to host application dashboards for external service metrics
- (Optional) Utilizing node_exporter to monitor external infrastructure metrics
Although the issue I am addressing targets external services, you can apply the same steps to applications hosted in OpenShift.
Exporting Quarkus application metrics via MicroProfile Metrics
In this lab I'm going to use the Quarkus framework to demonstrate application metric monitoring; the example application is quarkus-todo-application in my GitHub repo.
A Quarkus application supports two ways to export metrics:
- MicroProfile Metrics
- Micrometer metrics
I'm using MicroProfile Metrics.
The key part of exporting metrics in a Quarkus application is:
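As a sketch, the application exposes metrics simply by annotating REST methods, given the quarkus-smallrye-metrics extension is on the classpath. The class below is illustrative: it assumes the PrimeNumberChecker resource whose metric names appear in the curl output later, but the exact code lives in the sample repo.

```java
package io.quarkus.sample;

import org.eclipse.microprofile.metrics.MetricUnits;
import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("/prime")
public class PrimeNumberChecker {

    // Each annotated method is exported automatically under /metrics,
    // prefixed with the fully qualified class name, e.g.
    // application_io_quarkus_sample_PrimeNumberChecker_checksTimer_...
    @GET
    @Path("/{number}")
    @Counted(name = "performedChecks", description = "How many primality checks have been performed.")
    @Timed(name = "checksTimer", description = "How long it takes to check a number.", unit = MetricUnits.MILLISECONDS)
    public String checkIfPrime(@PathParam("number") long number) {
        for (long i = 2; i * i <= number; i++) {
            if (number % i == 0) {
                return number + " is not prime.";
            }
        }
        return number + " is prime.";
    }
}
```

The `@Timed` annotation is what produces the `checksTimer` metric you'll see scraped below.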
Build and Deploy
If you are familiar with the build and deploy concepts, you can directly get the binary image from my quay.io repo.
Here are the steps to deploy the Todo service on a virtual machine.
My target VM: IP 192.168.2.10, OS: RHEL 7.
```
# Execute on my laptop
mvn quarkus:dev

# Test that my metrics are actually exporting
curl http://localhost:8080/metrics | grep Prime
# application_io_quarkus_sample_PrimeNumberChecker_checksTimer_rate_per_second gauge
# application_io_quarkus_sample_PrimeNumberChecker_checksTimer_rate_per_second 9...

# Build the container image and push it to a remote repo
# (change this to a repo you can access)
podman build -f src/main/docker/Dockerfile.ubi . -t quay.io/rzhang/quarkus:todo-app-jvm-11-nodb
podman push quay.io/rzhang/quarkus:todo-app-jvm-11-nodb

# Log into my target virtual machine 192.168.2.10 and run the app
podman run -d --name todo-app -p 8080:8080 quay.io/rzhang/quarkus:todo-app-jvm-11-nodb

curl http://192.168.2.10:8080/metrics | grep Prime
# application_io_quarkus_sample_PrimeNumberChecker_checksTimer_rate_per_second gauge
# application_io_quarkus_sample_PrimeNumberChecker_checksTimer_rate_per_second 9...
```
OK, the application is deployed in a VM and the metrics are exported at http://192.168.2.10:8080/metrics.
Let's see how to configure OpenShift to collect these metrics.
Configuring OpenShift to enable user-defined monitoring components
First, let's take a brief look at how the monitoring components are organized in OpenShift.
User-defined projects' metrics are managed by the right-hand part of the monitoring stack (i.e. the openshift-user-workload-monitoring namespace).
On the left side, all OpenShift infrastructure-related metrics are managed in the openshift-monitoring namespace, so the platform and user-defined metrics are well separated, with different storage and components for each.
A few components are shared between the platform and user-defined monitoring stacks:
Thanos Querier — aggregates both platform and user-defined metrics and provides a single query interface for searching them. In the Grafana configuration later, we will use the Thanos Querier URL to display the metrics.
Alertmanager — receives alerts from both Prometheus instances and forwards them to external systems.
Cluster Monitoring Operator — provides easy installation and maintenance of the monitoring components.
The left part is installed out of the box when you install OpenShift 4.
However, the right part is not installed by default. We will utilize the user-defined monitoring components to manage the external service metrics. The steps are the same if you want to manage metrics for your own applications running inside OpenShift rather than outside.
To enable user-workload monitoring:
oc apply -f cluster-monitoring-config.yaml
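For reference, in recent OpenShift 4 releases the referenced cluster-monitoring-config.yaml is a ConfigMap in the openshift-monitoring namespace that looks roughly like this (per the OpenShift documentation; older 4.x releases used a tech-preview flag instead, so check the docs for your exact version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Deploys the Prometheus/Thanos Ruler stack for user-defined projects
    enableUserWorkload: true
```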
After enabling this, you can see that a new namespace is created, with the related monitoring components running in it:
```
> oc projects | grep monitoring
openshift-monitoring
openshift-user-workload-monitoring

> oc get po -n openshift-user-workload-monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-6bf7fbbbdd-m8fsl   2/2     Running   0          9d
prometheus-user-workload-0             5/5     Running   0          23d
prometheus-user-workload-1             5/5     Running   0          23d
thanos-ruler-user-workload-0           3/3     Running   1          23d
thanos-ruler-user-workload-1           3/3     Running   1          23d
```
OK, the OpenShift user-defined monitoring components are now ready.
Let's move on and create a new namespace to host a proxy to the external service's metrics.
Proxying an external service in OpenShift via a Kubernetes service
For readers already familiar with the following concepts, you can just check my lab code at: https://github.com/ryanzhang/openshift-monitor-external-service
1. Create a new project to host the proxy and metrics-collecting configuration for the external service.
> oc new-project external-service
2. Create a Kubernetes service that points to the external service's metrics REST endpoint.
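The pattern can be sketched as a selector-less Service backed by a manually created Endpoints object that points at the external VM, plus a ServiceMonitor so the user-workload Prometheus scrapes it. Resource names and labels below are illustrative; the actual manifests are in the lab repo.

```yaml
# Service with no pod selector: traffic is routed via the
# manually managed Endpoints object below.
apiVersion: v1
kind: Service
metadata:
  name: todo-app
  namespace: external-service
  labels:
    app: todo-app
spec:
  ports:
    - name: web
      port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: todo-app          # must match the Service name
  namespace: external-service
subsets:
  - addresses:
      - ip: 192.168.2.10  # the external VM hosting the Todo app
    ports:
      - name: web
        port: 8080
---
# Tells the user-workload Prometheus to scrape /metrics
# on the proxied service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: todo-app
  namespace: external-service
spec:
  selector:
    matchLabels:
      app: todo-app
  endpoints:
    - port: web
      path: /metrics
```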
Now you can monitor the external metrics in the embedded dashboard.
First, trigger some load via the ab tool:
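A hypothetical ab invocation could look like this; the request path and load numbers are placeholders, so point it at whichever endpoint of your app drives the metrics you want to see:

```shell
# Apache Bench: 1000 requests, 10 concurrent, against the external VM
ab -n 1000 -c 10 http://192.168.2.10:8080/
```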
Then search the metrics in the OpenShift integrated UI:
Create Grafana instance to display external service metrics
Wouldn't it be great if we could view our user-defined metrics in Grafana?
Although OpenShift installs Grafana out of the box, it is not meant to be used by user-defined workloads. Currently there is no way to edit or add a user-defined dashboard, or to create more user accounts, in the default Grafana instance.
So let me show you how to install a user-defined Grafana instance to integrate our own metrics.
1. Install the Grafana Operator into the my-grafana namespace:
oc new-project my-grafana
2. Create my-grafana instance to host my metrics dashboards
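A rough sketch of the two resources involved is below: a Grafana CR and a GrafanaDataSource pointing at the Thanos Querier. The API group/version and field names follow the community grafana-operator v1alpha1 CRDs, which may differ in your operator version, and the bearer token placeholder must be replaced with a real service account token that is allowed to query Thanos; see the repo for the working manifests.

```yaml
apiVersion: integreatly.org/v1alpha1
kind: Grafana
metadata:
  name: my-grafana
  namespace: my-grafana
spec:
  config:
    security:
      admin_user: admin
      admin_password: admin
  ingress:
    enabled: true
---
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: thanos-querier
  namespace: my-grafana
spec:
  name: thanos-querier.yaml
  datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      # Thanos Querier aggregates platform and user-defined metrics
      url: https://thanos-querier.openshift-monitoring.svc:9091
      isDefault: true
      jsonData:
        httpHeaderName1: Authorization
        tlsSkipVerify: true
      secureJsonData:
        httpHeaderValue1: "Bearer <serviceaccount-token>"  # placeholder
```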
Now you can access the my-grafana instance at: https://my-grafana.apps-crc.testing/
Log in with admin/admin.
Add my todo-app Quarkus MicroProfile Metrics dashboard.
To find more information on how to generate MicroProfile Metrics dashboards for your application, I recommend checking: https://github.com/jamesfalkner/microprofile-grafana
Here we go:
(I also triggered a load test in the background to populate the metrics.)
What's more, you can monitor external infrastructure metrics by utilizing the node_exporter process.
(Optional) Utilize node exporter to monitor external infra metrics
Please follow the official Prometheus documentation to install node_exporter on your VM or bare metal machine; you would then have the metrics available at, for example in my case:
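For reference, a minimal manual install on the VM might look like the following; the release version is just an example (pick a current one), and node_exporter listens on port 9100 by default, so in my case the metrics end up at http://192.168.2.10:9100/metrics:

```shell
# Download and unpack a node_exporter release
curl -LO https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
tar xzf node_exporter-1.3.1.linux-amd64.tar.gz
cd node_exporter-1.3.1.linux-amd64

# Run it in the background (for production, wrap it in a systemd unit)
./node_exporter &

# Verify the metrics endpoint on the default port 9100
curl http://localhost:9100/metrics | grep node_cpu
```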
Repeat the steps above, or take a look at the YAML resources in my GitHub repo:
You will then have the external infrastructure metrics managed by the user-defined monitoring components.
Here is the node_exporter dashboard I used, shown in the following graph.
OpenShift monitoring components are easy to install and operate. I hope this article has usefully shown that we can integrate not only the metrics of container-based workloads inside OpenShift, but also external service and infrastructure metrics from outside OpenShift.