How to add Distributed Tracing to a Quarkus application with Jaeger

If you run your applications on a microservices architecture, you might want to trace which services are being called, how often, and which ones have performance issues. In this post we will see how to trace incoming requests by adding Jaeger as the distributed tracing system of an existing Quarkus application.

First, we need to add the smallrye-opentracing extension to our project by running the Quarkus add-extension command:

./mvnw quarkus:add-extension -Dextensions="smallrye-opentracing"

If all went well, you should see a success message in your terminal telling you that the extension has been installed. The command also adds the corresponding Maven dependency to your pom.xml, so you may only need to reload the project in your IDE.
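Assuming you are on a Quarkus version that still ships the OpenTracing extension, the dependency added to your pom.xml should look roughly like this:

```xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-opentracing</artifactId>
</dependency>
```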

Next, we need to configure our application.properties file to set, among other things, the service name under which our traces will appear in the Jaeger UI.

quarkus.jaeger.service-name=myservice
quarkus.jaeger.sampler-type=const
quarkus.jaeger.sampler-param=1
quarkus.log.console.format=%d{HH:mm:ss} %-5p traceId=%X{traceId}, parentId=%X{parentId}, spanId=%X{spanId}, sampled=%X{sampled} [%c{2.}] (%t) %s%e%n
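The const sampler with a param of 1 reports every single trace, which is convenient for local testing. In production you might prefer the probabilistic sampler; as an illustrative alternative, sampling roughly half of the requests would look like this:

```properties
# hypothetical production-leaning setup: sample ~50% of requests
quarkus.jaeger.sampler-type=probabilistic
quarkus.jaeger.sampler-param=0.5
```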

Before running our Quarkus application, let's run Jaeger in Docker so we can check whether our requests are being traced. The all-in-one image exposes, among others, port 6831/udp (the agent that receives spans), 16686 (the web UI), and 14268 (the collector's HTTP endpoint):

$ docker run -d --name jaeger \
-e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 14250:14250 \
-p 9411:9411 \
jaegertracing/all-in-one:1.22
Start Quarkus and access one of your application endpoints so Quarkus can send the trace of the request to Jaeger:

./mvnw quarkus:dev
To access the Jaeger UI, go to http://your-host:16686 and select the name of your service in the dropdown menu. By default, traces are reported to a Jaeger instance on localhost; if Jaeger runs on a different host, you can point the client at its collector endpoint like this:
quarkus.jaeger.endpoint=http://your-host:14268/api/traces
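Alternatively, if you want to keep reporting through the UDP agent instead of the collector's HTTP endpoint, the Quarkus Jaeger extension also lets you point at a remote agent (the host shown here is a placeholder):

```properties
# report spans to a remote Jaeger agent over UDP
quarkus.jaeger.agent-host-port=your-host:6831
```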
If everything went well, you should be able to select your service name in the Jaeger UI and inspect the traced requests.
In the next post we will add fault tolerance to our Quarkus application!
