
How to create an OkHttp interceptor, and why you need one

An HTTP interceptor can come in handy if you need to log all your requests, or add a bearer token to every request when calling a resource protected by an OAuth 2.0 access token.

To create an interceptor, you build an OkHttpClient with OkHttpClient.Builder and add the interceptor to it:
OkHttpClient okHttpClient = new OkHttpClient.Builder()

You can access the current request by calling chain.request(). Once you have it, you can build on top of it with newBuilder(), adding an extra header to send your bearer token:
Request auth_request = chain.request().newBuilder()
        .addHeader("Authorization", "Bearer " + YOUR_ACCESS_TOKEN).build();

The last step is to return a Response object by calling chain.proceed(), passing it the new request:
return chain.proceed(auth_request);

And the final code, including the imports it needs:
import java.io.IOException;

import okhttp3.Interceptor;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

OkHttpClient okHttpClient = new OkHttpClient.Builder()
    .addInterceptor(new Interceptor() {
        @Override
        public Response intercept(Chain chain) throws IOException {
            Request auth_request = chain.request()
                    .newBuilder()
                    .addHeader("Authorization", "Bearer " + YOUR_ACCESS_TOKEN)
                    .build();
            return chain.proceed(auth_request);
        }
    }).build();
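Since Interceptor has a single abstract method, on Java 8+ (with OkHttp 3.x or later) the anonymous class can be collapsed into a lambda. This is a sketch equivalent to the code above, with YOUR_ACCESS_TOKEN standing in for your real token as before:

```java
// Same interceptor as above, written as a lambda:
// take the current request, add the Authorization header, proceed.
OkHttpClient okHttpClient = new OkHttpClient.Builder()
    .addInterceptor(chain -> chain.proceed(
            chain.request()
                    .newBuilder()
                    .addHeader("Authorization", "Bearer " + YOUR_ACCESS_TOKEN)
                    .build()))
    .build();
```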

Retrofit
If you have been using Retrofit as a wrapper on top of OkHttp to make HTTP requests, you can easily plug in your new interceptor by calling the client method of the Retrofit.Builder like this:
return new Retrofit.Builder()
    .baseUrl(YOUR_BASE_URL)
    .client(okHttpClient)
    .addConverterFactory(GsonConverterFactory.create())
    .build()
    .create(YOUR_RETROFIT_INTERFACE.class);
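For completeness, here is what YOUR_RETROFIT_INTERFACE might look like. The interface name, endpoint path, and UserProfile model are hypothetical, invented for illustration; substitute your own:

```java
import retrofit2.Call;
import retrofit2.http.GET;

public interface ApiService {
    // Hypothetical endpoint; replace the path and model class with your own.
    @GET("users/me")
    Call<UserProfile> getProfile();
}
```

With the interceptor installed on the client, calling getProfile() sends the Authorization header automatically.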

So now, every time you call one of your Retrofit interface methods to make an HTTP request, OkHttp will intercept the call and add the bearer token to the request header.
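The logging use case mentioned at the start works the same way. Here is a minimal sketch of a logging interceptor using only OkHttp's public API (chain.request(), chain.proceed()); the log format is my own:

```java
import java.io.IOException;

import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;

public class LoggingInterceptor implements Interceptor {
    @Override
    public Response intercept(Chain chain) throws IOException {
        Request request = chain.request();
        long startNs = System.nanoTime();
        // Log the outgoing request method and URL.
        System.out.printf("--> %s %s%n", request.method(), request.url());

        Response response = chain.proceed(request);

        // Log the response code and how long the round trip took.
        long elapsedMs = (System.nanoTime() - startNs) / 1_000_000;
        System.out.printf("<-- %d %s (%dms)%n",
                response.code(), response.request().url(), elapsedMs);
        return response;
    }
}
```

Add it with .addInterceptor(new LoggingInterceptor()) just like the auth interceptor above. For production use, OkHttp's companion logging-interceptor artifact (HttpLoggingInterceptor) offers configurable levels out of the box.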
