
How to create REST API pagination in Spring Boot with Spring HATEOAS using MongoDB

Introduction

In this post we are going to see how to add pagination to a REST API in Spring Boot using Spring HATEOAS and Spring Data MongoDB.

For basic queries we can interact with MongoDB through the MongoRepository interface, which is what we are going to use in this tutorial. For more advanced operations like updates and aggregations we can use the MongoTemplate class.

In a Spring application we start by adding the needed dependencies to our pom file, assuming Maven as the build tool. For this project we are going to use the following dependencies: Spring Web, Spring Data MongoDB and Spring HATEOAS. To quickly create your Spring Boot project with all these dependencies you can go to the Spring Initializr web page. This is how your project should look:

As with any Spring MVC application, there are some minimal layers we need to create to make our application accessible: the Controller, Service, Model and Repository layers. For this project we will create the following packages: model, repository, service, controller and exception.

Creating the Model

Inside your model package, add a new Java class for our model called Product.java with the following properties. A Product object will represent a document in our MongoDB database, that is, an item in our Mongo collection.
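A minimal sketch of the model could look like the following. The package name and the name and price properties are assumptions for illustration; your actual fields may differ:

```java
package com.example.pagination.model;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

// Maps this class to the "products" collection in MongoDB
@Document(collection = "products")
public class Product {

    @Id
    private String id;      // MongoDB generates an ObjectId, mapped to a String
    private String name;    // assumed property for this example
    private double price;   // assumed property for this example

    public Product() {
    }

    public Product(String name, double price) {
        this.name = name;
        this.price = price;
    }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public double getPrice() { return price; }
    public void setPrice(double price) { this.price = price; }
}
```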

The @Document annotation is used to set the collection name in MongoDB. When we start our application, a new collection named products will be created if one does not already exist in our MongoDB.

Creating the Repository

Since we already have our model we can now create our repository layer by creating the ProductRepository.java interface with the following code. You can now better understand why we had to create our Product model first: when declaring the repository interface we need to provide the model type and the type used for its ID. In our model we are using the String type for the id property.
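A minimal sketch of the repository, assuming the package layout used earlier in this post:

```java
package com.example.pagination.repository;

import org.springframework.data.repository.PagingAndSortingRepository;

import com.example.pagination.model.Product;

// Product is the model type and String is the type of its id property
public interface ProductRepository extends PagingAndSortingRepository<Product, String> {
}
```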

The PagingAndSortingRepository interface

Since the PagingAndSortingRepository interface already provides a findAll method that receives a Pageable object, we do not need to declare any method at all unless we need some other query.

The interface also has a findAll method that receives a Sort object. Instead of using that method, we will see how to pass a Sort object as a parameter of the PageRequest object.

Creating the Service

To make our application work we still need to create two more Java classes, one for the controller package and one for the service package. Since the service is the one responsible for calling our repository interface, let's create it next. Inside your service package create a new Java class called ProductService.java with the following code.
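A sketch of the service, assuming the package and method names used throughout this post:

```java
package com.example.pagination.service;

import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.stereotype.Service;

import com.example.pagination.model.Product;
import com.example.pagination.repository.ProductRepository;

@Service
public class ProductService {

    private final ProductRepository productRepository;

    public ProductService(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    // PageRequest implements Pageable, so it can be passed straight to findAll
    public Page<Product> getAllProductsPaginated(int page, int size) {
        return productRepository.findAll(PageRequest.of(page, size));
    }
}
```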

Note that since PageRequest implements the Pageable interface, we can pass it to the findAll method from the PagingAndSortingRepository interface. Here we are using the static method of, which requires a page and a size parameter of type int. We can also see that the findAll method returns a Page object. From the Page object we can get the total number of pages and the results for the API request. Let's see how to do that in our controller.

Creating the Controller

The controller is the class that will handle the requests to your endpoint. We can see that our controller has a GetMapping with a products path, so our API will answer at /products.

If you do not need to support HATEOAS you could stop there and use the Page getContent method to return the response to the browser. Since we do want HATEOAS support, we use the PagedModel object to create a representation of our pageable collection.
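A sketch of the controller under the same assumptions. The page and size request parameters and their defaults are illustrative, and the PagedModel is built by hand from the Page metadata:

```java
package com.example.pagination.controller;

import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

import java.util.List;
import java.util.stream.Collectors;

import org.springframework.data.domain.Page;
import org.springframework.hateoas.EntityModel;
import org.springframework.hateoas.PagedModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import com.example.pagination.exception.PageNotFoundException;
import com.example.pagination.model.Product;
import com.example.pagination.service.ProductService;

@RestController
public class ProductController {

    private final ProductService productService;

    public ProductController(ProductService productService) {
        this.productService = productService;
    }

    @GetMapping("/products")
    public PagedModel<EntityModel<Product>> getAllProducts(
            @RequestParam(defaultValue = "0") int page,
            @RequestParam(defaultValue = "10") int size) {

        Page<Product> products = productService.getAllProductsPaginated(page, size);

        // Reject page numbers beyond the last available page
        if (products.getTotalPages() > 0 && page >= products.getTotalPages()) {
            throw new PageNotFoundException("Page " + page + " not found");
        }

        // Wrap each item so Spring HATEOAS can render it with links
        List<EntityModel<Product>> items = products.getContent().stream()
                .map(EntityModel::of)
                .collect(Collectors.toList());

        PagedModel.PageMetadata metadata = new PagedModel.PageMetadata(
                products.getSize(), products.getNumber(),
                products.getTotalElements(), products.getTotalPages());

        return PagedModel.of(items, metadata,
                linkTo(methodOn(ProductController.class)
                        .getAllProducts(page, size)).withSelfRel());
    }
}
```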

Creating the Custom Exception

For our controller class we also need the code for our custom PageNotFoundException class:
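A minimal version of the exception is below. In the real project you would likely also add an @ResponseStatus(HttpStatus.NOT_FOUND) annotation or a @ControllerAdvice handler so Spring maps it to a 404 response; that part is omitted here to keep the sketch small:

```java
// Thrown when a requested page number is beyond the last available page
public class PageNotFoundException extends RuntimeException {

    public PageNotFoundException(String message) {
        super(message);
    }
}
```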

Running the application

Let's first start our local MongoDB using the Docker command below:

docker run --rm --name mongodb -p 27017:27017 -v "C:\Users\carlos\Downloads\pagination\mongodb":/data/db -d mongo

Here we are mounting a local directory into the container's /data/db directory, so the data is saved on the host and survives container restarts.

The last thing we need to do before running our application is to set the MongoDB connection information in our application.yml file.
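A minimal application.yml matching the Docker container started above; the database name pagination is an assumption for this example:

```yaml
spring:
  data:
    mongodb:
      host: localhost
      port: 27017
      database: pagination
```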

Adding data to MongoDB

We are going to use the CommandLineRunner to add data to our local MongoDB.
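A sketch of the seeding code in the main application class. The product names and prices are made up for this example, and it assumes a Spring Data version where PagingAndSortingRepository still exposes the CrudRepository save and deleteAll methods:

```java
package com.example.pagination;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

import com.example.pagination.model.Product;
import com.example.pagination.repository.ProductRepository;

@SpringBootApplication
public class PaginationApplication {

    public static void main(String[] args) {
        SpringApplication.run(PaginationApplication.class, args);
    }

    // Seeds five sample products into MongoDB on startup
    @Bean
    CommandLineRunner seedData(ProductRepository repository) {
        return args -> {
            repository.deleteAll();
            repository.save(new Product("Keyboard", 49.90));
            repository.save(new Product("Mouse", 29.90));
            repository.save(new Product("Monitor", 199.90));
            repository.save(new Product("Headset", 89.90));
            repository.save(new Product("Webcam", 59.90));
        };
    }
}
```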

Now that we have everything ready we can run our Spring Boot application and, as expected, find that our data has been saved to MongoDB.

We can now access our API in the browser by going to the following URL: http://localhost:8080/products?page=0&size=1. Our API returns the response paginated with one item per page, and since we added 5 documents to our collection we can see that we indeed have 5 pages, the first being page 0 and the last being page 4.
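The response should look roughly like the following. The exact shape depends on your Spring HATEOAS version and link relation settings, and the id value here is just a placeholder:

```json
{
  "_embedded": {
    "productList": [
      { "id": "<generated-object-id>", "name": "Keyboard", "price": 49.9 }
    ]
  },
  "_links": {
    "self": { "href": "http://localhost:8080/products?page=0&size=1" }
  },
  "page": { "size": 1, "totalElements": 5, "totalPages": 5, "number": 0 }
}
```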

Conclusion

One last thing before we go! If we also need any kind of sorting, we can pass a Sort object as the third parameter of the PageRequest static method of, as shown in the getAllProductsPaginatedAndSorted method below:
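A sketch of such a method in our ProductService, assuming we sort by the price property of our Product model:

```java
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;

// Same query as before, but sorted by price in descending order;
// the Sort is passed as the third argument to PageRequest.of
public Page<Product> getAllProductsPaginatedAndSorted(int page, int size) {
    return productRepository.findAll(
            PageRequest.of(page, size, Sort.by(Sort.Direction.DESC, "price")));
}
```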

Project Repository
