
A Retrospective board to call your own made with React, GraphQL and MongoDB

Introduction

I have been working on a project that lets anyone fork and run a retrospective board on their own server. It consists of two projects: one for the UI, built with React and GraphQL using Apollo Client, and one for the GraphQL server, built with Apollo Server and backed by a MongoDB database.

In the following sections I am going to walk you through the steps and explain how to run both projects.

Main features

The board's main features:
- History of your past iterations saved in the MongoDB database
- An easy URL format for quick access to any past iteration (see the route sketch after this list)
- Automatic carry-over of action items as last action items in the new board
- Quick move of items to Action Items
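As a rough illustration of how that URL format could be wired up on the React side, here is a minimal React Router sketch. The route path, parameter names, and component are assumptions made for illustration only; the actual repo may organize its routing differently.

// routes sketch (hypothetical, not the repo's actual code)
import React from "react";
import { BrowserRouter, Routes, Route, useParams } from "react-router-dom";

// Placeholder board component that reads the team and retro number from the URL
function Board() {
  const { team, retro } = useParams();
  return <h1>Retro #{retro} for team {team}</h1>;
}

export default function App() {
  return (
    <BrowserRouter>
      <Routes>
        {/* Matches the board/team/retro# pattern, e.g. /board/platform/12 */}
        <Route path="/board/:team/:retro" element={<Board />} />
      </Routes>
    </BrowserRouter>
  );
}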

Coming next

Features planned for upcoming releases:
- Support for multiple teams
- Authentication
- Subscriptions (pub/sub)

Code Repo

You can fork the React UI from the following GitHub repo:
React Retrospective UI
You can fork the GraphQL Server from the following GitHub repo:
Retrospective GraphQL Server

Running the React UI

Let's start by cloning the forked React UI repo:
git clone https://github.com/cjafet/react-graphql-retro-board.git
Now change to the cloned directory and run the following command to install all project dependencies:
npm i
With all the project dependencies installed, open the project in Visual Studio Code by running the following command in the terminal:
code .
Run the following npm command to start and open the React board in the browser:
npm run start

You should see the following image, since we don't have any boards yet and the GraphQL server is not running:
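For context, the UI talks to the GraphQL server through an Apollo Client instance. The snippet below is a minimal sketch of that wiring; the endpoint URL (http://localhost:4000/graphql) and file layout are assumptions, so check the repo for the actual configuration.

// index.js sketch — Apollo Client setup (endpoint URL is an assumption)
import React from "react";
import ReactDOM from "react-dom/client";
import { ApolloClient, InMemoryCache, ApolloProvider } from "@apollo/client";
import App from "./App";

const client = new ApolloClient({
  uri: "http://localhost:4000/graphql", // assumed local address of the GraphQL server
  cache: new InMemoryCache(),
});

// Make the client available to every component via React context
ReactDOM.createRoot(document.getElementById("root")).render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>
);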

Running the GraphQL Server

Next, let's clone and start the GraphQL server so we can create our first retrospective board!
git clone https://github.com/cjafet/apollo-graphql-server
Now change to the cloned directory and run the following command to install all project dependencies:
npm i
Run the following npm command to start the GraphQL server:
npm run start
With our GraphQL server running, we can now create our first retrospective board!
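If you are curious what the server roughly looks like, the sketch below wires an Apollo Server to a MongoDB connection. The schema, resolver, database name, and connection string are simplified assumptions for illustration; refer to the apollo-graphql-server repo for the real implementation.

// server.js sketch — Apollo Server backed by MongoDB (simplified, not the repo's code)
const { ApolloServer, gql } = require("apollo-server");
const { MongoClient } = require("mongodb");

const typeDefs = gql`
  type Retro {
    team: String
    iteration: Int
  }
  type Query {
    retros(team: String!): [Retro]
  }
`;

async function start() {
  // Connection string is an assumption; point it at your local or Atlas instance
  const mongo = await MongoClient.connect("mongodb://localhost:27017");
  const db = mongo.db("retroboard"); // assumed database name

  const resolvers = {
    Query: {
      // Returns all retrospectives stored for a given team
      retros: (_, { team }) => db.collection("retros").find({ team }).toArray(),
    },
  };

  const server = new ApolloServer({ typeDefs, resolvers });
  const { url } = await server.listen();
  console.log(`GraphQL server ready at ${url}`);
}

start();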

Creating a new Retrospective

When doing this for the first time, the UI will show a text input so you can set your team name.
Enter the values for your team name and your iteration number.
Click on the link to access your new retrospective board! You should see an image similar to this one!
A retrospective board with the items from the last sprint should look like this!
When the next retrospective board is created, all the action items from the last sprint will be loaded as the last action items in your current sprint:
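To give an idea of how the board could load this data, here is a hypothetical query the UI might issue with Apollo's useQuery hook. The query name, fields, and component props are assumptions for illustration and will likely differ from the real schema.

// Board.js sketch — fetching a retrospective with useQuery (names are assumptions)
import React from "react";
import { gql, useQuery } from "@apollo/client";

const GET_RETRO = gql`
  query GetRetro($team: String!, $retro: Int!) {
    retro(team: $team, iteration: $retro) {
      wentWell
      toImprove
      actionItems
      lastActionItems
    }
  }
`;

export default function Board({ team, retro }) {
  const { loading, error, data } = useQuery(GET_RETRO, {
    variables: { team, retro },
  });

  if (loading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong.</p>;

  // lastActionItems would hold the action items carried over from the previous sprint
  return <pre>{JSON.stringify(data.retro, null, 2)}</pre>;
}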

Conclusion

You can fork the react-graphql-retro-board and apollo-graphql-server projects mentioned at the beginning of this article to keep track of all of your team's retrospective boards. The "My Retros" menu provides easy access to all of the retrospectives saved in MongoDB, and you can also reach any retrospective number directly by following the board/team/retro# URI pattern. You can configure the application to use a cloud version of MongoDB by going to the MongoDB Atlas website and registering for a free trial.
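Switching to Atlas usually comes down to changing the connection string. Below is a minimal sketch, assuming the server reads the URI from an environment variable; the variable name, user, and cluster host are placeholders, not values from the repo.

// db.js sketch — local MongoDB vs. Atlas via an environment variable (all values are placeholders)
const { MongoClient } = require("mongodb");

// e.g. MONGODB_URI="mongodb+srv://<user>:<password>@<cluster>.mongodb.net/retroboard"
const uri = process.env.MONGODB_URI || "mongodb://localhost:27017/retroboard";

async function connect() {
  const client = await MongoClient.connect(uri);
  return client.db(); // uses the database named in the connection string
}

module.exports = { connect };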

Github project

React Retrospective UI
Retrospective GraphQL Server

References

GraphQL | A query language for your API
Apollo GraphQL
MongoDB
MongoDB Atlas Database
