What is the Elastic Stack and how to post data to an ElasticSearch DB in Amazon ES Service

Amazon ES Service is a fully managed service that makes it easy to deploy the Elastic Stack on AWS servers in an integrated way. Some features, like installing Kibana plugins, are not yet available.

ElasticSearch is part of the Elastic Stack, a group of tools/services from the company Elastic (elastic.co).

Elastic Stack:
* Kibana
* ElasticSearch
* Beats
* Logstash

ElasticSearch is a NoSQL document database and is most commonly used with Kibana, a UI tool for visualizing data from the ES database. ElasticSearch is used for high-speed text search.

It was previously known as the ELK Stack, after the tools/services from Elastic that are used together:
* ElasticSearch
* Logstash
* Kibana

In an ElasticSearch DB you post data to an index in the same way you insert data into tables in an RDBMS (Relational Database Management System). We use indexes to separate and group different types of information (data) in the same way we use tables in a database.

SO INDEXES ARE FOR ES DBs WHAT TABLES ARE FOR AN RDBMS.

To POST data to an ES DB we need to construct our URL in the following format:
url = host/index/type
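
For example, a minimal sketch of building that URL in Node.js, assuming a hypothetical domain endpoint and a hypothetical 'products' index:

const host = 'https://my-domain.us-east-1.es.amazonaws.com'; // hypothetical ES endpoint
const index = 'products'; // groups documents, like a table in an RDBMS
const type = 'doc';       // the document type
const url = `${host}/${index}/${type}`;
// => https://my-domain.us-east-1.es.amazonaws.com/products/doc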

We also need to set the AWS credentials for the request: the AWS service name, the region, and our access key and secret key.
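
A minimal sketch of signing a request with the aws4 library (the post does not name a signing library, so this choice is an assumption; the host, region, index and body values are hypothetical):

const aws4 = require('aws4');

// describe the request to be signed; values here are placeholders
const request = {
    host: 'endpoint.region.es.amazonaws.com',
    path: '/nameof-your-index/doc-type',
    service: 'es',       // the AWS service identifier for Amazon ES
    region: 'us-east-1', // hypothetical region
    method: 'POST',
    body: JSON.stringify({ name: 'test' }),
    headers: { 'Content-Type': 'application/json' }
};

// sign() adds the Authorization and X-Amz-Date headers derived from our keys
aws4.sign(request, {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
});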

To save data in AWS ES Service you need to send a POST request to your ES endpoint domain. One great thing about a NoSQL database is the ability to send JSON objects to the engine while the database makes the properties of our object searchable.
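
For example, the product_obj used in the request below could be a plain JSON document like this (a hypothetical product; each property becomes a searchable field):

// a hypothetical document to be indexed
const product_obj = {
    name: 'Wireless Mouse',
    price: 29.99,
    description: 'Ergonomic wireless mouse with a USB receiver'
};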

Example of a POST request made with node-fetch:

const fetch = require('node-fetch');

fetch('https://endpoint.region.es.amazonaws.com:443/nameof-your-index/doc-type', {
        method: 'POST',
        body: JSON.stringify(product_obj), // the body must be a serialized JSON string
        headers: { 'Content-Type': 'application/json' }
    })
    .then(res => res.json())
    .then(json => console.log(json)) // the indexing response from ElasticSearch
    .catch(err => {
        console.log(err);
    });
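
Once the document is indexed, its properties can be searched. As a rough sketch against the same hypothetical endpoint (assuming the domain's access policy allows unsigned reads), a URI search on the name field would look like:

fetch('https://endpoint.region.es.amazonaws.com:443/nameof-your-index/_search?q=name:wireless')
    .then(res => res.json())
    .then(json => console.log(json.hits.hits)); // the matching documents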
You can download the complete source code from GitHub.
