
Getting to know the Lambda Event Object

When handling any kind of HTTP request, be it through a simple web server or an API Gateway, we usually need to log different kinds of information about the received request. But how can we do that with Lambda, and how can we access all the information available about the request?

That's where the event object comes in. API Gateway encapsulates all the information about the request in the event object and passes it as the first argument to our Lambda function, like this:


exports.handler = (event, context, callback) => {

};

The event object has the following top-level properties, as shown below:

  1. resource
  2. path
  3. httpMethod
  4. headers
  5. multiValueHeaders
  6. queryStringParameters
  7. multiValueQueryStringParameters
  8. pathParameters
  9. stageVariables
  10. requestContext
  11. body
  12. isBase64Encoded

{
   "resource": "/",
   "path": "/",
   "httpMethod": "GET",
   "headers": {},
   "multiValueHeaders":{
      "accept-encoding":[],
      "cookie":[],
      "Host":[],
      "User-Agent":[],
      "X-Amzn-Trace-Id":[],
      "X-Forwarded-For":[],
      "X-Forwarded-Port":[],
      "X-Forwarded-Proto":[],
      "x-real-ip":[]
   },
   "queryStringParameters":{},
   "multiValueQueryStringParameters":{},
   "pathParameters":null,
   "stageVariables":null,
   "requestContext":{
      "resourceId":"",
      "resourcePath":"",
      "httpMethod":"",
      "extendedRequestId":"",
      "requestTime":"",
      "path":"",
      "accountId":"",
      "protocol":"",
      "stage":"",
      "requestTimeEpoch":,
      "requestId":"",
      "identity":{
         "cognitoIdentityPoolId":null,
         "accountId":null,
         "cognitoIdentityId":null,
         "caller":null,
         "sourceIp":"",
         "accessKey":null,
         "cognitoAuthenticationType":null,
         "cognitoAuthenticationProvider":null,
         "userArn":null,
         "userAgent":"",
         "user":null
      }
   },
   "body": null,
   "isBase64Encoded": false
}
