Getting to know the S3 Event Object

In this post I am going to show you how you can read the content of the file that triggered your Lambda.

As we have seen in the previous post, the event object is passed as an argument to our Lambda function, like this:


exports.handler = (event, context, callback) => {

};
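If you want to see exactly what S3 sends you, a quick way is to log the whole event and check the function's CloudWatch Logs. A minimal sketch (the callback response value here is just a placeholder):

exports.handler = (event, context, callback) => {
  // Log the raw event so it shows up in CloudWatch Logs
  console.log(JSON.stringify(event, null, 2));
  callback(null, 'done');
};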

When you set up a trigger on a Lambda function based on an S3 event, S3 passes all the context information you need inside the event object. In this case, our Lambda will be triggered by an S3 event, and the event object passed to it will have a Records property with an array of objects inside of it, as shown below:

{
   "Records":[
      {
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"us-east-1",
         "eventTime":"2018-09-22T14:25:20.411Z",
         "eventName":"ObjectCreated:Put",
         "userIdentity":{
            "principalId":""
         },
         "requestParameters":{
            "sourceIPAddress":""
         },
         "responseElements":{
            "x-amz-request-id":"",
            "x-amz-id-2":""
         },
         "s3":{
            "s3SchemaVersion":"1.0",
            "configurationId":"",
            "bucket":{
               "name":"",
               "ownerIdentity":{
                  "principalId":""
               },
               "arn":""
            },
            "object":{
               "key":"",
               "size":226,
               "eTag":"",
               "versionId":"",
               "sequencer":""
            }
         }
      }
   ]
}
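Note that a single notification can carry more than one record, and the object key arrives URL-encoded (a space in the file name shows up as a +, for example). Here is a small sketch of how you could walk through the records and decode the key before using it; the decodeURIComponent step is my own addition, not something the event does for you:

event.Records.forEach(function(record) {
  var bucket = record.s3.bucket.name;
  // The key is URL-encoded in the notification, so decode it first
  var key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
  console.log(record.eventName + ' on s3://' + bucket + '/' + key);
});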

You will want to pay special attention to the s3 property here. It contains an object that has two important properties for us: the bucket and the object. Using that information, that is, the bucket name and the object key, we can read the content of our S3 file like this:


// This assumes the AWS SDK v2 that ships with the Node.js Lambda runtime
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Bucket name and object key taken from the first event record
var bucket = event.Records[0].s3.bucket.name;
var file = event.Records[0].s3.object.key;

var params = {
  Bucket: bucket,
  Key: file
};

s3.getObject(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else     console.log(JSON.parse(data.Body.toString('utf-8')));
});

This call returns a data object whose Body property holds the content of your file, assuming you have a JSON object in it. The JSON.parse() here is used to convert it back into an object instead of keeping it as a string.
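Putting it all together, here is a minimal complete handler sketch, assuming the AWS SDK v2 that is bundled with the Node.js Lambda runtime and assuming the uploaded file really contains JSON:

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
  var bucket = event.Records[0].s3.bucket.name;
  // Decode the key in case the file name contains spaces or special characters
  var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  s3.getObject({ Bucket: bucket, Key: key }, function(err, data) {
    if (err) {
      console.log(err, err.stack);
      return callback(err);
    }
    // data.Body is a Buffer with the file content
    var content = JSON.parse(data.Body.toString('utf-8'));
    console.log(content);
    callback(null, content);
  });
};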
