
Wednesday, October 23, 2019

Upload an Excel/CSV to S3 and process it via Lambda for a DB insert

create an S3 bucket
create a Lambda function and add S3 as its trigger
when you create the Lambda a role is created for you; attach additional policies to it for DynamoDB and S3 access (see the sketch below)
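If you want to do that policy step from code, here is a minimal boto3 sketch (the role name below is a placeholder; the AWS managed policies grant broad access, so narrow them for real use):
=============================
import boto3

iam = boto3.client('iam')
role_name = 'my-csv-lambda-role'  # placeholder: the role that was created for the Lambda

# attach AWS managed policies for S3 read and DynamoDB access
for policy_arn in [
    'arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess',
    'arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess',
]:
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)
=============================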
Now add the following code as the Lambda's lambda_handler function:
========================================
import json
import boto3

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')

def lambda_handler(event, context):
    # figure out which object triggered the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    # read the whole CSV from S3 and split it into lines
    obj = s3.get_object(Bucket=bucket, Key=key)
    rows = obj['Body'].read().decode('utf-8').split('\n')

    table = dynamodb.Table('entity')
    print(len(rows))

    # skip the header row and any blank line (e.g. from a trailing newline)
    with table.batch_writer() as batch:
        for row in rows[1:]:
            if not row.strip():
                continue
            cols = row.split(',')
            batch.put_item(Item={
                'name': cols[0],
                'address': cols[1]
            })

    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

===================================
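For reference, the S3 trigger hands the handler an event shaped roughly like this (trimmed to just the fields read above; the bucket and key values are placeholders):
=============================
event = {
    'Records': [{
        's3': {
            'bucket': {'name': 'my-csv-bucket'},  # placeholder bucket name
            'object': {'key': 'sample.csv'}       # placeholder object key
        }
    }]
}
=============================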
Make sure you have a DynamoDB table named entity whose partition key matches one of the fields (e.g. name); other attributes such as address are created automatically when the items are written, so you can add as many fields as you want.
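If the table does not exist yet, here is a minimal creation sketch with boto3, assuming name is the partition key:
=============================
import boto3

dynamodb = boto3.resource('dynamodb')

# assumption: 'name' is the partition key; other attributes (address, city, ...)
# are created automatically when items are written
table = dynamodb.create_table(
    TableName='entity',
    KeySchema=[{'AttributeName': 'name', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'name', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST'
)
table.wait_until_exists()
=============================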

Now upload the CSV to S3 and you should see the Lambda process it and insert the rows into DynamoDB.
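You can upload from the console, the CLI, or a couple of lines of boto3 (the bucket and file names below are placeholders; the upload is what fires the S3 trigger):
=============================
import boto3

s3 = boto3.client('s3')
# placeholder file and bucket names; uploading the object triggers the Lambda
s3.upload_file('sample.csv', 'my-csv-bucket', 'sample.csv')
=============================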

=============================
Sample csv
name,address,city
target,23230,austin
walmart,77707,houston
macy,80808,dallas


The first row (the header) will be skipped by the for loop in the Lambda.
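To double-check the inserts, a quick scan of the table is enough at this size:
=============================
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('entity')

# print every item the Lambda wrote
for item in table.scan()['Items']:
    print(item)
=============================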

Friday, October 18, 2019

Connect to Postgres on AWS problem

Create a new RDS instance, but make sure you go into the advanced options
and create a starter database; otherwise you won't be able to connect to
Postgres and will get errors.

So an initial DB has to be there if you are connecting via SQL Workbench,
Eclipse, or psql.
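The same applies from code; here is a minimal sketch with psycopg2, where the endpoint, database name, user, and password are all placeholders, just to show where the starter database name goes:
=============================
import psycopg2

# placeholders: use your RDS endpoint, starter database, master user and password
conn = psycopg2.connect(
    host='mydb.xxxxxxxx.us-east-1.rds.amazonaws.com',
    port=5432,
    dbname='startdb',        # the initial/starter database created with the instance
    user='postgres',
    password='yourpassword'
)

cur = conn.cursor()
cur.execute('SELECT version()')
print(cur.fetchone())
conn.close()
=============================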

Connect to EC2 via Windows

Download the .pem file from AWS and convert it to a .ppk via PuTTYgen.
Create a new PuTTY session and, in the Auth tab, browse to the .ppk.
Enter the user name as ec2-user and the host as the public DNS or IP of the EC2 instance.

Tuesday, October 1, 2019

How to run Ubuntu 18 on SSD and Windows 10 on HDD using dual boot

Get two pen drives and make one of them a recovery disk for Windows.
Just search for "recovery drive" in Windows and it will help you create
one; it might take 2 hours.
Now your backup plan is ready in case things go south.

Download the ISO from the Ubuntu site and burn it onto the other pen drive using Rufus.
Change the boot order in the BIOS to boot from the pen drive first and
move the HDD after that.

Restart the PC, let it boot from USB, and select the install option.
Navigate to the page where it asks whether you want to install alongside
Windows, erase the disk, or select something else.

You need to select "Something else". Assuming your SSD is 256 GB and
your RAM is 8 GB, identify your SSD in the disk section and make two partitions:

swap, primary, at the end, 16 GB (twice the RAM)
ext4, primary, at the front, the left-over ~230 GB, mounted at /

e.g. /dev/sdb5  230 GB  /  on WD256DIAXX

MOST important: make sure the boot loader device (the option at the bottom) is set to
the master HDD where the Windows MBR resides so that
GRUB is updated correctly; if you mess up this option, Ubuntu won't be
added to GRUB.

Now just click Next through the rest; it will install and reboot, and it will work fine.
Ubuntu will boot from the SSD via a GRUB entry, and Windows will boot
from GRUB on /dev/sda.

It worked for me!