
Viewing posts from June, 2017

Adding IPv6 Addresses to an Existing EC2 Instance and VPC on AWS

Apple now requires iOS app developers to upload builds whose backend APIs work on IPv6-only networks, so sooner or later you will have to add IPv6 support.

Here is a guide to setting up IPv6 on AWS for an Ubuntu instance.

1. Go to the VPC console and select your existing VPC:
Actions > Edit CIDRs > in the "VPC IPv6 CIDRs" block, Associate auto IPv6 CIDR Block > Update
2. Go to VPC console > Subnets > select your subnets one by one and do the following for each (all of them, or just the ones you need):
Actions > Edit IPv6 CIDRs > Associate auto IPv6 CIDR Block > Update
Actions > Modify Auto Assign IP Settings > check both IPv4 and IPv6

3. Go to VPC Console > Route Tables:
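If you prefer the command line, the console steps above map to AWS CLI calls. This is a sketch only: the `vpc-xxxx`, `subnet-xxxx`, `rtb-xxxx`, and `igw-xxxx` IDs and the example /64 block are placeholders you would replace with your own values.

```shell
# Step 1: associate an Amazon-provided IPv6 CIDR block with the VPC.
aws ec2 associate-vpc-cidr-block --vpc-id vpc-xxxx \
    --amazon-provided-ipv6-cidr-block

# Step 2: give each subnet a /64 carved out of the VPC's IPv6 block...
aws ec2 associate-subnet-cidr-block --subnet-id subnet-xxxx \
    --ipv6-cidr-block 2600:1f16:xxxx:xx00::/64

# ...and auto-assign IPv6 addresses to new instances in that subnet.
aws ec2 modify-subnet-attribute --subnet-id subnet-xxxx \
    --assign-ipv6-address-on-creation

# Step 3 (route tables) typically means adding a default IPv6 route
# (::/0) pointing at your internet gateway:
aws ec2 create-route --route-table-id rtb-xxxx \
    --destination-ipv6-cidr-block ::/0 --gateway-id igw-xxxx
```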

Bash Shell Script to backup RDS/EC2 PostgreSQL DB and upload to S3 weekly

#!/bin/bash
# Run as sudo. Weekly backup of a PostgreSQL DB, uploaded to an S3 bucket.
DBHOME="/home/priyank/crontabs/dbbackups"
BUCKETNAME="yourAWSbucket"
SCRIPTNAME="$(basename "$BASH_SOURCE")"
SCRIPTFULLPATH="$(pwd)/$SCRIPTNAME"
mkdir -p "$DBHOME"
chown -R postgres:postgres "$DBHOME"
# Keep a copy of this script alongside the backups for cron to run.
cp "$SCRIPTFULLPATH" "$DBHOME"
# One dump file per weekday (0-6): a rolling seven-day rotation.
SCHEMA_BACKUP="$DBHOME/$(date +%w).sql"
sudo -u postgres touch "$SCHEMA_BACKUP"
# Dump the database; -f truncates and overwrites last week's file.
sudo -u postgres PGPASSWORD="yourPGpassword" pg_dump -h localhost -p 5432 -U postgres -F p -b -v --column-inserts --data-only -f "$SCHEMA_BACKUP" "yourDBname"
CRONPATH="$DBHOME/$SCRIPTNAME"
chmod +x "$CRONPATH"
# Register the script in the crontab (23:00 daily; each weekday-named
# file is therefore refreshed weekly) if it is not already there.
FLAGCHK=0
crontab -l | grep -q "$SCRIPTNAME" && FLAGCHK=1 || (crontab -l | { cat; echo "00 23 * * * $CRONPATH"; } | crontab -)
if [ $FLAGCHK -eq 0 ]
then
    # First run only: install and configure s3cmd interactively.
    apt-get install -y s3cmd
    s3cmd --configure
fi
s3cmd put "$SCHEMA_BACKUP" "s3://$BUCKETNAME/dbbackups/"
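The `$(date +%w)` filename trick above is what makes the rotation work: it needs no cleanup step because each run simply overwrites the file from the same weekday of the previous week. A quick way to see what it produces:

```shell
# "date +%w" prints the day of the week as a single digit,
# 0 = Sunday through 6 = Saturday, so the script only ever
# creates seven files: 0.sql .. 6.sql.
dow="$(date +%w)"
echo "Today's backup file would be: ${dow}.sql"
```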

Bash Script to backup RDS/EC2 MySQL DB and upload to S3 weekly

You may come across the task of writing a cron job that backs up a database every day/week/month and uploads it to AWS S3.
Here is a shell script to do that job. Make sure to replace the bucket name and credentials with your own.

#!/bin/bash
# Run as sudo. Weekly backup of a MySQL DB, uploaded to an S3 bucket.
DBHOME="/home/ubuntu/priyank/crontabs/dbbackups"
BUCKETNAME="yourAWSbucket"
SCRIPTNAME="$(basename "$BASH_SOURCE")"
SCRIPTFULLPATH="$(pwd)/$SCRIPTNAME"
mkdir -p "$DBHOME"
chown -R ubuntu:ubuntu "$DBHOME"
# Keep a copy of this script alongside the backups for cron to run.
cp "$SCRIPTFULLPATH" "$DBHOME"
# One dump file per weekday (0-6): a rolling seven-day rotation.
SCHEMA_BACKUP="$DBHOME/$(date +%w).gzip"
sudo -u ubuntu touch "$SCHEMA_BACKUP"
# Dump and compress; the redirect overwrites last week's file.
# Replace the <...> placeholders with your own connection details.
sudo -u ubuntu mysqldump -P <yourDBport> -h <yourDBHost> -u <yourDBUser> -p<yourDBpassword> --force --opt --databases <yourDBName> | gzip -c > "$SCHEMA_BACKUP"
CRONPATH="$DBHOME/$SCRIPTNAME"
chmod +x "$CRONPATH"
# Register the script in the crontab (23:00 daily; each weekday-named
# file is therefore refreshed weekly) if it is not already there.
FLAGCHK=0
crontab -l | grep -q "$SCRIPTNAME" && FLAGCHK=1 || (crontab -l | { cat; echo "00 23 * * * $CRONPATH"; } | crontab -)
if [ $FLAGCHK -eq 0 ]
then
    # First run only: install and configure s3cmd interactively.
    apt-get install -y s3cmd
    s3cmd --configure
fi
s3cmd put "$SCHEMA_BACKUP" "s3://$BUCKETNAME/dbbackups/"
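Restoring one of these dumps is just the pipeline in reverse: decompress and feed the SQL back into `mysql`. The connection placeholders below are the same assumptions as in the script; the compress/decompress round trip itself can be checked locally without a database.

```shell
# Restore sketch (placeholders as in the backup script):
#   gunzip -c "$SCHEMA_BACKUP" | mysql -P <yourDBport> -h <yourDBHost> \
#       -u <yourDBUser> -p<yourDBpassword>

# The gzip round trip the backup relies on, verified with a toy file:
echo "SELECT 1;" | gzip -c > /tmp/demo.sql.gz
gunzip -c /tmp/demo.sql.gz
```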

Aurora Triggers to Call AWS lambda function

Recently I needed to call my Lambda function whenever CRUD operations happened on an Aurora DB table. AWS Aurora supports accessing other AWS services.
So if you want to integrate such an architecture, you can follow this step-by-step guide to make it work.

1) Create an IAM role with RDS and Lambda full access, with "rds.amazonaws.com" as the trusted principal. ( arn:aws:iam::<account_id>:role/RDS-Lambda-Access )
2) Edit the Aurora cluster parameter group and assign the ARN of the role from 1).
3) Edit the Aurora cluster and, under `Manage IAM Roles`, also attach the role created in 1).
4) Reboot the Aurora instance.
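Once the role is attached, a table trigger can invoke the function through Aurora MySQL's built-in `mysql.lambda_async` procedure. A hypothetical sketch, assuming an Aurora MySQL cluster, a table named `orders` with an integer `id` column, and placeholder ARN/endpoint values (all of these names are illustrative, not from the guide above):

```shell
# Create the trigger against the Aurora cluster endpoint. The table,
# function name, region, and credentials are all placeholders.
mysql -h <yourClusterEndpoint> -u <yourDBUser> -p<yourDBpassword> <yourDBName> <<'SQL'
DELIMITER ;;
CREATE TRIGGER orders_after_insert AFTER INSERT ON orders
FOR EACH ROW
BEGIN
  -- Fire-and-forget: invoke the Lambda asynchronously on every insert.
  CALL mysql.lambda_async(
    'arn:aws:lambda:<region>:<account_id>:function:<yourFunction>',
    CONCAT('{"order_id": ', NEW.id, '}')
  );
END;;
DELIMITER ;
SQL
```

`lambda_async` returns immediately, so slow Lambda invocations won't block the insert; use `lambda_sync` instead if the transaction should wait for the result.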