Search This Blog

Saturday 29 August 2020

To prevent CloudFront from caching certain files

Resolution

To prevent CloudFront from caching certain files, use one of the following configurations:

Configuration on the origin

Note: Be sure to update your CloudFront distribution's cache behavior to set Object Caching as Use Origin Cache Headers.

On your custom origin web server application, add Cache-Control no-cache, no-store, or private directives to the objects that you don't want CloudFront to cache. Or, add Expires directives to the objects that you don't want CloudFront to cache.

If you're using Amazon Simple Storage Service (Amazon S3) as the origin, you can add certain Cache-Control headers using object metadata.
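For an S3 origin, for example, the header can be set by copying the object onto itself with replaced metadata. The bucket, key, and content type below are placeholders for illustration:

aws s3 cp s3://my-bucket/index.html s3://my-bucket/index.html --metadata-directive REPLACE --cache-control "no-cache, no-store, private" --content-type text/html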

Configuration on the distribution

  1. Open the CloudFront console.
  2. From your list of CloudFront distributions, choose the distribution that you want to modify.
  3. Choose the Behaviors tab.
  4. If you already have a cache behavior for the objects that you don't want CloudFront to cache, select the cache behavior and choose Edit. To create a new cache behavior, choose Create Behavior.
  5. In the cache behavior's settings, enter the following to prevent caching:

    For Object Caching, select Customize.

    For Minimum TTL, enter 0.

    For Maximum TTL, enter 0.

  6. Choose Create to save the changes that you made.

Note: If you aren't using an Amazon S3 bucket as your origin, you can set a specific cache behavior to forward all headers to the origin. To do this, update the cache behavior to set Cache Based on Selected Request Headers to All.

Lambda function for a weekly commit to GitHub (Python code)

Resolution

import github
import time

from github import Github


def lambda_handler(event, context):
    # Authenticate against the configured GitHub API endpoint with a personal access token
    g = Github(base_url="https://github.com/api/v3",
               login_or_token="*********************************")

    # Fetch the file to update from the target repository
    repo = g.get_user().get_repo("company-search-weekly-commit-repo")
    contents = repo.get_contents("weekly-commit")

    # Overwrite the file so that a new commit lands on the master branch
    repo.update_file(contents.path, "committed file on " + time.asctime(),
                     "committed on " + time.asctime(), contents.sha, branch="master")

    return context.aws_request_id
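To actually make the commit weekly, the function can be invoked on a schedule, for example with an EventBridge (CloudWatch Events) rule. The rule name, schedule, and ARNs below are placeholders, not values from the original note:

aws events put-rule --name weekly-github-commit --schedule-expression "cron(0 12 ? * MON *)"

aws lambda add-permission --function-name weekly-commit --statement-id weekly-github-commit --action lambda:InvokeFunction --principal events.amazonaws.com --source-arn arn:aws:events:us-east-1:123456789012:rule/weekly-github-commit

aws events put-targets --rule weekly-github-commit --targets Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:weekly-commit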

Installing Google Chrome in RHEL 7

Resolution

https://access.redhat.com/discussions/917293

wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm

yum -y install redhat-lsb libXScrnSaver

yum -y localinstall google-chrome-stable_current_x86_64.rpm
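To confirm the install afterwards (a quick check, not part of the original notes):

google-chrome --version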

Set up approvals in GitHub

Resolution

In this section, you set up a rule on GitHub that requires at least one reviewer to approve a pull request before it can be merged into the master branch. You set up the rule and then verify that it works by pushing up a fix to the typing error that Mara made earlier.

Add the rule

  1. In GitHub, go to your XXXXXXXX project repository.
  2. Select the Settings tab near the top of the page.
  3. On the menu, select Branches.
  4. Make sure that master is selected as your default branch.
  5. Select Add rule.
  6. Under Branch name pattern, enter master.
  7. Select the Require pull request reviews before merging check box.
  8. Keep the Required approving reviews value at 1.
  9. To create the rule in the master branch, select Create.
  10. Select Save changes.

S3 point in time restore

Resolution

https://github.com/madisoft/s3-pit-restore

s3-pit-restore -b dev-aes-digital-habitat -B dev-aes-digital-habitat -t "02-14-2020 2:37:37 +2"
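The tool itself can typically be installed with pip (per the linked repository; exact steps may differ):

pip3 install s3-pit-restore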

How do I push custom metrics from an EC2 Linux instance to CloudWatch?

I want to monitor OS metrics and performance counters of my Amazon Elastic Compute Cloud (EC2) Linux instance. How can I do this using Amazon CloudWatch?

Short Description

You can create a custom CloudWatch metric for your EC2 Linux instance statistics by creating a script through the AWS Command Line Interface (AWS CLI). Then, you can monitor that metric by pushing it to CloudWatch.

Resolution

Before proceeding, be sure that you install and configure the AWS CLI for use with the instance that you want to monitor.

Create a custom CloudWatch metric

To create your custom metric:

  1. Connect to the instance (the one where the AWS CLI is installed and configured).

  2. Copy the following bash script, and then save it to your instance (for example, mem.sh).

This example script shows the values that you can publish in CloudWatch. In this example, the put-metric-data API call is used to push the following values to CloudWatch:

  • Percentage of used memory (USEDMEMORY)
  • Number of total connections (TCP_CONN)
  • Number of TCP connections on port 80 (TCP_CONN_PORT_80)
  • Number of users currently logged in (USERS)
  • Percentage of I/O wait time (IO_WAIT)

========Sample script======

#!/bin/bash

USEDMEMORY=$(free -m | awk 'NR==2{printf "%.2f\t", $3*100/$2 }')

TCP_CONN=$(netstat -an | wc -l)

TCP_CONN_PORT_80=$(netstat -an | grep 80 | wc -l)

USERS=$(uptime |awk '{ print $6 }')

IO_WAIT=$(iostat | awk 'NR==4 {print $5}')

aws cloudwatch put-metric-data --metric-name memory-usage --dimensions Instance=i-0c51f9f1213e63159 --namespace "Custom" --value $USEDMEMORY

aws cloudwatch put-metric-data --metric-name Tcp_connections --dimensions Instance=i-0c51f9f1213e63159 --namespace "Custom" --value $TCP_CONN

aws cloudwatch put-metric-data --metric-name TCP_connection_on_port_80 --dimensions Instance=i-0c51f9f1213e63159 --namespace "Custom" --value $TCP_CONN_PORT_80

aws cloudwatch put-metric-data --metric-name No_of_users --dimensions Instance=i-0c51f9f1213e63159 --namespace "Custom" --value $USERS

aws cloudwatch put-metric-data --metric-name IO_WAIT --dimensions Instance=i-0c51f9f1213e63159 --namespace "Custom" --value $IO_WAIT

===============================================

  3. After creating the bash script, give execute permissions to the file.

$ chmod +x mem.sh

  4. Run the bash script to check that it works.

Push your metric to CloudWatch

  1. Create a cron job:

$ crontab -e

  2. Add this line to execute your script every minute:

*/1 * * * * /home/ec2-user/mem.sh

  3. Save and exit.

When the crontab is saved, crontab: installing new crontab appears.

Monitor your EC2 instance

Find your custom metric in the CloudWatch console:

  1. Open the CloudWatch console.

  2. Choose Metrics.

  3. Choose the All Metrics tab.

  4. Choose Custom.

  5. Choose the dimension Instance.

  6. Select your custom metric by its InstanceId and Metric Name.

  7. View the graph of your metric.

Other uses

You can use this example to build your own logic to process multiple dimensions, and then push that metric data to CloudWatch.

For example, suppose that you benchmark your application. Then, you discover that when the I/O wait time and percentage memory usage reach a certain threshold, the system stops functioning properly. To address this problem, you can monitor both values simultaneously in a script. Store the logical AND of the values in a third variable that you push to CloudWatch.

c=0

# The metric values are floating point, so use awk for the numeric comparison
# (a plain [[ ... > ... ]] would compare the values as strings).
if awk -v io="$IO_WAIT" -v mem="$USEDMEMORY" 'BEGIN { exit !(io > 70 && mem > 80) }'
then
c=1
fi

aws cloudwatch put-metric-data --metric-name danger --dimensions Instance=i-0c51f9f1213e63159 --namespace "Custom" --value $c

For normal conditions, this variable is 0 (zero). For situations when both conditions are met, the value is set to 1 (one). You can then build custom alarms around these parameters to warn of problematic situations for your system.
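For example, an alarm on the danger metric could look roughly like the following; the alarm name, period, and SNS topic ARN are assumptions for illustration:

aws cloudwatch put-metric-alarm --alarm-name instance-danger --namespace "Custom" --metric-name danger --dimensions Name=Instance,Value=i-0c51f9f1213e63159 --statistic Maximum --period 60 --evaluation-periods 1 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts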

 

Circular dependency errors in AWS CloudFormation

Resolution

https://aws.amazon.com/blogs/infrastructure-and-automation/handling-circular-dependency-errors-in-aws-cloudformation/

To install the SSM Agent in RHEL

Resolution

sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm

sudo systemctl status amazon-ssm-agent    (to check the agent status)

sudo systemctl enable amazon-ssm-agent    (to enable the agent at boot)

Note: An IAM role with the required SSM permissions must be attached to the instance.
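If the agent isn't running yet, it can be started with the usual systemd command (not listed in the original note):

sudo systemctl start amazon-ssm-agent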


PostgreSQL dump script for RDS in AWS

Resolution

#!/bin/bash

PASSWORD="XXXXXXXXX"

RDS_ENDPOINT=XXXXXXXXXXXXXXXXX

PORT=5432

DUMPNAME=dev-api-db-psql-dump.sql

USER=postgres

DBNAME=dev_aes_database

TARFILENAME=dev-api-db-psql-dump-"$(date +"%d-%m-%Y--%H-%M")".sql.tar.gz

S3_PATH=XXXXXXX

LOCALPATH=/opt/aes-np-api-db-psql-dump

LocalFile=dev-api-db-psql-dump*

cd $LOCALPATH
PGPASSWORD=$PASSWORD pg_dump -h $RDS_ENDPOINT -p $PORT -f $DUMPNAME -U $USER $DBNAME

# Command to create DB dump from psql database

tar -czf $TARFILENAME $DUMPNAME

# This will create zipped tar file from DB dump

rm -rf $DUMPNAME

# This will remove dump file from server i.e. from /opt/aes-np-api-db-psql-dump

aws s3 mv $LOCALPATH/$LocalFile $S3_PATH

# Moving the local tar file to the S3 bucket (aws s3 mv removes the local copy)
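To take the dump on a schedule, the script could be run from cron; the script path and timing below are placeholders for illustration:

0 2 * * * /opt/aes-np-api-db-psql-dump/db-dump-to-s3.sh >> /var/log/db-dump.log 2>&1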

START and STOP service script in LINUX

Resolution

cd /etc/systemd/system


sudo vi fusion.service

[Unit]

Description=AES fusion Application

Requires=network-online.target

After=network-online.target

[Service]

Type=simple

User=opsadminuser

ExecStart=/bin/sh -c '/opt/lucidworks/fusion/4.2.1/bin/fusion start > /opt/lucidworks/fusion/4.2.1/var/log/app-start.log 2>&1'

RemainAfterExit=yes

ExecStop=/bin/sh -c '/opt/lucidworks/fusion/4.2.1/bin/fusion stop > /opt/lucidworks/fusion/4.2.1/var/log/app-stop.log 2>&1'

Restart=on-failure

[Install]

WantedBy=multi-user.target

sudo chmod 755 fusion.service

sudo systemctl daemon-reload

sudo systemctl enable fusion.service
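Once enabled, the service can be controlled with the standard systemctl commands, for example:

sudo systemctl start fusion.service

sudo systemctl stop fusion.service

sudo systemctl status fusion.service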

Step 1:

Create a START.sh

/usr/bin/java -Dspring.profiles.active=dev -Dlog.file=/opt/java/aes_logs/aes_logs -Xms2G -Xmx4G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=20 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70 -XX:-UseGCOverheadLimit -jar /opt/java/iot/dtc-services-boot-smartfactory-0.0.1-SNAPSHOT-exec.jar --spring.config.name=application

Step 2:

Go to /etc/systemd/system

Step 3:

Then create an example.service file, as in the examples below.

EXAMPLE 1:

[Unit]

Description=Smart Factory Application

Requires=network.target remote-fs.target

After=network.target remote-fs.target

[Service]

Type=simple

User=opsadminuser

ExecStart=/bin/sh -c '/opt/java/iot/connector-start.sh > /opt/java/iot/logs/app.log 2>&1'

SuccessExitStatus=143

[Install]

WantedBy=multi-user.target

EXAMPLE 2:

[Unit]

Requires=zookeeper.service

After=zookeeper.service

[Service]

Type=simple

User=opsadminuser

ExecStart=/bin/sh -c '/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties > /opt/kafka/logs/kafka.log 2>&1'

ExecStop=/opt/kafka/bin/kafka-server-stop.sh

Restart=on-abnormal

[Install]

WantedBy=multi-user.target
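As with the fusion service above, reload systemd and enable/start the new unit after creating it (the unit file name here follows the example.service name used above):

sudo systemctl daemon-reload

sudo systemctl enable example.service

sudo systemctl start example.service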

Installing and Configuring the CloudWatch Logs agent

Resolution

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html

  1. IAM policy required for the instance to send CloudWatch Logs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": [
        "arn:aws:logs:*:*:*"
      ]
    }
  ]
}
  2. Download and configure the agent:

    sudo curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O

    sudo python ./awslogs-agent-setup.py --region us-east-1

    During the interactive setup, use the hostname as the log stream name and %d-%m-%Y %H:%M:%S as the timestamp format.

  3. To adjust the configuration later: sudo vi /var/awslogs/etc/awslogs.conf (a sample stanza is shown below)

  4. Restart the agent after any configuration change: sudo service awslogs restart
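For reference, a stanza in /var/awslogs/etc/awslogs.conf typically looks like the following; the log file, log group name, and stream name here are illustrative, not values from the original notes:

[/var/log/messages]
datetime_format = %d-%m-%Y %H:%M:%S
file = /var/log/messages
buffer_duration = 5000
log_stream_name = {hostname}
initial_position = start_of_file
log_group_name = /var/log/messages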

Download PostgreSQL 11 in RHEL 7.6

Resolution

https://www.postgresql.org/download/linux/redhat/
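At the time of writing, the linked page's steps for RHEL 7 boil down to roughly the following (the repository RPM URL may have changed since, so treat this as a sketch):

sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

sudo yum install -y postgresql11 postgresql11-server

sudo /usr/pgsql-11/bin/postgresql-11-setup initdb

sudo systemctl enable postgresql-11

sudo systemctl start postgresql-11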

How to install the Memory and Disk Metrics Monitoring Scripts for Amazon EC2 Linux Instances

Resolution

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html#mon-scripts-getstarted

STEP 1:

To install the required packages on Red Hat Enterprise Linux 7

Log on to your instance. For more information, see Connect to Your Linux Instance.

At a command prompt, install packages as follows:

sudo yum install unzip perl-Switch perl-DateTime perl-Sys-Syslog perl-LWP-Protocol-https perl-Digest-SHA --enablerepo="rhui-REGION-rhel-server-optional" -y

STEP 2:

Install Monitoring Scripts

The following steps show you how to download, uncompress, and configure the CloudWatch Monitoring Scripts on an EC2 Linux instance.

To download, install, and configure the monitoring scripts

  1. At a command prompt, move to a folder where you want to store the monitoring scripts and run the following command to download them:

    sudo curl https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip -O

  2. Run the following commands to install the monitoring scripts you downloaded:

      sudo unzip CloudWatchMonitoringScripts-1.2.2.zip

      sudo rm CloudWatchMonitoringScripts-1.2.2.zip

  3. sudo chown -R opsadminuser:opsadminuser aws-scripts-mon/

  4. cd aws-scripts-mon

The package for the monitoring scripts contains the following files:

  • CloudWatchClient.pm – Shared Perl module that simplifies calling Amazon CloudWatch from other scripts.
  • mon-put-instance-data.pl – Collects system metrics on an Amazon EC2 instance (memory, swap, disk space utilization) and sends them to Amazon CloudWatch.
  • mon-get-instance-stats.pl – Queries Amazon CloudWatch and displays the most recent utilization statistics for the EC2 instance on which this script is executed.
  • awscreds.template – File template for AWS credentials that stores your access key ID and secret access key.
  • LICENSE.txt – Text file containing the Apache 2.0 license.
  • NOTICE.txt – Copyright notice.

sudo crontab -e


 * * * * * /home/opsadminuser/aws-scripts-mon/mon-put-instance-data.pl --mem-used-incl-cache-buff --mem-util --mem-used --mem-avail
 
* * * * * /home/opsadminuser/aws-scripts-mon/mon-put-instance-data.pl --disk-path=/dev/xvda2 --disk-space-util --disk-space-used --disk-space-avail --disk-space-units=gigabytes

* * * * * /home/opsadminuser/aws-scripts-mon/mon-put-instance-data.pl --disk-path=/dev/xvdf1 --disk-space-util --disk-space-used --disk-space-avail --disk-space-units=gigabytes
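Before relying on the cron entries, the script can be run once by hand to confirm that it can publish metrics (the --verify and --verbose options are described in the linked AWS documentation):

/home/opsadminuser/aws-scripts-mon/mon-put-instance-data.pl --mem-util --verify --verbose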

How to check the password in PostgreSQL (query for a particular user)

Resolution

select user_login_id,user_password from aes_users where user_login_id='XXXXXX';

ORACLE Java Installation in RHEL 7.6

Resolution

First, download Oracle Java from the Oracle website:

https://www.oracle.com/technetwork/java/javase/downloads/index.html

Then copy the JDK RPM to the server with scp and install Java:

sudo rpm -ivh jdk-8u211-linux-x64.rpm

sudo find / -name java

cd /usr/java/jdk1.8.0_211-amd64

pwd

vi ~/.bashrc

export JAVA_HOME=/usr/java/jdk1.8.0_211-amd64

source ~/.bashrc

java -version

JAVA Installation in RHEL 7.6

Resolution

First, log in to the server and check whether Java is already installed:

java -version

rpm -qa | grep java

Then, if Java is not installed, install it first:

sudo yum search java 1.8

sudo yum install -y java-1.8.0-openjdk-devel

sudo find / -name jvm

cd /usr/lib/jvm

ls

cd java-1.8.0-openjdk-1.8.0.222.b10-0.el7_6.x86_64

pwd

Then edit ~/.bashrc (vi ~/.bashrc) and add:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-0.el7_6.x86_64

export JRE_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b10-0.el7_6.x86_64/jre

source ~/.bashrc

echo $JAVA_HOME

AWS CLI Installation through bundle

Resolution

https://docs.aws.amazon.com/cli/latest/userguide/install-bundle.html

Install the AWS CLI without Sudo (Linux, macOS, or Unix)

If you don't have sudo permissions or want to install the AWS CLI only for the current user, you can use a modified version of the previous commands.

$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"

$ unzip awscli-bundle.zip

$ ./awscli-bundle/install -b ~/bin/aws

This installs the AWS CLI to the default location (~/.local/lib/aws) and creates a symbolic link (symlink) at ~/bin/aws. Make sure that ~/bin is in your PATH environment variable for the symlink to work.

$ echo $PATH | grep ~/bin // See if $PATH contains ~/bin (output will be empty if it doesn't)

$ export PATH=~/bin:$PATH // Add ~/bin to $PATH if necessary

Tip

To ensure that your $PATH settings are retained between sessions, add the export line to your shell profile (~/.profile, ~/.bash_profile, and so on).

Uninstall the AWS CLI version 1

The bundled installer doesn't put anything outside of the installation directory except the optional symlink, so uninstalling is as simple as deleting those two items.

$ sudo rm -rf /usr/local/aws

$ sudo rm /usr/local/bin/aws

LVM Installation in RHEL 7.x

Resolution

Install lvm2 using YUM:

sudo yum install lvm2* -y

//create a PV in machine

sudo pvcreate /dev/xvdb

sudo pvs

//create a VG in machine

sudo vgcreate vg-fusion-01 /dev/xvdb

sudo vgs

sudo vgdisplay

//create a LV in machine

sudo lvcreate -L 199G -n lv-fusion-01 vg-fusion-01

sudo lvs

sudo fdisk -l

sudo lvs

sudo lvdisplay

//create a FILESYSTEM in machine

sudo mkfs.ext4 -j /dev/vg-fusion-01/lv-fusion-01

lsblk

//MOUNT the LV

sudo mkdir -p /opt/lucidworks

sudo mount /dev/vg-fusion-01/lv-fusion-01 /opt/lucidworks

//change the PERMISSION of FOLDER

sudo chown -R opsadminuser:opsadminuser lucidworks/

sudo chmod o+x lucidworks/

//Permanent mount

vi /etc/fstab

/dev/vg-fusion-01/lv-fusion-01 /opt/lucidworks ext4 defaults 0 0
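To confirm the fstab entry without rebooting (a standard check, not part of the original notes):

sudo mount -a

df -h /opt/lucidworks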

Docker Installation in Ubuntu 18.04

Resolution

Commands

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable"

sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
sudo apt-mark hold docker-ce
sudo docker version
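A quick smoke test after installation (optional, not in the original notes):

sudo docker run hello-world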

DevOps Definition

DevOps is about people, process, and products. To realize the full benefits of DevOps, we'll address all three aspects, starting with the products aspect of DevOps.

DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.

Point to be NOTED while doing VPC Peering in AWS

— Conditions/Restrictions:

  1. No overlapping IP ranges between the two VPCs
  2. No transitive peering, edge routing, or IGW access across VPCs
  3. No NAT routing between VPCs
  4. Cannot resolve private DNS values across VPCs
  5. No cross-referencing of security groups between VPCs

— The owners of both VPCs need to confirm the peering request.

— The VPCs could be in the same or different AWS accounts.

— The VPCs should be in the same AWS region.

— We need to update route tables in both VPCs after peering is done (see the CLI sketch below).

— The traffic flow between instances in two peered VPCs happens over the private network.
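A rough CLI sketch of the peering workflow; every ID and CIDR below is a placeholder for illustration:

aws ec2 create-vpc-peering-connection --vpc-id vpc-1111aaaa --peer-vpc-id vpc-2222bbbb

aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0123456789abcdef0

aws ec2 create-route --route-table-id rtb-aaaa1111 --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-0123456789abcdef0

aws ec2 create-route --route-table-id rtb-bbbb2222 --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id pcx-0123456789abcdef0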

Detailed day-to-day activities of an AWS Admin

EC2

  1. Provisioning EC2 instances
  2. Bootstrapping EC2 instances while launching
  3. Hardening EC2 instances with security groups to open or close port numbers
  4. Recovering EC2 instance key pairs
  5. Modifying instance types in case of demand for more or fewer resources (CPU/memory)
  6. Shutting down unused instances as per customer confirmation
  7. Taking AMIs of instances if any activity/change is scheduled

VPC

  1. Creating VPCs, subnets, route tables, internet gateways, NACLs, etc., for a new environment
  2. Creating public and private subnets
  3. Creating NAT instances and NAT gateways
  4. Disabling ports in NACLs
  5. Enabling VPC peering between Test, QA, and Prod VPCs
  6. Enabling VPC flow logs to monitor network-related issues
  7. Creating and configuring an OpenVPN server to connect to instances securely
  8. Creating new users in the OpenVPN server

ELB/ AutoScaling

  1. Creating ELBs
  2. Requesting SSL certificates for new domains in Certificate Manager (ACM)
  3. Configuring SSL certificates on ELBs
  4. Troubleshooting when instances are "OutOfService" in the ELB
  5. Enabling and analyzing ELB access logs
  6. Creating Launch Configurations and Auto Scaling groups
  7. Adding a new LC to the ASG when the AMI is updated

EBS, S3, EFS, Glacier

  1. Creating new EBS volumes, modifying existing volume size or volume type.
  2. Taking volume snapshots for backup.
  3. Copying volumes from one Availability Zone (AZ) to another AZ if requested.
  4. Migrating data from one EC2 instance to others.
  5. Enabling encryption on EBS volumes and S3 bucket objects.
  6. Creating S3 buckets and granting the required permissions through IAM.
  7. Enabling lifecycle policies to transfer data from one storage class to another.
  8. Creating EFS and mounting it on multiple instances.

IAM

  1. Creating IAM users and granting them minimal permissions.
  2. Generating or modifying IAM policies as per requirements.
  3. Creating roles so one AWS service can access others.
  4. Enforcing users to use secure passwords and MFA.

CloudWatch, CloudTrail, Trusted Advisor

  1. Monitoring instance resource utilization through CloudWatch.
  2. Creating alarms, events, and custom metrics in CloudWatch.
  3. Enabling CloudTrail and analyzing logs in case any events occurred.
  4. Collecting Trusted Advisor reports in a timely manner and analyzing them for cost optimisation.

Route53

  1. Creating Route53 hosted zones to map with public or private domain.
  2. Creating record sets to map with EC2 instances/ ELBs.
  3. Using routing policies if necessary.
  4. Mapping domains from the domain registrar (like GoDaddy) to Route53.

RDS

  1. Creating RDS instances as per database requirements.
  2. Enabling Multi-AZ and read replicas as per demand.
  3. Taking snapshots and restoring from snapshots.

Duties of a System Administrator

 

  1. Account provisioning
  2. Adding and removing hardware // sudo fdisk -l
  3. Performing backup // tar
  4. Installing & upgrading software // yum
  5. Monitoring the system // top, htop
  6. Troubleshooting // sudo tcpdump port 80 or udp port 53
  7. Maintaining documentation
  8. Security vigilance // sudo nmap -sS -O www.google.co.in/24
  9. Other Teams

— Administrators of other operating systems
— Network administrator
— DBAs
— Application Developers
— Site reliability and NOC engineer
— Security Ops
— Storage administrator
— Data Center Technicians

=================== ****** ===================

1) Regularly monitoring and capturing system log files.
2) Regularly monitoring and capturing data on server CPU and memory usage.
3) Regularly monitoring filesystem space.
4) Regularly monitoring and capturing network usage.
5) Writing automated scripts to handle routine user and sysadmin tasks.
6) User management and support.
7) Taking backups of important file systems.
8) Maintaining documentation for system parameters and disk layout.
9) Keeping up to date with new technologies.
10) Educating and training users and subordinates if applicable.

Day-to-day activities of a DevOps Engineer

Make sure that the pipeline is running smoothly – This is one of the most important tasks of a DevOps engineer: making sure that the CI/CD pipeline is intact. Fixing any issue or failure with it is the #1 priority for the day, so they often need to spend time troubleshooting, analysing, and providing fixes to issues.

Interaction with other teams – Coordination and collaboration are the key for DevOps to be successful, hence daily interaction with the Dev and QA teams, program management, and IT is always required.

Work on the automation backlog – Automation is the soul of DevOps, so DevOps engineers need to plan it out, and they spend a lot of time behind the keyboard automating things on a daily basis.

Infrastructure management – DevOps engineers are also responsible for maintaining and managing the infrastructure required for the CI/CD pipeline; making sure that it is up and running and being used optimally is also part of their daily schedule, e.g. working on backups, high availability, new platform setup, etc.

Dealing with legacy stuff – Not everyone is lucky enough to work on the latest and newest things, and DevOps engineers are no exception, so they also need to spend time on legacy systems, either supporting them or migrating to the latest.

Exploration – DevOps leverages a lot from the various tools that are available, many of them open source, so the team needs to check on this regularly to make sure they are adopted as required. This also takes some effort, not daily but on a regular basis, e.g. what open source options are available to keep costs at a minimum?

Removing bottlenecks – A primary purpose of DevOps is to identify bottlenecks and manual handshakes and work with everyone involved (Dev, QA, and all other stakeholders) to remove them, so the team spends a good amount of time finding such things and building the automation backlog from them, e.g. how can we get builds faster?

Documentation – Though Agile/DevOps puts less stress on documentation, it is still important work that a DevOps engineer does on a daily basis, be it server information, daily/weekly charts, Scrum/Kanban boards, or simple steps to configure, back up, or modify the infrastructure; you need to spend a good amount of time producing these artifacts.

Training and self-development – Self-learning and training are very useful for getting a better understanding, and many organisations encourage their employees to take the time out for them; the same holds true for DevOps folks as well, so learn something new every day.

Continuous improvement as practice – Last but not least, it's up to the DevOps folks to build awareness of the potential of CI/CD and DevOps practices and to build a culture of leveraging them for doing things better, reducing re-work, increasing productivity, and optimising the use of existing resources. Go and talk to people to build the DevOps and continuous-improvement culture. This can also include helping design, build, test, and configure a next-generation containerized microservices platform (Docker).

Day-to-day activities of an AWS Admin

  1. Administering AWS platform services - creating, deleting, and updating AWS services as per requests from the Dev team.
  2. IAM - creating, deleting, and updating users and their permissions.
  3. Support - giving support to the Dev team when any optimisation/troubleshooting is going on for application performance.
  4. On call/3rd line support - providing 24x7 support for the AWS infrastructure.
  5. Design - involved in the design and development of the infrastructure architecture that hosts the applications.
  6. Failover and Backup - writing CloudFormation templates and backup scripts/jobs to take timely backups of data and infrastructure.
  7. Automation and DevOps - doing a lot of automation for tasks that are done manually and keeping it updated; working on CI/CD pipelines to provide quick releases.
  8. Setting up a continuous build environment with Jenkins to speed up deployments.
  9. Creating cloud-based infrastructure required by the software team; these days they mostly create the infrastructure within a secure VPC (Virtual Private Cloud) in AWS.
  10. Creating and maintaining scripts to spin up and spin down instances required for applications on demand.
  11. Maintaining all the company's security compliance, like password and key rotation policies.
  12. Creating various types of alerts (also referred to as PagerDuty alerts) that monitor system health and report any issues. When issues happen, the SysOps team jumps on them and resolves them themselves, or at least escalates to the right person.
  13. Cost savings/cost optimization is a huge part of their activities. They tag and monitor the running instances; anything that is not required to run they not only shut down but also find the root cause of why it was not terminated properly in the first place (process issue or software issue).
  14. Last but not least, working with the Dev/STE team to deploy newer versions of software in the cloud.
  15. Maintaining a backup of the resources is another important responsibility. The administrator has to perform backups of AWS and on-premises resources from time to time by making extensive use of AWS services.
  16. Managing different AWS accounts for IAM users and implementing switch-role between them.
  17. Working on Linux live and non-live servers to grant and remove user access and privileges.

Difference between NACL & Security Group

There are a couple of points to note here:

  1. Network Access Control Lists are applicable at the subnet level, so any instance in a subnet with an associated NACL will follow the rules of that NACL. That's not the case with security groups: security groups have to be assigned explicitly to the instance.
  2. By default, your default VPC will have a default Network Access Control List that allows all traffic, both inbound and outbound. If you want to restrict access at the subnet level, it's good practice to create a custom NACL and edit it with the rules of your choice. While editing, you can also edit subnet associations, where you associate a subnet with this custom NACL; any instance in that subnet will then start following the NACL rules.
  3. NACLs are stateless, unlike security groups. Security groups are stateful: if you add an inbound rule, say for port 80, the return traffic is automatically allowed out, meaning an outbound rule for that particular port need not be explicitly added. But in NACLs you need to provide explicit inbound and outbound rules.
  4. In security groups you cannot deny traffic from a particular instance; by default everything is denied, and you can set rules only to allow. Whereas in NACLs you can set rules both to deny and to allow, thereby denying and allowing only the instances of your choice. You get to take more granular decisions.
  5. Security groups evaluate all the rules in them before allowing traffic. NACLs evaluate rules in number order, from top to bottom: if your rule #0 says allow all HTTP traffic and your rule #100 says don't allow HTTP traffic from IP address 10.0.2.156, the deny will never take effect, because rule #0 has already allowed the traffic. So it's good practice to have your deny rules first in a NACL, followed by allow rules. AWS best practice is to number your rules in increments of 100 in a NACL. By deny rules first, I mean specifying narrow deny rules, like for specific ports only, and then writing your allow rules.

SSH Agent forwarding

Resolution

To configure SSH agent forwarding for Linux

  1. From your local machine, add your private key to the authentication agent.

    For Linux, use the following command:

    ssh-add -c mykeypair.pem

  2. Connect to your NAT instance using the -A option to enable SSH agent forwarding, for example:

    ssh -A ec2-user@54.0.0.123

Linux boot process

When a Linux OS boots, the following steps are executed:

  • Switch on the system
  • BIOS: POST, then hands control to the boot device and MBR
  • MBR executes the boot loader
  • GRUB menu list
  • GRUB loads the kernel and initrd
  • Initrd: initial root filesystem with required drivers
  • The kernel loads the actual root filesystem
  • /sbin/init is executed
  • Checks the runlevel in /etc/inittab
  • Runs all the files present in /etc/rcN.d/
  • Runs /etc/rc.local

runlevels

0 - halt (shut down)
1 - single user mode
2 - multiuser mode , without NFS
3 - full multiuser mode
4 - unused
5 - with GUI
6 - reboot
vim /etc/rc.local -- here you can run custom scripts

Detailed explanation

Switch ON the system
BIOS: POST, then gives control to the boot device and loads the MBR
BIOS -- Basic Input/Output System, stored in ROM (read-only memory) on the motherboard's chipset

The first role of the BIOS is to perform the POST -- the Power-On Self Test makes sure the system hardware is working fine: the hard disk is working, and the keyboard and screen are accessible, so booting can proceed.

The BIOS provides functionality to load screen drivers
The BIOS allows interaction with generic hardware
The BIOS allows you to boot from hard disk, optical drive, USB, or network
The BIOS hands over to the first sector of the disk

--> the first sector of the hard disk has the information on how to start the boot process
--> the first sector contains the Master Boot Record (MBR)

MBR executes the boot loader

The MBR knows how to boot from the hard disk
In case you have a multi-boot environment, you could have
GRUB or LILO installed on it

LILO -- Linux Loader

GRUB -- Grand Unified Bootloader
Both of these provide a multi-boot environment

GRUB Menu LIST

vim /boot/grub/menu.lst
ps -ef | grep init
runlevel
ll /etc/rc1.d
vim /etc/inittab -- default runlevel

Mounting Your Amazon EFS File System Automatically

Resolution

https://docs.aws.amazon.com/efs/latest/ug/mount-fs-auto-mount-onreboot.html

sudo yum install -y nfs-utils

sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/ /home/aesadminuser/doc_repository

If you add an entry of type 'efs' in /etc/fstab (without the amazon-efs-utils mount helper installed), rebooting the server throws this error:

mount: unknown filesystem type 'efs'

Open /etc/fstab and add the entry below instead (RHEL 7.x), using the nfs4 type:

fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/ /home/aesadminuser/doc_repository nfs4 defaults,_netdev 0 0

With this nfs4 entry, the mount works on reboot.
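To test the fstab entry without rebooting (a quick check, not part of the original note):

sudo mount -a

df -h /home/aesadminuser/doc_repository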