How EasyDeploy reduced AWS bills of Arthimpact Finserve Private Limited

Arthimpact Finserve Private Limited is a fintech company based in Mumbai, India, that processes and disburses loans to customers 100% electronically through its web application.

Challenge

Arthimpact Finserve Private Limited uses AWS to host its web application. Its cloud infrastructure cost was over 16,000 USD/month, which the company believed was far more than its workload should cost. The in-house team managing the AWS architecture was not equipped to handle the complexity of the environment, so they were unsure how to reduce their AWS bills.

As a technology startup, they could not afford to overspend on AWS infrastructure: the cost was eating into business profit and risked a cash crunch. They urgently needed to mitigate the issue so they could concentrate on application development, which in turn would help them win more clients. They contacted EasyDeploy seeking help to reduce their AWS bills as far as possible without compromising application performance or business revenue.

Analysis

EasyDeploy Technologies Pvt Ltd carried out the following analysis before suggesting solutions:

  • We analyzed the cost and usage reports for the past 3 months and created a spreadsheet of charges for each AWS service
  • We analyzed the CPU and memory utilization of all the EC2 and RDS instances
  • We found that many unwanted EC2 instances were running
  • We found that the Reserved Instance feature provided by AWS had not been utilized
  • We found many old EC2 AMIs and RDS snapshots in the account
  • We found a huge amount of data in S3 buckets with no lifecycle policy enabled
  • We found that CloudWatch log groups had no retention policy to expire old, unwanted logs
  • We found that the ECR repositories had no lifecycle policy enabled
  • We analyzed the NAT Gateway data charges and found there was no private link between S3 and the ECR repository.

Solution

We presented all these findings to them, and after getting their approval we did the following:

  • Removed the unwanted EC2 instances
  • Resized the required EC2 instances according to their current usage
  • Purchased Reserved Instances for all the required EC2 and RDS instances
  • Removed the old EC2 AMIs and RDS snapshots
  • Set RDS snapshot retention to 15 days, as per the customer’s requirement
  • Set EC2 AMI retention to 30 days, as per the customer’s requirement
  • Set CloudWatch Logs retention to 30 days
  • Removed the unwanted data from S3
  • Set a lifecycle policy on the ECR repositories to remove a large number of untagged images and retain only the latest 30 images (see the CLI sketch after this list)
  • Enabled a private link to save NAT Gateway bandwidth charges, as described in this post: https://www.easydeploy.io/blog/how-to-create-private-link-for-ecr-to-ecs-containers-to-save-nat-gatewayec2-other-charges/
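For illustration, two of these retention settings can be applied straight from the AWS CLI. This is a minimal sketch with hypothetical resource names (app-logs, app-repo); the retention period and image count match the values above.

# Expire CloudWatch logs after 30 days (log group name is hypothetical)
aws logs put-retention-policy \
  --log-group-name /app-logs \
  --retention-in-days 30

# ECR lifecycle policy: keep only the latest 30 images (repository name is hypothetical)
aws ecr put-lifecycle-policy \
  --repository-name app-repo \
  --lifecycle-policy-text '{
    "rules": [{
      "rulePriority": 1,
      "description": "Retain only the latest 30 images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 30
      },
      "action": { "type": "expire" }
    }]
  }'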

Their AWS bills for the months of July, August, and September are compared in the chart below for easy understanding.

Outcome

Arthimpact Finserve Private Limited came to EasyDeploy Technologies Pvt Ltd in August 2020. Their AWS charge for the July 2020 billing cycle was $16,760; after the changes we implemented, the September 2020 bill came down to $5,729, approximately a 65% reduction in their overall billing.

We at easydeploy.io have more than 7 years of experience handling various complex AWS infrastructures. We are an AWS Select Tier partner, with the qualifications required to manage our clients' AWS environments efficiently, as per AWS's standard operating procedures. If you need to reduce your AWS costs, you can contact us through the link.

Enable AWS GuardDuty to detect suspicious activity within your AWS account Using Terraform

Introduction

AWS GuardDuty is a threat detection service offered by Amazon Web Services (AWS) that continuously monitors and analyzes AWS account activity and network traffic to identify potential security threats. GuardDuty uses machine learning, anomaly detection, and threat intelligence to analyze data from AWS CloudTrail, VPC Flow Logs, and DNS logs, and then generates security alerts for potential threats, such as unauthorized access, data exfiltration, or malware infections.

GuardDuty provides a centralized dashboard for security operations teams to view and investigate security findings, as well as integrations with other AWS services, such as AWS CloudWatch, AWS Lambda, and AWS Security Hub, for automated response and remediation. By using GuardDuty, AWS customers can improve their security posture and quickly identify and respond to potential security incidents, helping to protect their sensitive data and applications running on AWS.

In this blog, we will explore the key features and benefits of GuardDuty, how to set up and configure the service using a Terraform script, and best practices for using GuardDuty to improve your AWS security posture. We will also discuss GuardDuty's use cases.

Prerequisites

An IAM user with permissions to manage the services used below: GuardDuty, CloudTrail, S3, CloudWatch Events, and SNS.

Procedure

Now it's time to create GuardDuty and some related services, such as CloudTrail, using a Terraform script. Why create CloudTrail? GuardDuty analyzes CloudTrail logs to gain additional security insights and detect potential security threats in your AWS environment, so enabling it is a key step in improving the security posture of an AWS account and protecting valuable data and resources.

We are also going to create a CloudWatch Event rule and an SNS topic to send email notifications for GuardDuty findings.

Also read: What is terraform?

Write a Terraform Script to Create CloudTrail

Create a folder named guard-duty and open it in the VS Code editor.

Create a file called provider.tf and add the following code into the file.

provider "aws" {
  region     = "region_name"
  access_key = var.access_key
  secret_key = var.secret_key
}

Replace the region_name with the region name where you want to create CloudTrail.

Next, create another file called variables.tf and add the below code.

variable "access_key" {
  type        = string
  description = "AWS IAM Access key"
  default     = ""
}

variable "secret_key" {
  type        = string
  description = "AWS IAM Secret key"
  default     = ""
}

variable "name" {
  type    = string
  default = ""
}

All the variables’ default values need to be given inside the double quotes.

Finally, create a file named main.tf and enter the below code.

data "aws_caller_identity" "this" {}

locals {
  account_id = data.aws_caller_identity.this.account_id
}

resource "aws_cloudtrail" "this" {
  name                          = var.name
  s3_bucket_name                = aws_s3_bucket.this.id
  s3_key_prefix                 = "cloudtrail"
  enable_log_file_validation    = true
  include_global_service_events = true
  is_multi_region_trail         = true
  event_selector {
    read_write_type           = "All"
    include_management_events = true
    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3:::"]
    }
  }
  event_selector {
    read_write_type           = "All"
    include_management_events = true
    data_resource {
      type   = "AWS::DynamoDB::Table"
      values = ["arn:aws:dynamodb"]
    }
    data_resource {
      type   = "AWS::Lambda::Function"
      values = ["arn:aws:lambda"]
    }
  }
  insight_selector {
    insight_type = "ApiCallRateInsight"
  }
}

resource "aws_s3_bucket" "this" {
  bucket        = "${lower(var.name)}-cloudtrail-${local.account_id}"
  force_destroy = true
}

data "aws_iam_policy_document" "bucket_policy" {
  statement {
    sid    = "AWSCloudTrailAclCheck"
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    actions   = ["s3:GetBucketAcl"]
    resources = [aws_s3_bucket.this.arn]
  }
  statement {
    sid    = "AWSCloudTrailWrite"
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }
    actions   = ["s3:PutObject"]
    resources = ["${aws_s3_bucket.this.arn}/cloudtrail/AWSLogs/${local.account_id}/*"]
    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values   = ["bucket-owner-full-control"]
    }
  }
}

resource "aws_s3_bucket_policy" "this" {
  bucket = aws_s3_bucket.this.id
  policy = data.aws_iam_policy_document.bucket_policy.json
}

The above Terraform code creates a multi-region CloudTrail trail, along with an S3 bucket to store the CloudTrail logs.

Run Terraform Script for CloudTrail

Now we have to run this script to create CloudTrail and S3 bucket.

Open the terminal in the VS Code editor and run the terraform init command. The init command must be run for every new Terraform configuration.

Know more about terraform Init command

Now run the “terraform apply” command to deploy this script into your AWS account.

It will prompt you to enter a value; type yes to create the CloudTrail resources.
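For reference, the whole workflow from the terminal looks like this (a sketch; the -var flags are only needed if you left the variable defaults empty):

terraform init        # downloads the AWS provider plugin
terraform plan        # previews the resources to be created
terraform apply \
  -var 'access_key=YOUR_ACCESS_KEY' \
  -var 'secret_key=YOUR_SECRET_KEY' \
  -var 'name=guardduty-demo'   # type "yes" at the prompt, or add -auto-approve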

It will create 3 resources: the CloudTrail trail, the S3 bucket, and the S3 bucket policy.

Open your AWS account and navigate to CloudTrail. In the left-side panel, choose Dashboard, and you can see that the trail has been created.

Add a Terraform Script to Create GuardDuty

Once you have successfully created CloudTrail, the next step is to enable GuardDuty.

So copy the below code and add it to the main.tf file under the existing code.

resource "aws_guardduty_detector" "this" {
  enable = true
  datasources {
    s3_logs {
      enable = true
    }
    kubernetes {
      audit_logs {
        enable = false
      }
    }
    malware_protection {
      scan_ec2_instance_with_findings {
        ebs_volumes {
          enable = true
        }
      }
    }
  }
}

resource "aws_cloudwatch_event_rule" "this" {
  name        = var.name
  description = "Event rule for trigger sns topic from AWS Guard duty"
  event_pattern = jsonencode(
    {
      "source" : ["aws.guardduty"],
      "detail-type" : ["GuardDuty Finding"]
    }
  )
}

resource "aws_cloudwatch_event_target" "this" {
  rule      = aws_cloudwatch_event_rule.this.name
  target_id = "SendToSNS"
  arn       = aws_sns_topic.this.arn
  input_transformer {
    input_paths = {
      severity            = "$.detail.severity",
      Finding_ID          = "$.detail.id",
      Finding_Type        = "$.detail.type",
      region              = "$.region",
      Finding_description = "$.detail.description"
    }
    input_template = "\"You have a severity <severity> GuardDuty finding type <Finding_Type> in the <region> region.\"\n \"Finding Description:\" \"<Finding_description>. \"\n \"For more details open the GuardDuty console at https://console.aws.amazon.com/guardduty/home?region=<region>#/findings?search=id%3D<Finding_ID>\""
  }
}

resource "aws_sns_topic" "this" {
  name = var.name
}

resource "aws_sns_topic_policy" "this" {
  arn    = aws_sns_topic.this.arn
  policy = data.aws_iam_policy_document.this.json
}

data "aws_iam_policy_document" "this" {
  statement {
    effect  = "Allow"
    actions = ["SNS:Publish"]
    principals {
      type        = "Service"
      identifiers = ["events.amazonaws.com"]
    }
    resources = [aws_sns_topic.this.arn]
  }
}

The above code enables AWS GuardDuty and also creates the SNS topic and the CloudWatch Event rule.

Run the terraform apply command to create these resources.

Once it runs successfully, navigate to the AWS GuardDuty console to see the changes.

GuardDuty is now enabled, but there are no findings to show yet. In the next step, we will generate sample findings to see how it works.

Generate Sample Findings

Now we are going to generate some sample findings.

On the left side navigation panel click Settings. On the right side, scroll down a little and click Generate sample findings.
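If you prefer the command line, the same sample findings can be generated with the AWS CLI (a sketch; the detector ID comes from the first command):

# Find the detector that Terraform created
aws guardduty list-detectors

# Generate sample findings for that detector
aws guardduty create-sample-findings --detector-id <detector-id>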

Now go back to the Findings page, and you can see the sample findings listed.

At the top right, you can see three colours with counts next to them; these represent the severity levels of the findings:

  • Blue → Low

  • Orange → Medium

  • Red → High

Click one of the sample findings and it will show the full details of the detected behaviour.

AWS GuardDuty uses machine learning and statistical algorithms, so it can determine what action occurred, where it happened, and who did it, along with location details.

Create SNS Email Subscription

Open the SNS console and, in the left navigation panel, click Topics. Then select the topic created by Terraform.

Under the Subscriptions section, click Create subscription.

Set the Protocol to Email and, for Endpoint, enter your email address.

Finally, click Create Subscription.
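The same subscription can also be created from the CLI; this is a sketch with a placeholder topic ARN and address:

aws sns subscribe \
  --topic-arn arn:aws:sns:ap-south-1:123456789012:guardduty-alerts \
  --protocol email \
  --notification-endpoint you@example.com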

You should receive a subscription confirmation email like in the picture below.

Open the email and click the Confirm subscription link.

If you are redirected to a confirmation page, your email subscription is confirmed.

Get Alerts via Email

The setup is now complete. Next, let's generate a real finding and receive the alert via email.

First, create an S3 bucket for testing purposes, leaving all settings as default.

If you don't know how to create an S3 bucket, please check the links below.

Create S3 bucket from AWS console
Create S3 bucket using Terraform

Now select the newly created S3 bucket and navigate to the Permissions section.

Under Block public access settings, you can see that Block all public access is On.

Click the Edit button; we are going to turn this setting off.

Disable Block all public access and click Save changes.

It asks for confirmation: type confirm and click the Confirm button.
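Equivalently, the setting can be flipped from the CLI, and this is exactly the kind of change GuardDuty flags (a sketch; the bucket name is a placeholder):

aws s3api put-public-access-block \
  --bucket my-guardduty-test-bucket \
  --public-access-block-configuration \
    "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"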

Now go to the GuardDuty page. After a couple of minutes, a finding will show up under the Findings section.

Click the finding and it will show all the details: what action happened, where it happened, and who did it.

You will also receive an email alert.

Use Cases of AWS GuardDuty

Here are some of the use cases of AWS GuardDuty:

  1. Continuous Monitoring: GuardDuty continuously monitors the AWS environment for potential security threats, such as unauthorized access, data exfiltration, and malicious activity.
  2. Detecting Compromised Credentials: GuardDuty can monitor your AWS account for unauthorized access attempts and compromised credentials by analyzing AWS CloudTrail logs, VPC Flow Logs, and DNS logs.
  3. Threat Detection: GuardDuty uses machine learning algorithms and threat intelligence to detect known and unknown threats in the AWS environment.
  4. Compliance Monitoring: GuardDuty can help in maintaining compliance with various industry standards, by identifying potential security issues and providing actionable insights to remediate them.
  5. Incident Response: GuardDuty can help in investigating security incidents by providing detailed logs and alerts, which can be used to identify the root cause of the incident and take appropriate remediation measures.
  6. Integration with Other AWS Services: GuardDuty integrates with other AWS services such as AWS CloudTrail, Amazon S3, and AWS Lambda, to provide comprehensive security monitoring and threat detection capabilities.

Conclusion

In conclusion, creating AWS GuardDuty using Terraform is a straightforward process that can significantly enhance the security posture of your AWS environment. With the ability to detect and respond to potential threats in real time, GuardDuty offers a valuable layer of security that can help protect your business from cyber-attacks. By leveraging the power of Infrastructure as Code (IaC) with Terraform, you can automate the process of setting up GuardDuty, enabling you to quickly and easily configure the service and scale it to meet the needs of your organization.

With AWS GuardDuty and Terraform, you can rest assured that your AWS environment is secure and protected and that you are well-equipped to respond to any potential security threats that may arise. So why not give it a try and see the benefits for yourself?

 

How to Enable Proactive Insights with Amazon DevOps Guru for Amazon RDS

Introduction

Amazon DevOps Guru for RDS is a fully managed service that helps analyze the performance of the Amazon Aurora MySQL-Compatible and PostgreSQL-Compatible engines. With DevOps Guru, you only pay for what you use.

DevOps Guru for RDS relies on machine learning (ML) and advanced statistical methods, so it can detect problems before they get worse, and it recommends solutions by analyzing the issues it finds.

A proactive insight lets you know about problematic behaviour before it causes trouble. It contains anomalies, with recommendations and related metrics, to help you address issues in your Aurora databases before they become bigger problems.

Learn more about Proactive insight

In this blog, we are going to learn how to enable DevOps Guru on an Amazon Aurora RDS cluster.

Enable DevOps Guru at Region Level

First, we need to enable DevOps Guru at the regional level in our AWS Account. Go to the Amazon DevOps Guru page and click the Get Started link.

  • First, choose Monitor applications in the current AWS account. If you want to monitor across your entire organization, choose the first option instead.

  • DevOps Guru will create a new IAM role to evaluate your AWS resources.

  • For Amazon DevOps Guru analysis coverage, select Choose later; we can configure it afterwards. Until then, it won't analyze any resources and no cost is incurred.

  • Click the Enable button.

Create RDS with DevOps Guru Enabled

Go to the RDS console and click Dashboard followed by Create database.

To choose a database creation method, select Standard create.

For Engine options, choose one of the Aurora engines, because right now DevOps Guru supports only the Amazon Aurora MySQL-Compatible and PostgreSQL-Compatible engines.

Provide a DB cluster name, plus a username and password for the database.

Read now: Upgrade MySQL version at minimal downtime

Under Instance configuration, for the DB instance class, note that only the g-type instances support DevOps Guru, so choose one of the g-type instances for your database cluster.

For Availability & durability, choose Don't create an Aurora Replica for this test.

  • Under the Monitoring section, Turn on Performance Insights is enabled by default, so just leave it as is. If Performance Insights is not enabled, you cannot enable DevOps Guru.

  • Then enable Turn on DevOps Guru.

  • For the Tag key, we need to add a tag to our database; DevOps Guru uses this tag to identify the resources to be analyzed.

  • Most importantly, the tag key must begin with the prefix "devops-guru-". This isn't case-sensitive: for example, "devops-guru-default" or "DevOps-Guru-test".

  • For the Tag value, we can give anything; for now, just leave it as default.

  • Fill in all the other details and click the Create database button. (A CLI sketch for tagging an existing cluster follows this list.)
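For an existing cluster, the same tag can be attached from the CLI (a sketch; the cluster ARN is a placeholder):

aws rds add-tags-to-resource \
  --resource-name arn:aws:rds:ap-south-1:123456789012:cluster:database-1 \
  --tags "Key=devops-guru-default,Value=test"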

Wait until the database status changes to Available.

Now open Amazon DevOps Guru, and you can see the database cluster and database instance under the System health overview section.

Our DevOps Guru setup is now ready. DevOps Guru monitors the database via Performance Insights; if any problem occurs, it analyzes the issue before it gets worse and also suggests possible solutions.

Enable DevOps Guru in Existing RDS Cluster

If you want to enable DevOps Guru on an existing Aurora cluster, follow the steps below.

  • In the left-side panel of the RDS console, click the Performance Insights link.

  • Select the database instance on which you want to enable DevOps Guru.

  • You will see a DevOps Guru for RDS toggle, or a notification banner saying Turn on DevOps Guru for RDS; you can use either one to enable it.

Add tags as described in the earlier steps and click the Turn on DevOps Guru button.

Now navigate to the Amazon DevOps Guru dashboard, and you can see your database shown there.

Add notifications in DevOps Guru

In the left navigation panel of the DevOps Guru console click the Current account section.

Under Settings, in the Notifications section, click the Edit button to add notifications.

Click the Add SNS topic button.

  • For the SNS notification topic, select Create a new SNS topic; if you want to use an existing topic, select the first choice instead.
  • Then enter a name for the new SNS topic.
  • Under the Notification configuration, choose whichever options suit you and click Save. (A CLI sketch follows this list.)
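For reference, a notification channel can also be attached from the CLI; this is a sketch, assuming an existing topic ARN:

aws devops-guru add-notification-channel \
  --config "Sns={TopicArn=arn:aws:sns:ap-south-1:123456789012:devops-guru-alerts}"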

Conclusion

That's all set. From now on, DevOps Guru will monitor your database and notify you about potential operational issues before they become bigger problems. With Amazon DevOps Guru, you pay only for what you use. Performance Insights only collects database metrics and does not raise alerts when something goes wrong; DevOps Guru adds detailed alerts about each problem, along with possible solutions, reducing problem-solving time from days to minutes. DevOps Guru is one of the best monitoring features for an Amazon Aurora RDS cluster.

Also Read: Stop and Start RDS Automatically

I hope you enjoyed reading this article… See you again in a new one.

How to Stop and Start an RDS Instance Automatically using Systems Manager to Save Your AWS Bill

Introduction

AWS provides a fully managed database service called RDS. What does "fully managed" mean? It has everything required to run a database in the cloud, and we don't need to worry about licensing. But there are still some things customers must manage, such as database availability, choosing the right instance size, maintaining backups, cost optimization, and so on.

If you want to know how to upgrade an RDS database with minimal downtime, feel free to check the following links.

Here we are going to discuss cost optimization based on the run time of a database. Suppose you want to run your database instance only during certain hours, say 9 AM to 9 PM. Manually starting the database at 9 AM and stopping it at 9 PM every day is tedious, and sometimes you forget to start it, which leads to application downtime during working hours.

Instead, you can set up an automated process that stops and starts the RDS database daily at the times you specify. Don't know how to do that? Don't worry, here is a very simple end-to-end solution.

AWS provides a service called Systems Manager, which we are going to use to stop and start our database. Let's get into the details.

Create an IAM Role and Policy for Systems Manager

  • In the AWS IAM console, click Create role.

  • Set the Trusted entity type to AWS service and, for Use case, search for Systems Manager and select it.

  • Then click the Next button.

Now click the Create policy button to create a new IAM policy.

Copy the code below, paste it into the JSON editor, and click Next: Tags followed by the Next button.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds:Describe*",
                "rds:Start*",
                "rds:Stop*",
                "rds:Reboot*"
            ],
            "Resource": "*"
        }
    ]
}
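If you prefer the CLI, the same policy can be created from a file (a sketch, assuming the JSON above is saved as rds-stop-start.json):

aws iam create-policy \
  --policy-name StopStartRebootRDS \
  --policy-document file://rds-stop-start.json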

  • On the policy review page, enter a name for your IAM policy, such as ‘StopStartRebootRDS’.

  • Then click Create policy.

  • Now navigate back to the IAM role page and click the refresh button.

  • Search for ‘stopstart’, choose the IAM policy you created, and click Next.

On the review page, enter a role name such as ‘StopStartRebootRDS’.

Scroll down to the bottom and you can see the attached permissions. Finally, click Create role.

Create SSM Association for Stop RDS Instance

  • Navigate to the Systems Manager console and, on the left side, click the State Manager link under the Node management section.

  • Now click the Create association button.

  • For Name, enter StopInstanceRDS.

  • Under the Document section, search for ‘AWS-Stop’ and choose ‘AWS-StopRdsInstance’.

  • For execution, choose Simple execution.

  • Under the Input parameters section, for InstanceId, provide the identifier of the RDS instance you want to stop and start periodically.

  • For AutomationAssumeRole, select the role we created at the beginning of this blog.

  • Under the Specify schedule section, choose On Schedule.

  • For Specify with, choose CRON schedule builder, followed by Daily. If you want hourly timing instead, select the Hourly type.

  • Enter the time at which you want this association to stop the database. State Manager uses UTC, so provide the time in UTC (see the CLI sketch after this list).

  • Tick Apply association only at the next specified cron interval; otherwise, the association will trigger as soon as it is created.

  • Finally, click the Create association button.
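For reference, the equivalent association can be created from the CLI. This is a sketch with placeholder values; note the 6-field cron format and the UTC times (for example, 9:00 PM IST is 15:30 UTC):

aws ssm create-association \
  --association-name StopInstanceRDS \
  --name AWS-StopRdsInstance \
  --parameters "InstanceId=mydb-instance,AutomationAssumeRole=arn:aws:iam::123456789012:role/StopStartRebootRDS" \
  --schedule-expression "cron(30 15 * * ? *)" \
  --apply-only-at-cron-interval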

Create SSM Association for Start RDS Instance

  • Now we have to create another association, this time to start the RDS instance.

  • For Name, enter ‘StartInstanceRDS’.

  • Under the Document section, search for ‘AWS-Start’ and choose ‘AWS-StartRdsInstance’.

  • For execution, choose Simple execution.

  • Under the Input parameters section, for InstanceId, provide the identifier of the RDS instance you want to stop and start periodically.

  • For AutomationAssumeRole, select the StopStartRebootRDS role.

  • Under the Specify schedule section, choose On Schedule.

  • For Specify with, choose CRON schedule builder, followed by Daily.

  • Enter the time in UTC at which you want to start the database.

  • Tick Apply association only at the next specified cron interval.

  • Finally, click the Create association button.

You have now successfully created associations to stop and start the RDS instance.

You can check the execution details: choose either association and open the Execution history section.

Conclusion

This simple process helps you automate your database run times. It saves the cost of dev and test databases when they are not in use, leading to compute cost savings. However, keep in mind that although we stop the databases, their storage costs still apply.

I hope this article helps you learn something new. If you have any doubts or need any clarification, feel free to comment below; we will be happy to help.

Multi-Region Server Migration using AWS Application Migration Service (MGN)

Introduction

In the past two blog posts, we covered how to use the Database Migration Service (DMS) to upgrade a database.

In this article, we are going to cover another AWS service, called MGN (Application Migration Service). What is the purpose of MGN? With DMS we can migrate our databases across regions or accounts, whereas MGN helps migrate the application servers running on EC2 instances from one region to another.

This really helps when migrating an application to multiple regions, to host it in multiple geolocations, and there is no application downtime during the process. MGN replicates our servers to other regions.

Prerequisites

  • An application, or just an Apache server, running on an EC2 instance.

Create IAM user

Go to the AWS console, navigate to IAM -> Users, and click Add user.

Enter a user name; for Select AWS access type, select Access key – Programmatic access, and click the Next: Permissions button.

Under Set permissions, choose Attach existing policies directly.

Search for applicationmigration, choose AWSApplicationMigrationAgentPolicy, and click Next: Tags.

Review the user details and click Create user.

It will show the Access key and Secret access key for the new user. Copy the credentials and save them somewhere safe.

Add Source Servers in MGN in Target Region

Navigate to Application Migration Service in the Target Region.

For this scenario, ap-south-1 is my target region; by target region, I mean the region to which we need to migrate our application.

On the left side navigation, choose Source servers and click Add servers.

Select the operating system your application is currently running on in the source region.

For IAM access key ID and IAM secret access key, provide the credentials of the user we created at the start of this demo.

Copy the two commands shown on that screen and save them in a notepad.

SSH into your source server and run the two commands from the previous step; they install the MGN replication agent.

Once the MGN agent installation succeeds, you can see the source server in the MGN dashboard in the target region.

Select the source server and open the Migration dashboard; the lifecycle status will be Not ready.

Wait until the data replication process completes.

Modify Launch Template for Server

Edit General launch settings

Go to Launch settings and click the Edit button inside General launch settings.

Choose Off for Instance type right sizing, leave everything else as default, and click Save settings.

Edit EC2 Launch Template

Now, under Launch settings, click Modify for the EC2 launch template; it opens the EC2 Launch Templates page.

For AMI, choose Don't include in launch template.

For Instance type, choose Manually select instance type, then pick any instance type you want; for this case we choose t2.micro.

  • Under Firewall (security groups), choose Create security group.
  • Provide a security group name and description.
  • Then click Add rule; for Type choose SSH, and for Source type choose Anywhere.
  • Click Add rule again; for Type choose HTTP, and for Source type choose Anywhere.

Expand advanced network configuration and for Auto-assign public IP, choose Enable.

That's all for the launch template; click Create template version.

Choose the launch template we edited, click Actions, and then Set default version.

Choose the latest version for Template version and click Set as default version.

Why do we do this? Every time we edit a launch template, a new version is created, but the latest version is not automatically the default version, and only the default version is used when launching instances. So whenever you change the template, you must set the default version to the latest one.
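The same default-version change can be made from the CLI (a sketch; the template ID and version number are placeholders):

aws ec2 modify-launch-template \
  --launch-template-id lt-0123456789abcdef0 \
  --default-version 2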

Testing the Migration in Target Region

Launch Test Instance

Go to the MGN page and choose the source server. Once the migration lifecycle shows Ready for testing, choose Test and cutover and, under Testing, click Launch test instances.

After that, the migration lifecycle shows Test in progress. Click View job details.

You can see the job logs, and the status will be in the Pending state.

Validate the launch of the test instance by confirming the following details:

  • Alerts column = Launched
  • Migration lifecycle column = Test in progress

Once the instance is successfully launched, go to the EC2 dashboard and choose the test instance.

Under the Details section you can see the Public IPv4 address.

Copy the address, paste it into a browser, and confirm that your application has been successfully migrated to the test instance.

Remove Test Instance

Now we need to delete the test instance. Select the source server, click Test and cutover, and click Mark as "Ready for cutover".

It asks you to confirm deletion of the test instance; click Yes and then Continue.

Validate the status of the termination job and cutover readiness:

  • Migration Lifecycle = Ready for cutover

Create Migrated Server Instance in target Region

Select the source server, click Test and cutover, and this time click Launch cutover instances under the Cutover section.

Monitor the indicators to validate that your cutover instance launched successfully:

  • Alerts  = Launched
  • Migration lifecycle column = Cutover in progress

Go to EC2 and choose the instance created by MGN.

Copy the instance's Public IPv4 address and paste it into a browser to verify the migration of your application.

If your website displays perfectly, your application has been migrated successfully across regions.

Go to AWS MGN, choose the source server, click Test and cutover, and click Finalize cutover under the Cutover section.

Once you click Finalize cutover, the following statuses change:

  • Migration lifecycle = Cutover complete
  • Data replication status = Disconnected
  • Next step = Mark as archived

Final Step

The continuous replication from the source server to the target region is now disconnected, so you can delete the server in the source region and point your DNS to the new server in the target region.

Using the MGN service, we can carry out this migration between regions without any downtime for our application.

I hope you enjoyed reading this article. See you soon in another one.

How to Upgrade Amazon Aurora PostgreSQL to the Latest Version with Zero to Minimal Downtime using AWS DMS

In our previous blog, we saw how to upgrade RDS MySQL to the latest version with zero downtime using AWS DMS.

In this blog, we are going to discuss upgrading Amazon Aurora PostgreSQL from version 10.21 to the latest version (currently 14.5) using DMS.

We follow much the same procedure as in the previous blog, with a few small differences.

Create Parameter Groups

Open your AWS console and search for RDS and click it.

On the left side of the RDS page, you can see Parameter groups. Enter that section and click the Create parameter group button in the top-right corner of the page.

  • First, we have to create a parameter group for Aurora PostgreSQL version 10, the old version of the database.
  • Set the Parameter group family to aurora-postgresql10.
  • Enter a group name such as cluster-aurora-postgresql10, or whatever you prefer.
  • Then provide a description for the parameter group and click Create.

Now select the parameter group you just created, and click Parameter group actions -> Edit.

Search for the following parameters and change their values as shown below (a CLI sketch follows the list).

  • rds.logical_replication        = 1
  • wal_sender_timeout           = 0
  • rds.log_retention_period   = 7200
  • max_replication_slots        = 20
  • session_replication_role    = replica
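The same values can also be applied from the CLI. This is a sketch, assuming the group name above; the ApplyMethod choices reflect the assumption that static parameters such as rds.logical_replication only take effect after a reboot:

aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name cluster-aurora-postgresql10 \
  --parameters \
    "ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot" \
    "ParameterName=wal_sender_timeout,ParameterValue=0,ApplyMethod=immediate" \
    "ParameterName=rds.log_retention_period,ParameterValue=7200,ApplyMethod=immediate" \
    "ParameterName=max_replication_slots,ParameterValue=20,ApplyMethod=pending-reboot" \
    "ParameterName=session_replication_role,ParameterValue=replica,ApplyMethod=immediate"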

Once you have completed all the parameter changes, click the Preview changes button to review them.

After verifying the changes, click Save changes.

Now create another parameter group, this one for Aurora PostgreSQL version 14, the latest version.

  • Set the Parameter group family to aurora-postgresql14.
  • Enter a group name such as cluster-aurora-postgresql14, or whatever you prefer.
  • Then provide a description for the parameter group and click Create.

Select the Parameter group for the latest PostgreSQL version and click Parameter group actions -> Edit.

Change the following parameters’ values and click Save changes.

  • rds.logical_replication        = 1
  • wal_sender_timeout           = 0
  • rds.log_retention_period   = 7200
  • max_replication_slots        = 20

Create Aurora PostgreSQL DB Clusters

Aurora PostgreSQL DB Cluster with version 10.21

In the left navigation of Amazon RDS, click the Databases section, followed by Create database.

Select the Standard create option. For the engine, choose Amazon Aurora.

For the edition, choose Amazon Aurora PostgreSQL-Compatible Edition.

For Available versions, choose an old version available at the time you are working; here we select PostgreSQL 10.21.

For Templates, choose the Free tier option to reduce the cost of creating the database; we are creating it for testing purposes only, so the free tier is enough for now.

Under Settings, for DB instance identifier, provide a name like source-cluster for your database instance.

For Master username, enter postgres; for Master password, provide a password for your database user and type it again for confirmation.

Under Instance configuration, for the DB instance class, select Burstable classes and choose the smallest size for this demo.

For Availability & durability, choose Don't create an Aurora Replica.

  • If you choose the other option, it will create a reader instance in a different Availability Zone for scalability and high availability, which costs more than the first option.
  • In production scenarios, we should create an Aurora Replica for high availability, but for now we will run with a single instance.

  • In the Connectivity section, first select the Don't connect to an EC2 compute resource option; we can configure this later.
  • Next, for Network type choose IPv4, and for VPC choose any VPC you want to use; I chose the default one.
  • For the DB subnet group, select default.
  • For Public access, choose Yes; otherwise we cannot connect to the database from outside the VPC.
  • For the Security group, select the Create new option and provide a name for it.
  • Choose any Availability Zone you want.

Scroll down and expand the Additional configuration section.

For the Initial database name, enter a name for the database.

Under the DB cluster parameter group, choose the parameter group you created at the beginning of this tutorial for Aurora PostgreSQL version 10.

Leave all other things as default, then click Create database.

The source database cluster is now being created along with its database instance.

Aurora PostgreSQL DB Cluster with the latest version

Next, we are going to create a PostgreSQL database cluster with the latest version, so click Create database again.

For Engine type, choose Amazon Aurora.

For Edition choose Amazon Aurora PostgreSQL-Compatible Edition.

For Available versions, choose the latest version available at the time you are working; here we select PostgreSQL 14.5.

Choose Dev/Test in the Templates section.

Under Settings, fill in the DB cluster identifier and master credentials, just as you did for the source cluster.

For the instance class, choose Burstable classes for this demo, and for Multi-AZ deployment, choose Don't create an Aurora Replica.

For VPC, choose the same VPC you selected for the old-version database cluster, and do the same for the DB subnet group.

Click Yes for Public access.

For the VPC security group, click Choose existing and pick the security group created with the previous database.

Finally, for the Availability Zone, choose the same Availability Zone in which you created your old-version database cluster.

Under Additional configuration, enter a name for the Initial database name, and for the DB cluster parameter group, choose the parameter group you created earlier for the latest PostgreSQL version.

Leave all others as default and click Create database.

The second DB cluster, with its DB instance, is now also being created. Wait for the creation to complete.

Modify the Security Group Inbound Rule

In the meantime, let’s modify the Inbound rule in the security group of our database.

Go to the VPC section in the AWS console and select Security groups in the left navigation section.

Choose the security group you created with the database, select Inbound rules, and click Edit inbound rules.

Change the Source to Anywhere, add ‘0.0.0.0/0’, and then click Save rules.

Create Table in Source Database

Navigate to the RDS Databases section, and you can see that all the DB clusters and DB instances have been created successfully.

Select the source DB cluster; under the Configuration section, you can see the engine version, confirming that this DB cluster runs the older version of PostgreSQL.

With the source DB cluster selected, under the Connectivity & security section you can find the endpoints of your DB cluster.

Copy the endpoint of the writer instance and note it down somewhere.

Connect to your EC2 instance via SSH and log in as the root user.

This instance should be in the same VPC as the DB cluster.

Then run the following command with your source DB Cluster Endpoint.

psql --host=<source_db_endpoint> --port=5432 --username=<db_username> --password --dbname=<db_name>

It prompts for a password for the db user.

Enter the following commands to create a table called ‘COMPANY‘ and insert a row of data into the table.

CREATE TABLE COMPANY(
ID INT PRIMARY KEY NOT NULL,
NAME TEXT NOT NULL,
AGE INT NOT NULL,
ADDRESS CHAR(50),
SALARY REAL
);

INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY) VALUES (1, 'Paul', 32, 'California', 20000.00);

Run the ‘\d‘ command to list the tables inside the database.

Run the below command to see the contents of the table.

select * from COMPANY;

Run the command ‘exit;‘ to exit from the database.

Now select the target DB cluster; under the Configuration section, you can find the database version. For this scenario, the latest version of PostgreSQL is 14.5.

Under the Connectivity & security section, copy the endpoint of the writer instance.

Log in to the target database with the following command.

For target_db_endpoint, enter the endpoint name of the target cluster.

psql --host=<target_db_endpoint> --port=5432 --username=<db_username> --password --dbname=<db_name>

Run the ‘\d‘ command to see the list of tables. You will not find any tables in the target database, because we have not created any here yet.

So run the ‘exit‘ command to log out from the target database.

Database Migration Service

Now we are going to migrate the Source Database to the Target database using DMS.

We now follow the same procedure as the previous blog.

Create Replication Instance

Click the link below and follow the process to create a replication instance for the database migration.

Link for creating replication instance

Creating the replication instance takes around 10 to 15 minutes; wait until it completes.

Create Source and Target Endpoints

Once the replication instance is created, click Endpoints in the left navigation section, then click Create endpoint.

For the endpoint type, choose Source endpoint and tick Select RDS DB instance.

For RDS Instance, choose the source DB cluster with the old PostgreSQL version.

  • Under Endpoint configuration, the Endpoint identifier and Source engine fields are filled in automatically.
  • For Access to endpoint database, choose Provide access information manually.
  • Enter only the password for the database; all the other fields are filled in automatically.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Source Endpoint Configuration

  • Scroll down to the bottom and expand the Test endpoint connection section.
  • Choose the same VPC that you are using in this demo and choose the replication instance that you created in the previous step.
  • Click the Run test button to check the connection with the database.
  • If the status shows successful, as in the picture below, click the Create endpoint button.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Source Endpoint Test Connection

The endpoint for the source database is now Active.

Now click Create endpoint to create the target endpoint for the target database.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Source Endpoint Created

For the endpoint type choose Target endpoint and check the box for Select RDS DB instance.

For RDS Instance, choose the target DB cluster with the latest PostgreSQL version.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Choose Target Endpoint

  • Under Endpoint configuration, the Endpoint identifier and Target engine fields are filled in automatically.
  • For Access to endpoint database, choose Provide access information manually.
  • Enter only the password for the database; all the other fields are filled in automatically.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Target Endpoint Configuration

  • Scroll down to the bottom and expand the Test endpoint connection section.
  • Choose the VPC and the replication instance.
  • Click Run test, and if the connection is successful, click the Create endpoint button.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Target Endpoint Test Connection

Both endpoints are created successfully and are in Active status.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Created Source and Target Endpoints

Create Database Migration Task

Now it’s time to create a Database migration task.

Navigate to the Database migration tasks section and click Create database migration task.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Creating Database Migration Task

  • For Task identifier, provide a name for the task.
  • Choose the replication instance which we created for the Replication instance.
  • For the Source database endpoint, choose the endpoint you have created for the source and for the Target database endpoint, choose the endpoint you’ve created for the target.
  • For the Migration type, choose Migrate existing data and replicate ongoing changes.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Database Migration Task Configuration

Scroll down and under the Task settings, for the Editing mode choose Wizard.

Change the following and keep the default values for the remaining options:

  1. Target table operation mode – Truncate
  2. Turn on validation – Enable
  3. Turn on Cloudwatch Logs – Enable

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Database Migration Task Settings

  • Scroll down under Table mappings and choose Wizard for the editing mode.
  • Expand Selection rules and click Add new selection rule.
  • For Schema, select Enter a schema.
  • Leave everything else as default, so all tables inside the source database will be migrated to the target database (the equivalent CLI calls are sketched below).
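For reference, the same selection rule and task settings can be expressed as JSON and passed to the AWS CLI instead of the console wizard. This is a hedged sketch, assuming the endpoint and replication instance ARNs from the previous steps; the schema name public and the file names are placeholders to adapt.

# table-mappings.json: include every table in the chosen schema
cat > table-mappings.json <<'EOF'
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-all-tables",
      "object-locator": { "schema-name": "public", "table-name": "%" },
      "rule-action": "include"
    }
  ]
}
EOF

# task-settings.json: truncate target tables, enable validation and CloudWatch logs
cat > task-settings.json <<'EOF'
{
  "FullLoadSettings": { "TargetTablePrepMode": "TRUNCATE_BEFORE_LOAD" },
  "ValidationSettings": { "EnableValidation": true },
  "Logging": { "EnableLogging": true }
}
EOF

# Create the task with full load plus ongoing replication (CDC)
aws dms create-replication-task \
    --replication-task-identifier pg-upgrade-task \
    --source-endpoint-arn <source_endpoint_arn> \
    --target-endpoint-arn <target_endpoint_arn> \
    --replication-instance-arn <replication_instance_arn> \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json \
    --replication-task-settings file://task-settings.json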

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Database Migration Table Mappings

Scroll down to the bottom and click Create task.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Database Migration Create Task

The migration task gets created. Wait until it shows Running.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Database Migration Task Creating

Once the creation is completed, you can see the status of the task as Load complete, replication ongoing.

Click Table statistics, and you can see the Load state is Table completed.

So the Database migration process is successfully done.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Database Migration Task Table Statistics

To check the migration, log in to the target database and run the following commands to see whether the contents were migrated from the source.

\d
select * from COMPANY;

You can see the table is created and the contents are also stored in the target database.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Check Migration in Target DB

Check the Replication

In Source Database

First, log in to the source database and run the following command to add some new rows to the COMPANY table.

INSERT INTO COMPANY VALUES (2, 'Arun', 28, 'New York', 20000.00), (3, 'Abishek', 36, 'Chicago', 42000.00), (4, 'Sarath', 26, 'Los Angeles', 25000.00), (5, 'Xavier', 42, 'San Francisco', 39000.00);

Run the below command to see the added rows inside the table.

select * from COMPANY;
exit;

As you can see the table contains a total of 5 rows. Now check whether it will be replicated in the target database or not.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Adding Rows Inside Source Database

In Target Database

Log in to the target database and run the below command to see the replication.

select * from COMPANY;
exit;

You can see that now the table in the target database also has 5 rows.

So any contents added to or removed from the source database will also be replicated to the target database.

Upgrade Aurora PostgreSQL latest version with 0 Downtime using DMS Check Replication in Target Database

Final Step

Up to this point, whatever changes or updates are made to the old-version source database are replicated in parallel to the secondary database running the latest PostgreSQL version. But we still need to make the latest-version database the primary one. For this, just point your application code at the upgraded database as its primary database. The application will then use the database that was upgraded to the latest version. Finally, delete the old Aurora PostgreSQL version database.
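How the cutover looks depends on how your application reads its database configuration. As one hypothetical example, if the connection host lives in an environment file, the switch is a single value change followed by an application restart (DB_HOST, the file path, and the service name below are assumed names, not part of this tutorial's setup):

# Hypothetical cutover: point DB_HOST at the upgraded cluster's writer endpoint
sed -i 's|^DB_HOST=.*|DB_HOST=<target_db_endpoint>|' /path/to/app/.env
sudo systemctl restart my-app    # assumed service name for the application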

We have successfully upgraded the RDS Aurora PostgreSQL database to the latest PostgreSQL version with nearly zero downtime; during this process there is little to no downtime for your application.

That’s all I want to show you. See you in the next article. Thank you.

How to setup Memory (RAM) and diskspace monitor for EC2 instance in AWS CloudWatch
Monitoring the performance of your EC2 instances in AWS is crucial for ensuring that your applications run smoothly. One important aspect of monitoring is keeping an eye on the memory (RAM) and disk space usage of your instances. In this blog post, we will show you how to set up memory and disk space monitoring for your EC2 instances using AWS CloudWatch.

By following these steps, you will be able to track the usage of these resources in real-time and receive alerts when usage exceeds a certain threshold. This will help you identify potential issues and take action before they become critical, ensuring that your applications continue to perform at their best.

Monitoring RAM and disk space helps prevent server slowdowns and crashes, so we are now going to learn how to monitor RAM and disk space for our EC2 instance in a very simple way.

Supported systems

These monitoring scripts are supported on the following Linux flavours:

  • Amazon Linux 2

  • Amazon Linux AMI 2014.09.2 and later

  • Red Hat Enterprise Linux 6.9 and 7.4

  • SUSE Linux Enterprise Server 12

  • Ubuntu Server 14.04 and 16.04

Create an IAM role for the EC2 instance

Go to AWS console, open IAM service → under roles → click create role

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch role

For the trusted entity type, make sure AWS service is selected, then select EC2 under Use case and click Next.


setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch use case

Now click Create policy to write our custom policy with CloudWatch permissions.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch create policy

Click JSON to write the code.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch json

Just remove the existing lines under the JSON section.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch remove json

Then paste the following JSON code under the JSON section and click Next → Next.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:ListMetrics",
                "cloudwatch:PutMetricData",
                "ec2:DescribeTags"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch json code

Now give a name for our new policy and then click Create policy.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch policy name

Go back to the previous tab, refresh the page, and search for your newly created policy by entering the name you gave in the previous step.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch search policy

Press Enter, select our policy, and click Next.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch select policy

Just give a name for our role and click Create role.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch role name
setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch create role

Now we have successfully created a new role for our EC2 instance with CloudWatch permissions.

Attach the role to our EC2 instance

Open the EC2 service, select your instance → click Actions

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch ec2 action

Click security → modify IAM role

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch modify IAM

Now just select our role and click update IAM role

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch select IAM

Install the required packages on the EC2 instance

For Amazon Linux 2:

sudo yum install -y perl-Switch perl-DateTime perl-Sys-Syslog perl-LWP-Protocol-https perl-Digest-SHA.x86_64

For Ubuntu:

sudo apt-get update

sudo apt-get install unzip

sudo apt-get install libwww-perl libdatetime-perl

For Red Hat Enterprise Linux 7:

sudo yum install perl-Switch perl-DateTime perl-Sys-Syslog perl-LWP-Protocol-https perl-Digest-SHA --enablerepo="rhui-REGION-rhel-server-optional" -y 

sudo yum install zip unzip

For SUSE:

In SUSE Linux Enterprise Server 12, you need to download the perl-Switch package

wget http://download.opensuse.org/repositories/devel:/languages:/perl/SLE_12_SP3/noarch/perl-Switch-2.17-32.1.noarch.rpm

sudo rpm -i perl-Switch-2.17-32.1.noarch.rpm

Now install the packages:

sudo zypper install perl-Switch perl-DateTime

sudo zypper install -y "perl(LWP::Protocol::https)"

Install monitoring scripts on EC2

To install the monitoring script, use the below command

curl https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip -O

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch install script

Unzip the monitoring script archive and remove the zip file.

unzip CloudWatchMonitoringScripts-1.2.2.zip && rm CloudWatchMonitoringScripts-1.2.2.zip

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch remove zip file

Change the directory to aws-scripts-mon, and you can see the script files below.

cd aws-scripts-mon
ls

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch list

When we run the “mon-put-instance-data.pl” script, it sends memory and disk space reports to CloudWatch.

Schedule cron for monitoring script

If we schedule a cron job, it will automatically run our monitoring script at the interval specified in the crontab, and the script will send its report to CloudWatch.

Open the crontab using the below command.

crontab -e

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch crontab

The crontab file will open. In the command below, replace <path-to-script> with your script's path, copy and paste the command into the crontab file, then save and exit.

*/1 * * * * <path-to-script>/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --disk-space-util --disk-space-used --disk-space-avail --disk-path=/

*/1 * * * * /home/ec2-user/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --disk-space-util --disk-space-used --disk-space-avail --disk-path=/

A schedule of “ */1 * * * * ” runs our script every minute.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch crontab file

Crontab Issue

If you are facing a “Command not found” issue when using crontab, follow the below steps to rectify the error.

Install cron

sudo yum install -y cronie cronie-anacron

Now check using the “crontab -e” command; it will work.

Setup Cloud Watch metrics

Go to cloud watch service → Dashboards → create dashboard

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch create dashboard

Enter the name for our dashboard and then click create dashboard

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch dashboard name

Select the type of widget you want

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch add widget

Select System/Linux metrics under the custom namespace.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch namespace

Click “Filesystem, InstanceId, MountPath”.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch metrics

Copy your instance ID and filter your instance by searching for it.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch instance id

Filter and select all three metrics of your instance and click Create widget.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch instance metrics

Enter the Gauge range 0 – 100 and click create widget

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch gauge range
Now under Dashboards, click our newly created dashboard, then click the “+” symbol at the right corner to add the remaining metrics.
setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch disk

Again select the type of widget → System/Linux metrics → click InstanceId.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch metrics

Filter using our instance ID, select all three metrics of your instance, and click Create widget.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch select instance metrics

Enter the Gauge range 0 – 1000 and click create widget

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch gauge range

Now change the custom time to 1 minute; only then will all the changes in our instance's RAM and disk usage be reflected in our dashboard. Then click Save.

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch save

We have successfully completed our CloudWatch monitoring setup for memory (RAM) and disk space! Hereafter, we can easily monitor our EC2 instance's memory and disk usage.

You can check whether our cron is working by using the below command.

cat /var/log/cron

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch cron command

setup Memory(RAM) and diskspace monitor for EC2 instance in AWS CloudWatch cron logs

Our cron is working properly!
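You can also confirm from the CLI that the script's data points are reaching CloudWatch; these scripts publish under the System/Linux custom namespace.

# List the custom metrics published by mon-put-instance-data.pl
aws cloudwatch list-metrics --namespace System/Linux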

How to upgrade MySQL version in RDS with zero to minimal downtime using AWS DMS
Introduction
  • We can easily upgrade an AWS MySQL RDS database instance from MySQL 5.7 to 8.0 by simply modifying the DB instance version. But when you modify your DB instance's version, the modification takes a minimum of 15 minutes and up to 4 hours or more. In real-world scenarios this leads to downtime in your live application while the DB is upgrading.
  • Nobody likes interruptions. Here I come with a solution that will give you almost zero downtime when upgrading your MySQL RDS database instance.
  • In this article I will show you how to upgrade your MySQL RDS database instance from version 5.7 to 8.0 (or the latest version) with zero downtime using DMS (Database Migration Service).
  • DMS is an AWS service that helps you migrate a database from on-premises, a DB on an EC2 instance, or RDS to the same or any of the types mentioned above. Let's get started…

Create Parameter Groups

Open your AWS console and search for RDS and click it.

On the left side of the RDS page you can see Parameter groups. Click it, then click the Create parameter group button.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime create Pg

  • Choose the Parameter group family mysql5.7.
  • Enter the group name mysql-57, or whatever name you find convenient.
  • Then provide a description for the parameter group and click Create.

 

Also Learn: How to setup RDS auto scaling in AWS 

 

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime create Parameter group

You can see the parameter group has been created.

Select the newly created group, click Parameter group actions, followed by Edit.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Edit Parameter group 5.7

Search for binlog_format and change the value to ROW like the picture below.

Click the Save changes button.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Modify Parameter group 5.7

Create another parameter group with the Parameter group family mysql8.0.

Set the group name to mysql-80, give a description, and click Create.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Edit Parameter group 8.0

Then edit the newly created mysql-80 parameter group, search for binlog_format, change the value to ROW, and finally click Save changes.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Modify Parameter group 8.0
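If you prefer the CLI, both parameter groups and the binlog_format change can be scripted; a minimal sketch using the same names as above:

# 5.7 parameter group with row-based binary logging (needed for DMS CDC)
aws rds create-db-parameter-group \
    --db-parameter-group-name mysql-57 \
    --db-parameter-group-family mysql5.7 \
    --description "MySQL 5.7 group with binlog_format=ROW"

aws rds modify-db-parameter-group \
    --db-parameter-group-name mysql-57 \
    --parameters "ParameterName=binlog_format,ParameterValue=ROW,ApplyMethod=immediate"

# Repeat for the 8.0 group
aws rds create-db-parameter-group \
    --db-parameter-group-name mysql-80 \
    --db-parameter-group-family mysql8.0 \
    --description "MySQL 8.0 group with binlog_format=ROW"

aws rds modify-db-parameter-group \
    --db-parameter-group-name mysql-80 \
    --parameters "ParameterName=binlog_format,ParameterValue=ROW,ApplyMethod=immediate"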

Create RDS Database Instance with MySQL 5.7

In the left navigation section under Amazon RDS, click the Databases section, followed by Create database.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Database

Select the Standard create option. For the engine, choose MySQL.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Database Engine

For the MySQL version choose 5.7.34, because first we are going to create a database on the older version.

For Templates choose the Free tier option. It reduces the cost of creating a database; we are creating this database for testing purposes only, so the free tier is enough for now.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Database Engine version

Under Settings, for DB instance identifier, provide a name for your database instance.

For the master username enter admin, and for Master password provide a password for your database user, then type the password again for confirmation.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Database DB Settings

Under the Instance configuration, for the DB instance class select the smallest size for this demo.

For the storage type choose gp2 and select 10 for Allocated storage.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Database Instance Config

For the Connectivity section, first select Don’t connect to an EC2 compute resource option. We can configure it later.

Next, for the network type choose IPv4, and for VPC choose whichever VPC you want to use; I chose the default one.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Database Connectivity

  • For the DB subnet group select default.
  • For Public access, choose Yes; only then can we connect to our database from outside the VPC.
  • Then for the Security group select the Create new option and provide a name for it.
  • And choose any availability zone that you want.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Database DB Network

Scroll down and expand the Additional configuration section.

Under the DB parameter group, choose the parameter group which you created at the beginning of this tutorial for MySQL version 5.7.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Database Addtional options

Scroll down to the bottom and click Create database.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Database

Now your database is being created. It will take around 10 minutes.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Creating Database
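The equivalent database creation from the CLI looks roughly like this; the identifier mysql57-demo is an example name, and the password placeholder must be replaced:

aws rds create-db-instance \
    --db-instance-identifier mysql57-demo \
    --engine mysql \
    --engine-version 5.7.34 \
    --db-instance-class db.t3.micro \
    --master-username admin \
    --master-user-password '<your-password>' \
    --allocated-storage 10 \
    --storage-type gp2 \
    --db-parameter-group-name mysql-57 \
    --publicly-accessible

# Block until the instance reports Available
aws rds wait db-instance-available --db-instance-identifier mysql57-demo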

In the meantime, we have to modify the security group which was created by the RDS database.

Go to the VPC section, then the Security groups section, and search for the security group created by RDS.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Security Group

Select the security group and select inbound rules.

Click the Edit inbound rules button.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Select inbound rules

You can see a single inbound rule with port range 3306 for the MySQL database connection.

For the source choose Anywhere and provide the 0.0.0.0/0 IP range like the picture below. Then click the Save rules button.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Edit inbound rules
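The same inbound rule can be added with one CLI call. Note that 0.0.0.0/0 opens the port to the whole internet, which is acceptable only for a short-lived demo; in production, restrict the source to your own IP or security group.

# Open MySQL port 3306 to the world (demo only)
aws ec2 authorize-security-group-ingress \
    --group-id <security-group-id> \
    --protocol tcp --port 3306 --cidr 0.0.0.0/0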

Create Tables and add Contents

After 10 minutes your MySQL RDS database is successfully created and its status will be Available.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime DB Available

Choose the database that you just created; under the Connectivity & security section you can find the endpoint of your database.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime DB Endpoint

Connect to your EC2 instance via SSH and log in as the root user.

Then run the following command with your main MySQL database Endpoint.

mysql -h <your_main_database_endpoint> -P 3306 -u admin -p

It asks for a password; provide the password for the admin user.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime DB Login

Now, first we are going to create a database inside the MySQL server using the following command.

CREATE DATABASE <database_name>;

Then create a table with the name Persons.

CREATE TABLE <database_name>.Persons (
    PersonID int,
    LastName varchar(255),
    FirstName varchar(255),
    Gender varchar(255)
);

Now we insert one person's details using the following command.

INSERT INTO <database_name>.Persons
VALUES ('0023', 'Holland', 'Tom', 'male');

Run the following command to view the contents of the table.

SELECT * FROM <database_name>.Persons;

It will show the table like the below screenshot.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime DB Create table and Insert

Create another RDS DB Instance from the Primary Instance.

Select the main MySQL RDS database.

Select Modify and click Restore to point in time.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Modify Main Database

Set the Restore time to the latest restorable time.

For DB engine select MySQL Community Edition and for DB instance identifier provide a name for your restoring database instance.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Restore Database Point In Time

  • For the instance configuration, leave it as default.
  • For Multi-AZ deployment under Availability & durability, select Do not create a standby instance for testing purposes.
  • In a real production environment, creating a standby instance is the recommended way.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Restore Database Point In Time

Scroll down to the bottom. Leave all the other things as default.

Click the Restore to point in time button to create a new database instance.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Restore Database Point In Time Create

Now you can see a new restored database instance is being created.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Restore Database Point In Time Creating

Once the restored DB instance is created you can see the Endpoint of the restored database under the Connectivity & security.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Restore Database Point In Time Created
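For reference, the same point-in-time restore can be started from the CLI; the instance identifiers below are the example names used in this walkthrough:

aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier mysql57-demo \
    --target-db-instance-identifier mysql57-restored \
    --use-latest-restorable-time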

Using the new Endpoint of the restored database instance run the following command in your EC2 instance to connect with the restored database.

mysql -h <restored_database_endpoint> -P 3306 -u admin -p

Run ‘SHOW DATABASES;‘ command and you can see the database which is created in the primary database.

Run the following command to check whether the table we created in the primary database is present in this restored database.

SELECT * FROM <database_name>.Persons;

If your output is like the below picture, your restored database instance is successfully restored from the primary database instance.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Restore Database Point In Time Checking

Database Migration Service

  • Now we are going to create replication from the primary database instance to the restored (secondary) database instance.
  • What replication means here: whenever changes are made to the primary database, they are immediately replicated to the secondary database.
  • This is a homogeneous database migration (the source and target database engines are the same), so no schema conversion is needed.

Create Replication Instance

  • Navigate to the DMS page in AWS console and on the left side panel under the Migrate data section click Replication instances.
  • Replication instances help to create a connection between the source and target database instances to transfer the data.
  • Click the Create replication instance button.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Replication Instance

Enter a name for your replication instance.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Replication Instances

  • For the instance configuration, choose the smallest instance class; this is for tutorial purposes only.
  • For the Engine version, choose the latest version of DMS.
  • For Multi-AZ, choose Dev or test workload for testing purposes.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Replication Instance Engine Version

For the VPC, select the VPC which you selected for your primary database instance, and the same for the subnet group.

For now, set Publicly accessible to enabled.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Replication Instance Connectivity & Security

  • For the availability zone, choose the one where you created your primary database instance. The Database instances and replication instances should be in the same availability zone.
  • For VPC security groups choose the security group that we created earlier.
  • Click the create replication instance button.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Replication Instance Advanced Settings and Create

Replication instances take around 15 to 20 minutes to create.

Once it is created successfully it shows Available status like the picture below.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Created Replication Instance

Create Source and Target Endpoints

On the left side panel under the Replication instances choose Endpoints.

Then click the Create endpoint button.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Create Endpoints

  • First we are going to create a source endpoint, so choose Source endpoint for the endpoint type.
  • Check the Select RDS DB instance box.
  • Choose the primary RDS DB instance.
  • Under Endpoint configuration, for the Endpoint identifier give a name like the below picture.
  • Choose the source engine as MySQL.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Choose Endpoint Type

Choose Provide access information manually for Access to the endpoint database.

The server name, port, and user name are filled in by default; for the password, enter the password for the user.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Provide Endpoint Details

  • Expand the Test endpoint section, choose the VPC which you use for this demo, and choose the replication instance that you created earlier.
  • Click Run test to test the connection to the database.
  • If the connection was successful it will show like the picture below.
  • Finally click Create endpoint button.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Test Source Endpoint Connection

Now your source endpoint is in Active status.

Click the Create endpoint button to create the target endpoint.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Source Endpoint Created

  • Choose the Endpoint type as Target endpoint and check the box for Select RDS DB instance.
  • Select restored database instance as target RDS instance.
  • For Endpoint identifier, enter a name for the target endpoint  and choose the target engine as MySQL.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Target Endpoint Create

Choose Provide access information manually for Access to the endpoint database.

The server name, port, and user name are filled in by default with the restored database's details; for the password, enter the password for the user.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Target Endpoint details

  • Expand the Test endpoint section, choose the VPC and the replication instance.
  • Click Run test to test the connection to the restored database.
  • If the connection was successful it will show like the picture below.
  • Finally, click the Create endpoint button.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Test Target Endpoint Connection

Now our two endpoints are Active.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Source and target Endpoints

Create Database Migration Task

Now log in to your primary MySQL RDS database and run the following commands to insert another person's details into the table and list the table.

 

INSERT INTO <database_name>.Persons
VALUES ('0024', 'Maguire', 'Toby', 'male');

SELECT * FROM <database_name>.Persons;

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Insert User Details into Table in Primary DB

Log in to the restored database and run the following command to see whether the changes made in the primary database have been replicated to it.

SELECT * FROM <database_name>.Persons;

You will not see the new row in the restored database.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Check Replication In Secondary DB

  • That is because the changes have not been replicated to the secondary database yet.
  • For this we need to create a database migration task.
  • So click the Create database migration task button.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Database Migration Task

  • For Task identifier, provide a name for the task.
  • Choose the replication instance which we created for the Replication instance.
  • For Source database endpoint, choose the endpoint you have created for source and for the Target database endpoint, choose the endpoint you’ve created for target.
  • For the Migration type, choose Migrate existing data and replicate ongoing changes.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Database Migration Task Configuration

Scroll down and under the Task settings, for the Editing mode choose Wizard.

Change the following and keep the default values for the remaining options:

  1. Target table operation mode – Do nothing
  2. Turn on validation – Enable
  3. Turn on Cloudwatch Logs – Enable

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Database Migration Task Settings

  • Scroll down, expand Selection rules, and click Add new selection rule.
  • For Schema, select Enter a schema.
  • For the source name, enter the name of the database which you want to replicate to the target database.
  • In my scenario, the database to be replicated is named test.
  • For the source table name, the % symbol indicates all the tables inside the source database, so leave it as default.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Database Migration Task Schema Settings

Scroll down to the bottom and click Create task.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Database Migration Task Create task

Now your Database migration task will be in starting status.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Database Migration Task Creating

Once its status is shown like the below screenshot, go to the Table statistics section.

You can see the table Persons has been migrated successfully.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime DMS Table Statistics

So log in to your restored database and run the following command to see the updated status.

SELECT * FROM <database_name>.Persons;

You can see that the second row, with PersonID 24, has been replicated from the primary database.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime List All in Table in Secondary DB

Check the Replication

In Primary Database

Now log in to the primary database and run the following commands to add a row to the Persons table.

INSERT INTO <database_name>.Persons
VALUES ('0030', 'Olsen', 'Elizabeth', 'female');

SELECT * FROM <database_name>.Persons;

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Insert Row in Table in Primary DB

In Restored Database

  • Now log in to the secondary database with its endpoint and run the following command to see the replication status.
SELECT * FROM <database_name>.Persons;
  • You can now see that replication from the primary database to the restored database is working successfully.
  • So from now on, whatever changes are made to the primary database are replicated to the restored database almost immediately, with a delay of milliseconds.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime List Users in Table in Secondary DB

Upgrade the RDS Restored Database instance to latest version

  • Now select the restored RDS database instance and click Modify.
  • For the DB engine version, choose the latest version available at the time you are working.
  • Currently, 8.0.31 is the latest version for MySQL.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime List Modify RDS DB

Under the Additional configuration, for the DB parameter group choose the one that you created at the beginning of this tutorial for version 8.0.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime List Modify RDS DB Parameter group

Leave all other things as default and click Continue.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime List Modify RDS DB Continue

Under the Schedule modification section, for When to apply modifications, choose Apply immediately.

Finally click the Modify DB instance button.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime List Modify RDS DB Instance

Now your restored RDS database is in the upgrading status.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime List Modifying RDS DB Instance
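The same upgrade can be triggered from the CLI. A sketch with the example identifiers from this post; note that a 5.7-to-8.0 jump is a major version change, so the CLI requires --allow-major-version-upgrade:

aws rds modify-db-instance \
    --db-instance-identifier mysql57-restored \
    --engine-version 8.0.31 \
    --db-parameter-group-name mysql-80 \
    --allow-major-version-upgrade \
    --apply-immediately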

In the meantime, log in to your primary database and run the following commands to insert a row of data into the Persons table.

INSERT INTO <database_name>.Persons
VALUES ('0035', 'Evans', 'Cris', 'male');

SELECT * FROM <database_name>.Persons;

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime List Insert a Row into Primary DB

  • Wait around 10 minutes for the database to upgrade.
  • Once the upgrade is completed, select the restored database and open the Configuration section.
  • You can see the engine version is 8.0.31, so it was successfully upgraded.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime RDS DB Successfully Upgraded

  • Now we need to check whether the table modification made on the primary database has been replicated to the restored, upgraded database.
  • So, log in to the secondary database and run the following command.
SELECT * FROM <database_name>.Persons;
  • Now you can see the table is updated with all the details from the primary database.

Upgrade MySQL 5.7 RDS DB Instance to Latest Version with Zero Downtime Login to Secondary RDS DB

We have successfully upgraded the RDS database to the latest version.

Final Step

Up to this point, whatever changes or updates are made to the old-version primary database are replicated in parallel to the secondary database, which has been upgraded to the latest MySQL version. But we still need to make the latest-version database the primary one. For this, just point your application code at the secondary database as its primary database. The application will then use the database that was upgraded to the latest version. Finally, delete the old MySQL version database.

Yeah! All good…

We have successfully upgraded the RDS database to the latest MySQL version with nearly zero downtime. Yes, during this whole process there is no downtime for your application.

I hope you all enjoyed this article.

How to increase an EBS volume in Amazon EC2 instance
Introduction

Elastic Block Store, known as EBS, is a storage type in AWS. It is block-level storage, so we cannot access the files inside an EBS volume from the AWS console. EBS volumes can be attached to EC2 instances, and we can access their files through an SSH connection to the attached EC2 instance. We can increase the size of an EBS volume at any time (note that a volume cannot be shrunk). Whenever you want to resize a volume, make sure to take a snapshot of it first; this is a best practice so that, in case of any failure during the resizing process, you can recover your data.

Now we are going to Increase the size of the EBS volume which is attached with an EC2 Instance.

Resize EBS Volume

  • First, select the EC2 instance to which the EBS volume we need to enlarge is attached.
  • Open the Storage section, where you can find the volumes attached to the EC2 instance. Click the volume.

increase-ebs-volume-with-ec2-select volume

It will navigate you to the volumes section. Then select the volume, then click Actions followed by Modify volume.

increase-ebs-volume-with-ec2-modify-volume

Enter the new size you want for the volume. Here I entered 20 GiB; then click Modify.

Increase Ebs Volume With Ec2 Modify Volume Size

It asks you for confirmation to modify the volume. Click Modify to continue.

Increase Ebs Volume With Ec2 Modify Volume Size Confirmation

Now the volume state is optimizing. It will take a couple of minutes.

Increase Ebs Volume With Ec2 Modifying state Volume Size

Wait until the Volume state is In-use. Now the volume is successfully upgraded to 20GiB.

Increase Ebs Volume With Ec2 Modified state Volume Size
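The same resize can also be done from the CLI; <volume-id> is the ID of the volume shown in the console:

aws ec2 modify-volume --volume-id <volume-id> --size 20

# Track the modification until its state reaches optimizing/completed
aws ec2 describe-volumes-modifications --volume-ids <volume-id>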

Update the Resized EBS Volume on the EC2 Instance

  • Log in to your EC2 instance with sudo privileges.
  • Type the ‘lsblk’ command in the terminal window.
  • In the picture below you can see our volume has been extended to 20 GiB, but the primary partition still has the old volume's size.

Increase Ebs Volume With Ec2 List Volumes

To expand the partition, use the following command as in the screenshot below.

sudo growpart /dev/xvda 1

Increase Ebs Volume With Ec2 change Volume

Now again run the ‘lsblk’ command to see the changes made.

Increase Ebs Volume With Ec2 List Volumes again

  • From the above screenshot, our primary partition has been successfully extended to the new volume size.
  • Now we need to check the size of the file system.
  • So run the following command: ‘df -h’
  • You will notice that the file system still shows the old 8 GiB size.

Increase Ebs Volume With Ec2 List Volumes With New One

  • Run the following command to see the type of file system in use:
 file -s /dev/xvd*
  • On Amazon Linux 2 EC2 instances, the file system is XFS, as in the picture below.

Increase Ebs Volume With Ec2 View File Type

For the XFS file system use the following command to extend the file system.

xfs_growfs -d /

Increase Ebs Volume With Ec2 Execute XFS CommandA
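Note that xfs_growfs applies only to XFS. If file -s reports an ext4 file system instead (common on Ubuntu AMIs), extend it with resize2fs run against the partition, for example:

# ext4 equivalent of xfs_growfs
sudo resize2fs /dev/xvda1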

Now run the ‘df -h’ command again to see the change.

Increase Ebs Volume With Ec2 List Modified Volumes

As you can see, the above picture shows our file system successfully extended to the new volume size.

Now the EBS volume is fully resized from 8 GiB to 20 GiB and ready to use.

In this process there was no downtime on our server.

 

How to Clone another repository data for Bitbucket Pipeline
Bitbucket Pipelines is a continuous integration and continuous deployment (CI/CD) service; it helps you automatically build, test, and even deploy your code based on a configuration file in your repository.

In this blog, we are going to explore how to clone a Bitbucket repository in a Bitbucket pipeline in a simple way; just follow the commands below.

Generate SSH key in pipeline repository

SSH keys are a pair of public and private keys used to authenticate and establish encrypted communication between a client and a remote machine over the internet.

Open your pipeline Bitbucket repository → click Repository settings

Clone a Bitbucket repository using Bitbucket Pipeline repository settings

 

Also Learn:  How bitbucket pipeline triggers only when changes made in a particular folder

 

Scroll down and click SSH keys under pipelines section

Clone a Bitbucket repository using Bitbucket Pipeline ssh key

Now you can see a popup message like “pipelines must be enabled”.

Clone a Bitbucket repository using Bitbucket Pipeline setting(pipeline)

Click “go to settings”; you will see the enable option as in the screenshot below. Then enable the pipeline.

Clone a Bitbucket repository using Bitbucket Pipeline enable setting(pipeline)

Now we have successfully enabled the pipeline!

Clone a Bitbucket repository using Bitbucket Pipeline enabled

Now scroll down and click the same SSH keys option under the pipelines section → click Generate keys

Clone a Bitbucket repository using Bitbucket Pipeline generate keys

It will automatically generate a private and public key pair. Copy the public key for future reference; we will add this public key to our source repository.

Clone a Bitbucket repository using Bitbucket Pipeline public key

Now open your source repository (which contains your code/files) → Repository settings → click Access keys under the Security section.

Clone a Bitbucket repository using Bitbucket Pipeline access key

Now click Add key

Clone a Bitbucket repository using Bitbucket Pipeline add key

Now enter the label name → paste our public key in the Key section → click Add SSH key.

Clone a Bitbucket repository using Bitbucket Pipeline public key added

We have successfully added our Bitbucket pipeline repository SSH public key to our source repository

Create pipeline

Now again open your pipeline repository → click pipelines → and then click create your first pipeline

Clone a Bitbucket repository using Bitbucket Pipeline first pipeline

 

You will be redirected to a page like the screenshot below; click Select on the Starter pipeline.

Clone a Bitbucket repository using Bitbucket Pipeline select

Now you will be redirected to a bitbucket-pipelines.yml file.

Clone a Bitbucket repository using Bitbucket Pipeline edit file

Just remove the entire content below the “image: atlassian/default-image:3” line.

Open your source repository and click the Clone option.

Clone a Bitbucket repository using Bitbucket Pipeline clone

 

By default the HTTPS option is selected, but we are going to use the SSH option, so change from HTTPS to SSH by clicking the dropdown list icon.

Clone a Bitbucket repository using Bitbucket Pipeline https

Switch to SSH and then copy the git clone URL; we will use this URL in the next step, inside the bitbucket-pipelines.yml file.

Clone a Bitbucket repository using Bitbucket Pipeline ssh

Now replace the 5th line with your source repository's SSH URL, which you copied in the previous step.

pipelines:
  default:
    - step:
        script:
          - git clone git@bitbucket.org:baskey/source_repo.git
          - ls

Copy the above script and paste it into the pipeline repository, under the “image: atlassian/default-image:3” line inside the bitbucket-pipelines.yml file, like the screenshot below, and click Commit file.

This clones another repository inside our pipeline, and by using “ls” we can view the cloned files; a sketch of possible follow-up steps is shown below.
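Once the clone works, later lines in the same script section can operate on the cloned directory. A small sketch; build.sh is a hypothetical script inside the source repository, not something created earlier in this post:

# Extra lines for the script: section of bitbucket-pipelines.yml
- cd source_repo                      # directory created by the git clone above
- ls -la                              # confirm the files arrived
- chmod +x build.sh && ./build.sh     # build.sh is a hypothetical script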

Clone a Bitbucket repository using Bitbucket Pipeline update

 

Now you will see the success status within a few seconds.

Clone a Bitbucket repository using Bitbucket Pipeline success

Just click the successful status; you will be redirected to the appropriate page, and by clicking each command you can see the list of processes run by the pipeline setup, as in the screenshot below.

Clone a Bitbucket repository using Bitbucket Pipeline review
