
How to Protect AWS EKS from Regional Disasters Using Kasten K10

Abstract:

In this article, we will discuss how you can protect an EKS cluster (its object configuration and EBS-backed persistent volumes) from regional disasters using Kasten K10. We will create two EKS clusters (in us-east-1 and us-east-2), regularly back up the object configuration of the primary EKS cluster to S3, and take EBS snapshots of its persistent volumes, so that the secondary EKS cluster can import and restore the backup.

Contents:

  1. Introduction
  2. Overview of AWS EKS
  3. Overview of Kasten K10
  4. Environment preparation
  5. Kasten K10 Installation
  6. Backup and restore Ghost application
  7. Conclusion

Prerequisites:

We assume that the reader has basic knowledge of Kubernetes, Helm, and AWS.

1. Introduction:

Kubernetes backup refers to the process of creating a copy of the Kubernetes resources and data to protect against data loss and to ensure business continuity. Backing up Kubernetes resources, such as deployments, statefulsets, and services, is critical to ensure that your applications can be quickly restored in case of a catastrophic failure.

There are several Kubernetes backup tools available, including open-source solutions like Velero and commercial solutions such as Kasten K10 by Veeam. These tools provide an easy and efficient way to back up and restore Kubernetes resources and data.

Kubernetes backup can be performed at the cluster level, namespace level, or even at the resource level. This provides granular control over the backup process and enables you to create backups that meet specific business requirements.

When implementing Kubernetes backup, it is important to consider factors such as the frequency and scope of backups, recovery point objectives (RPOs), and recovery time objectives (RTOs). Testing backups regularly is also critical to ensure that they can be successfully restored in case of a failure.


2. Overview of AWS EKS

AWS EKS (Elastic Kubernetes Service) is a fully managed service that allows you to easily run, scale, and manage Kubernetes clusters on AWS. Kubernetes is an open-source platform for container orchestration that is widely used for deploying and managing containerized applications.

With AWS EKS, you can quickly provision a Kubernetes cluster in a few simple steps, and the service takes care of the underlying infrastructure and management tasks, such as scaling, patching, and upgrading the cluster. This means you can focus on deploying and managing your applications, rather than worrying about the underlying infrastructure.

AWS EKS integrates with other AWS services, such as Amazon Elastic Container Registry (ECR) for storing and managing container images, and AWS Identity and Access Management (IAM) for managing access to your Kubernetes resources. Additionally, EKS provides a number of built-in integrations with other AWS services and third-party tools, such as AWS CloudFormation for infrastructure as code and Grafana for monitoring and observability.

3. Overview of Kasten K10

The K10 data management platform, purpose-built for Kubernetes, provides enterprise operations teams an easy-to-use, scalable, and secure system for backup/restore, disaster recovery, and mobility of Kubernetes applications.

With its application-centric approach and deep integrations with relational and NoSQL databases, K10 provides a native Kubernetes API and includes features such as full-spectrum consistency, database integrations, automatic application discovery, multi-cloud mobility, and a powerful web-based user interface.

Kasten user interface

Deploying this Quick Start for a new virtual private cloud (VPC) with default parameters builds the following K10 platform in the AWS Cloud. The diagram shows three Availability Zones, leveraging multiple AWS services.

A more detailed K10 architecture diagram is shown below.

Kasten K10 provides application backup and mobility capabilities with the following tenets:

  • Create scalable and resilient backups. Kasten K10 integrates with Amazon S3 (and other target stores) so that your applications can be stored as a true backup in a fault domain that is separated from primary storage, with the cost efficiencies to afford long-term retention. Data is transferred efficiently by K10 using techniques like deduplication and change-block tracking.
  • Seamless Migration: The ability to move an application across clusters is an extremely powerful feature that enables a variety of use cases including Disaster Recovery (DR), Test/Dev with realistic data sets, and performance testing in isolated environments. In particular, the K10 platform is built to support application migration and mobility in a variety of different and overlapping contexts:
1. Cross-Namespace
2. Cross-Cluster
3. Cross-Account: (e.g., AWS accounts, Google Cloud projects)
4. Cross-Region: (e.g., US-East-1 to US-East-2)
5. Cross-Cloud: (e.g., Azure to AWS)
  • Treat the application as the operational unit. This balances the needs of operations and development teams in cloud-native environments. Kasten’s data management solution works with an entire application and not just the infrastructure or storage layers. This allows your operations team to scale by ensuring business policy compliance at the application level instead of having to think about the hundreds of components that make up a modern app. At the same time, working with the application gives your developers power and control when needed without slowing them down.

4. Environment preparation:

Our target is to install Kasten K10 on two EKS clusters, as shown below:

4.1 Install the required tools:

All the instructions below are for Linux; if you are using macOS or Windows, please check out the links provided with each step.
  • AWS CLI version 2. See Installing, updating, and uninstalling the AWS CLI version 2.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
  • Install eksctl on your desktop machine: See Installing or upgrading eksctl for another OS
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
  • Helm. See Installing Helm.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
  • kubectl. See Installing kubectl.

4.2 Create two EKS Clusters as below:

Two EKS clusters in the same AWS account. See Creating an EKS Cluster. (This blog post was tested with EKS running Kubernetes version 1.24)

The two clusters will be referred to as the Primary (pk) and Recovery (rk) clusters.

Configure all the required environments as below:

REGION=us-east-1
REGION2=us-east-2
BUCKET={Name of AWS S3}
PRIMARY_EKS=pk
RECOVERY_EKS=rk
PRIMARY_CONTEXT=pkc
RECOVERY_CONTEXT=rkc
ACCOUNT=$(aws sts get-caller-identity --query Account --output text)

eksctl create cluster --name=$PRIMARY_EKS --nodes=3 --node-type=t3.medium --region $REGION
eksctl create cluster --name=$RECOVERY_EKS --nodes=3 --node-type=t3.medium --region $REGION2
# For easier management of kubectl config, we add both clusters to the kubeconfig file with an alias for each context:

aws eks --region $REGION update-kubeconfig --name $PRIMARY_EKS --alias $PRIMARY_CONTEXT
aws eks --region $REGION2 update-kubeconfig --name $RECOVERY_EKS --alias $RECOVERY_CONTEXT

kubectl config use-context $PRIMARY_CONTEXT
# In production, be careful: run 'kubectl config get-contexts' to check which context is currently active

4.3 Configure OIDC

Each cluster must be configured with an EKS IAM OIDC provider. See Create an IAM OIDC provider for your cluster. This is a requirement for IAM Roles for Service Accounts (IRSA), which is used to grant the required AWS permissions to the EBS CSI driver and Kasten K10 deployments.

kubectl config use-context $PRIMARY_CONTEXT
eksctl utils associate-iam-oidc-provider --cluster $PRIMARY_EKS --approve --region $REGION

kubectl config use-context $RECOVERY_CONTEXT
eksctl utils associate-iam-oidc-provider --cluster $RECOVERY_EKS --approve --region $REGION2

kubectl config use-context $PRIMARY_CONTEXT
oidc_id_primary=$(aws eks describe-cluster --name $PRIMARY_EKS --region $REGION --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
oidc_id_recovery=$(aws eks describe-cluster --name $RECOVERY_EKS --region $REGION2 --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)

echo oidc_id_recovery=$oidc_id_recovery
echo ACCOUNT=$ACCOUNT
echo oidc_id_primary=$oidc_id_primary
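
To confirm that an IAM OIDC provider now exists for each cluster, you can check that the IDs above appear in your account's provider list (a quick sanity check; if a grep returns nothing, re-run the associate-iam-oidc-provider command for that cluster):

aws iam list-open-id-connect-providers | grep $oidc_id_primary
aws iam list-open-id-connect-providers | grep $oidc_id_recovery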

4.4 Set up persistent storage in Amazon EKS using the EBS CSI driver:

  • Download an example IAM policy with permissions that allow your worker nodes to create and modify Amazon EBS volumes:
curl -o example-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/v0.9.0/docs/example-iam-policy.json
  • Create an IAM policy named AmazonEKS_EBS_CSI_Driver_Policy:
aws iam create-policy --policy-name AmazonEKS_EBS_CSI_Driver_Policy --policy-document file://example-iam-policy.json
  • View each cluster's OIDC provider ID:
aws eks describe-cluster --name $PRIMARY_EKS --region $REGION --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5
aws eks describe-cluster --name $RECOVERY_EKS --region $REGION2 --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5
  • To deploy the Amazon EBS CSI driver, run the following commands on each cluster:
# PRIMARY_Cluster
kubectl config use-context $PRIMARY_CONTEXT

kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"


eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster $PRIMARY_EKS --region $REGION --role-name "AmazonEKS_EBS_CSI_DriverRole" \
--attach-policy-arn arn:aws:iam::$ACCOUNT:policy/AmazonEKS_EBS_CSI_Driver_Policy --approve

kubectl delete pods -n kube-system -l=app=ebs-csi-controller

#make sure that the sa is Annotated with ARN role
kubectl describe serviceAccount ebs-csi-controller-sa -n kube-system
# RECOVERY_Cluster 

kubectl config use-context $RECOVERY_CONTEXT

kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"

eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster $RECOVERY_EKS --region $REGION2 --role-name "AmazonEKS_EBS_CSI_DriverRole_recovery" \
--attach-policy-arn arn:aws:iam::$ACCOUNT:policy/AmazonEKS_EBS_CSI_Driver_Policy --approve

kubectl delete pods -n kube-system -l=app=ebs-csi-controller


# make sure that the sa is Annotated with ARN role
kubectl describe serviceAccount ebs-csi-controller-sa -n kube-system
eksctl will annotate the ebs-csi-controller-sa Kubernetes service account with the Amazon Resource Name (ARN) of the IAM role that it also creates:
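
With the CSI driver installed and its service account annotated, you can optionally define a StorageClass backed by the CSI provisioner. This is a minimal sketch and is not required for the rest of this walkthrough (the Ghost installation below simply uses the cluster's default StorageClass); the class name ebs-csi-gp3 is an arbitrary example:

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-csi-gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
EOF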

4.5 Prepare S3 to save Kasten K10's backups:

aws s3 mb s3://$BUCKET --region $REGION 

Although Amazon S3 stores your data across multiple geographically distant Availability Zones by default, compliance requirements might dictate that you store data at even greater distances. Cross-Region Replication allows you to replicate data between distant AWS Regions to satisfy these requirements.
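
If your requirements call for Cross-Region Replication, the sketch below outlines the main steps, assuming you create a second bucket in the recovery region: versioning must be enabled on both buckets, and the replication rule needs an IAM role that S3 can assume. The destination bucket name and REPLICATION_ROLE_ARN are placeholders to substitute with your own values:

# Hypothetical destination bucket in the recovery region
aws s3 mb s3://$BUCKET-replica --region $REGION2

# Versioning is required on both the source and destination buckets
aws s3api put-bucket-versioning --bucket $BUCKET --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket $BUCKET-replica --versioning-configuration Status=Enabled

# Minimal replication rule; REPLICATION_ROLE_ARN must point to an existing role
# that allows S3 to replicate objects between the two buckets
cat > replication.json <<EOF
{
  "Role": "REPLICATION_ROLE_ARN",
  "Rules": [
    {
      "Status": "Enabled",
      "Priority": 1,
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Filter": {},
      "Destination": { "Bucket": "arn:aws:s3:::$BUCKET-replica" }
    }
  ]
}
EOF
aws s3api put-bucket-replication --bucket $BUCKET --replication-configuration file://replication.json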

4.6 Prepare an IAM policy for the Kasten K10 deployment:

Kasten K10 makes a number of API calls to EC2 and S3 resources to take snapshots and save the backups to the S3 bucket. The following IAM policy grants Kasten K10 the necessary permissions.

cat > Kasten10.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CopySnapshot",
        "ec2:CreateSnapshot",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:DeleteTags",
        "ec2:DeleteVolume",
        "ec2:DescribeSnapshotAttribute",
        "ec2:ModifySnapshotAttribute",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeRegions",
        "ec2:DescribeSnapshots",
        "ec2:DescribeTags",
        "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumesModifications",
        "ec2:DescribeVolumeStatus",
        "ec2:DescribeVolumes"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DeleteSnapshot",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/Name": "Kasten: Snapshot*"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:PutObject",
        "s3:GetObject",
        "s3:PutBucketPolicy",
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:DeleteBucketPolicy",
        "s3:GetBucketLocation",
        "s3:GetBucketPolicy"
      ],
      "Resource": "*"
    }
  ]
}
EOF
# Create the Kasten K10 IAM policy
aws iam create-policy --policy-name KastenPolicy --policy-document file://Kasten10.json
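
You can confirm that the policy was created and note its ARN, which will be attached to the Kasten service accounts in the next section:

# Print the ARN of the KastenPolicy we just created
aws iam list-policies --scope Local --query "Policies[?PolicyName=='KastenPolicy'].Arn" --output text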

5. Kasten K10 Installation:

We should install Kasten K10 on both EKS clusters; you can check out the diagram below for more details:

Solution architecture
helm repo add kasten https://charts.kasten.io/

kubectl config use-context $PRIMARY_CONTEXT
helm install k10 kasten/k10 --namespace=kasten-io --create-namespace # --set serviceAccount.create=false
kubectl config use-context $RECOVERY_CONTEXT
helm install k10 kasten/k10 --namespace=kasten-io --create-namespace # --set serviceAccount.create=false
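
Before exposing the dashboard, it is worth confirming on each cluster that the Kasten pods come up cleanly:

# All Kasten K10 pods should reach the Running state
kubectl get pods -n kasten-io
helm status k10 -n kasten-io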

To establish a connection to the K10 dashboard on each cluster, use the following `kubectl` commands:


# PRIMARY_Cluster
kubectl config use-context $PRIMARY_CONTEXT

kubectl --namespace kasten-io port-forward service/gateway 8080:8000

# The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`
# Open a new terminal and run these commands for the RECOVERY_Cluster

kubectl config use-context $RECOVERY_CONTEXT
kubectl --namespace kasten-io port-forward service/gateway 8090:8000

# The Kasten dashboard will be available at: `http://127.0.0.1:8090/k10/#/`

Now annotate the k10-k10 Kubernetes service account with the Amazon Resource Name (ARN) of the IAM role that eksctl will also create:


# PRIMARY_EKS

# Change the context
kubectl config use-context $PRIMARY_CONTEXT

# delete the sa
kubectl delete serviceAccount k10-k10 -n kasten-io


# Create the iamserviceaccount and link it to a new k10-k10 sa
eksctl create iamserviceaccount --name k10-k10 --namespace kasten-io --cluster $PRIMARY_EKS --region $REGION --role-name "KastenRole" --attach-policy-arn arn:aws:iam::$ACCOUNT:policy/KastenPolicy --approve


kubectl describe serviceAccount k10-k10 -n kasten-io
# RECOVERY_EKS

# Change the context
kubectl config use-context $RECOVERY_CONTEXT

# delete the sa
kubectl delete serviceAccount k10-k10 -n kasten-io

# Create the iamserviceaccount and link it to a new k10-k10 sa
eksctl create iamserviceaccount --name k10-k10 --namespace kasten-io --cluster $RECOVERY_EKS --region $REGION2 --role-name "KastenRecoverRole" --attach-policy-arn arn:aws:iam::$ACCOUNT:policy/KastenPolicy --approve

# make sure that sa is Annotated with ARN role
kubectl describe serviceAccount k10-k10 -n kasten-io
eksctl will annotate the k10-k10 Kubernetes service account with the Amazon Resource Name (ARN) of the IAM role that it also creates:
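
Because the k10-k10 service account was re-created after the K10 pods started, you may need to restart those pods so they pick up the IRSA credentials; this mirrors the pod restart used for the EBS CSI controller earlier:

# Restart the Kasten pods so they mount the re-created, annotated service account
kubectl delete pods -n kasten-io --all
kubectl get pods -n kasten-io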

6. Backup and restore Ghost application:

Ghost is an open-source publishing platform designed to create blogs, magazines, and news sites. It includes a simple markdown editor with preview, theming, and SEO built-in to simplify editing.

We will use the Bitnami Helm chart as it is commonly deployed and well-tested. This chart depends on the Bitnami MySQL chart, which serves as the persistent data store for the blog application. The MySQL data is stored in an EBS volume that will be snapshotted by Kasten K10 as part of performing the backup.

Now we switch to the Primary cluster's context and install Ghost (ignore the "ERROR: you did not provide an external host" notice that appears when you first install Ghost; it will be resolved by the helm upgrade below):

helm repo add bitnami https://charts.bitnami.com/bitnami


kubectl config use-context $PRIMARY_CONTEXT
helm install ghost bitnami/ghost \
--create-namespace \
--namespace ghost


export APP_HOST=$(kubectl get svc --namespace ghost ghost --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
export GHOST_PASSWORD=$(kubectl get secret --namespace "ghost" ghost -o jsonpath="{.data.ghost-password}" | base64 -d)
export MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace "ghost" ghost-mysql -o jsonpath="{.data.mysql-root-password}" | base64 -d)
export MYSQL_PASSWORD=$(kubectl get secret --namespace "ghost" ghost-mysql -o jsonpath="{.data.mysql-password}" | base64 -d)

helm upgrade ghost bitnami/ghost \
--namespace ghost \
--set service.type=LoadBalancer,ghostHost=$APP_HOST,ghostPassword=$GHOST_PASSWORD,mysql.auth.rootPassword=$MYSQL_ROOT_PASSWORD,mysql.auth.password=$MYSQL_PASSWORD

We can check that the installation was successful by running this command:

kubectl get pod -A

In the Ghost Admin console, you can sign in (using the admin URL displayed above) and create an example blog post that will be included in the backup and restore process. As a result, the backup includes not only the application's deployment configuration but also the blog posts in the database, which are persisted on the EBS-backed persistent volume.
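
You can also confirm that the blog's data is sitting on a dynamically provisioned, EBS-backed volume before taking the first backup:

# The MySQL data for the blog lives on an EBS-backed persistent volume
kubectl get pvc,pv -n ghost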

6.1 Backup Ghost application

Open a new terminal and run these commands:

kubectl config use-context $PRIMARY_CONTEXT

kubectl --namespace kasten-io port-forward service/gateway 8080:8000

Open this link http://127.0.0.1:8080/k10/#/ in your browser

Go to Settings and choose Locations; on this page you will configure the S3 bucket that was created earlier to store your EKS backup files.

echo $BUCKET
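
A quick way to confirm the bucket name and that it is reachable before adding it as a location profile:

# Confirm the bucket exists and is accessible
aws s3 ls s3://$BUCKET --region $REGION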

Now go to Dashboard → Applications → Ghost app → Create a Policy

and configure your policy to take a frequent snapshot as below:
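
If you prefer a declarative setup, K10 policies can also be created as Kubernetes custom resources instead of through the dashboard. The following is only a hedged sketch: it assumes a location profile named s3-profile already exists in the kasten-io namespace and follows the Policy CRD layout shown in the Kasten documentation, so verify the exact field names against your K10 version before applying it:

cat <<EOF | kubectl apply -f -
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: ghost-backup
  namespace: kasten-io
spec:
  comment: Hourly snapshot of the ghost namespace, exported to the S3 profile
  frequency: '@hourly'
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: ghost
  actions:
    - action: backup
    - action: export
      exportParameters:
        frequency: '@hourly'
        profile:
          name: s3-profile
          namespace: kasten-io
        exportData:
          enabled: true
EOF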

Go to the Policies section, check out your new policy, and try running it once.

In this policy, click on "Show more details" as shown below and save the displayed token, because we need it to configure the import/restore policy on the recovery EKS cluster.

After this job finishes, go to the S3 bucket and the Snapshots section in the AWS console to follow up on the changes, as shown below:

Kasten10 Dashboard
Snapshots in AWS Console
S3 Bucket in AWS Console

6.2 Restore Ghost application

Open a new terminal and run these commands:

kubectl config use-context $RECOVERY_CONTEXT
kubectl --namespace kasten-io port-forward service/gateway 8090:8000

Open this link http://127.0.0.1:8090/k10/#/ in your browser

Go to Settings and choose Locations; on this page you will configure the same S3 bucket so that the recovery cluster can import your EKS backup files.

echo $BUCKET

Now go to Dashboard → Policies → Create New Policy, choose the Import action, and paste the token you saved from the export policy on the primary cluster.

Once the import has run, go to Dashboard → Applications → Ghost app → Restore.

Choose one of the restore points (points in time) and restore your application.

Note: you can also configure the import policy to restore automatically after each import.
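
After the restore completes, you can verify on the recovery cluster that the application pods are running and retrieve the new LoadBalancer address of the restored blog:

kubectl config use-context $RECOVERY_CONTEXT
# The ghost namespace and its workloads should have been re-created by the restore
kubectl get pods -n ghost
# The restored Service gets a new LoadBalancer hostname in us-east-2
kubectl get svc -n ghost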

Summary:

In conclusion, Kasten K10 is a powerful data management solution that simplifies backup, recovery, and mobility of Kubernetes applications. By deploying Kasten K10 on AWS, users can take advantage of the scalability and flexibility of the cloud to protect their applications and data. With Kasten K10's comprehensive features, including backup scheduling, policy-based automation, and efficient data storage, users can ensure the availability and integrity of their Kubernetes workloads.

To deploy Kasten K10 on AWS, users can choose from a variety of deployment options, including Amazon EKS, Amazon EKS Anywhere, and self-managed Kubernetes clusters. Regardless of the deployment option, Kasten K10 provides a seamless experience for backup and recovery, with support for various storage providers, including Amazon S3, Amazon EBS, and Amazon EFS.

Overall, implementing Kasten K10 on AWS provides a robust solution for Kubernetes data management, with the flexibility and scalability of the cloud. Whether you are managing a small-scale deployment or a large-scale enterprise cluster, Kasten K10 and AWS have you covered.
