
Swarm the Seas: Empowering Global Shipping with Docker Swarm Mastery

Imagine a world where global shipping operates with flawless precision, where containers effortlessly traverse continents, and where data loss and disrupted workloads are nothing but distant nightmares. Today, we unveil a game-changing project that brings this vision to life, powered by the remarkable capabilities of Docker Swarm.

In the fast-paced realm of containerized logistics, time is of the essence. Yet, traditional infrastructure struggles to keep up with the demands of a global shipping application, leading to delays, data loss, and missed opportunities. But fear not, for we are about to embark on a transformational journey that will reshape the landscape of container orchestration.

Enter Docker Swarm, the orchestrator that will revolutionize the way your containers navigate the world. It is time to bid farewell to the chaos and embrace a future where containerized workloads are seamlessly managed, scaled, and restored. Our project is the key to unlocking the true potential of your global shipping application.

Join us as we unravel a step-by-step guide to setting up Docker Swarm, transforming your infrastructure into a well-oiled machine that sails through challenges with unrivaled resilience. From configuring your networked hosts to establishing global services that transcend borders, we will equip you with the knowledge and expertise to steer your containers to success.

Together, we will venture into a world where downtime is a thing of the past and scalability is an inherent virtue. With Docker Swarm at the helm, we will harness the full power of container orchestration, redefining the way global shipping operates in the digital age.

So, prepare to embark on a voyage like no other. The horizon is filled with opportunities, and the winds of innovation are at our back. Welcome to a future where global shipping meets the limitless potential of Docker Swarm. Are you ready to set sail?

Top Five Reasons to Use Docker Swarm:

High availability and fault tolerance: Docker Swarm allows you to create a cluster of Docker nodes, forming a swarm. It ensures high availability by distributing containers across multiple nodes. If a node fails, the swarm will automatically reschedule the containers on other healthy nodes, ensuring the continued operation of your applications.

Scalability: Docker Swarm simplifies scaling your applications. You can easily scale up or down the number of replicas of a service running in the swarm, based on the demand. This enables you to handle increased traffic or workload without manual intervention.

Service discovery and load balancing: Docker Swarm provides built-in service discovery and load balancing. You can define services within the swarm, and Swarm’s DNS-based service discovery allows other services to discover and communicate with them. Swarm load balances incoming traffic across replicas of a service, distributing the load evenly and optimizing resource utilization.

Security: Docker Swarm includes security features such as mutual TLS (Transport Layer Security) encryption for inter-node communication and built-in support for secrets management. It ensures that communication between nodes and services is secure and sensitive data, such as passwords and API keys, can be securely stored and accessed by services.

Easy management and deployment: Docker Swarm provides a simple and straightforward way to manage and deploy containers. It leverages the familiar Docker CLI commands, allowing you to manage swarms, services, and containers using the same commands you use for standalone Docker environments. This makes it easy to adopt and integrate Docker Swarm into your existing Docker workflows.

Overall, Docker Swarm simplifies the management, deployment, and scaling of containerized applications in a distributed environment, providing high availability, scalability, and security features out of the box.
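
To make a few of these features concrete, here is a minimal sketch of the underlying CLI commands, run from a manager node (the names “web”, “db_password”, and “api” are hypothetical):

  • Scale a service named “web” to 5 replicas:
sudo docker service scale web=5
  • Store a sensitive value in the swarm’s encrypted secrets store:
echo "s3cret" | sudo docker secret create db_password -
  • Mount that secret into a new service (readable inside the container at /run/secrets/db_password):
sudo docker service create --name api --secret db_password httpd:latest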

Project Scenario:
Your company is new to using containers and has been having issues with containerized workloads failing for their global shipping application. The company is worried about additional potential data loss as they have no way to reschedule containers that are failing.

The company has one week to update its infrastructure and leverage the use of Docker Swarm so that the data can be restored for the containers that are no longer in a healthy state. Because they are not familiar with Docker Swarm, they will need a step-by-step guide on setting up a Swarm for their containerized workloads to assist with container orchestration.

The solution should include the use of a global service, since the company’s global shipping application is having issues. Your Swarm should be able to launch the containers using a global service (see the sketch after this paragraph).
A “service” is simply a group of containers created from the same image; services are what make scaling an application straightforward.
Your global shipping application will require the use of at least three networked hosts, which should be AWS EC2 Ubuntu 22.04 instances.
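
For reference, a global service runs exactly one task on every node in the swarm instead of a fixed replica count. A minimal sketch using the official Apache image (the service name “shipping_global” is hypothetical):

sudo docker service create --name shipping_global --mode global -p 80:80 httpd:latest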

Essentials:
  • Visual Studio Code
  • Docker Account and (VS Code) Docker Extension
  • Remote Development Extension (VS Code)
  • Free Tier AWS Account

Junior Engineer Scenario:
As the Junior Engineer, you are responsible for only configuring the Docker Swarm and providing your team with a step-by-step guide to setting up the Swarm.

Foundational Objectives:
  • Install Docker on All Hosts and Verify the Active Status
  • Change the Hostname on Each Node to identify the Master Node and the Two Worker Nodes
  • Validate the Worker Nodes All Share the Same Security Group as the Master Node
  • Verify SSH key is Added to Each Node
  • Run the Necessary Containers Using the Command Line Interface (CLI)
  • Create the Swarm Using One Master Node and Two Worker Nodes
  • Verify the Status of the Docker Nodes

Step 1: Create 3 EC2 Instances and Install Docker

  • We need to create 3 Ubuntu (t2.micro) EC2 instances
  • Log in to AWS and go to the AWS EC2 Dashboard
  • Click on the orange “+Launch Instances” button

Name the first EC2 instance MasterNode.

Note: instances 2 & 3 will be named WorkerNode1 and WorkerNode2, respectively.
  • Select the Ubuntu 22.04 AMI
  • Instance Type: t2.micro
  • Create a Key Pair named “SwarmKP”. Be sure to select RSA and “.pem” as the private key file type.

Be sure to take note of where your “SwarmKP.pem” was placed when it was downloaded. We will need this information later.

  • Network Settings: Use your Default VPC
  • Subnet: No preference
  • Auto-assign Public IP: Enable
  • Firewall: Select Create a security group and scroll down to see the appropriate Security Group Rules

For the Inbound rules:

  • Allow HTTP to port 80 on 0.0.0.0/0
  • Allow all traffic to your Default VPC CIDR Block (this covers the ports Swarm needs for node-to-node communication: TCP 2377, TCP/UDP 7946, and UDP 4789)
  • Allow SSH to port 22 on 0.0.0.0/0
  • Open the Advanced Details tab and scroll to the bottom to add User Data.
  • The User Data script will install Docker upon the creation of the instance.

#!/bin/bash

# Download and run Docker's official convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
  • Launch instance
  • Follow these same steps to create two more EC2 instances.
Only change the names when creating the other two instances.
All other settings should be exactly the same as the MasterNode instance.

Once all instances are created, they should be running within a few minutes, and we can move on to Visual Studio Code.

Step 2: Connect to the MasterNode instance using Visual Studio Code

Open VSCode and make sure the Essential Extensions are Installed:

  • Remote SSH
  • Docker

Once those are installed: on the bottom left of the VS Code window you will see the blue remote indicator (the >< button).

Click the blue button.
  • A prompt will populate. Select the “Connect to Host...” option.
  • Select “Configure SSH Hosts...”
  • Select the first option, the config file located under the path containing your username (the .ssh\config file)

VERY IMPORTANT: the config entry should look like the following.

Host MasterNode (this can be any name you want)
HostName <MasterNode Public IP Address>
User ubuntu
IdentityFile <file path to the SwarmKP.pem file>

Please take note that you will have to change this IP address every time your MasterNode EC2 instance restarts.

Save the file, then close it.

Go through the SSH steps again, but this time we will be able to SSH into the EC2 instance. Follow the steps below.

  • Select “Connect to Host”
  • Select your “SSH Hostname”
  • A New VSCode window will Populate.
  • Select Linux
  • Select Continue when prompted
  • You are now in the MasterNode EC2 instance

Step 3: Access WorkerNode1 & WorkerNode2 through SSH

  • Access the Terminal
  • From the command line verify Docker is installed
sudo systemctl status docker

First things first, we need to change the hostname:

  • Use the following command to change the hostname then reboot.
sudo hostnamectl set-hostname MasterNode
sudo reboot

Allow the system a couple of minutes to restart.

  • Once the window reloads, the prompt will show the name “MasterNode”

From here we need to configure the private key file so we can SSH into WorkerNode1 and WorkerNode2 from the terminal.

  • Use the following commands to switch to the root user and change into the “.ssh” directory:
sudo -i

cd .ssh
  • Use Nano to create a file with the same name as your private key file, SwarmKP.pem:
nano SwarmKP.pem
  • Open your SwarmKP.pem file from your local Downloads folder and copy and paste the entire RSA key into the Nano file.
  • Ctrl+O and Enter will save the file
  • Ctrl+X will exit the file

From the same “.ssh” directory we need to create a config file with the private IP addresses and configuration to access the two worker EC2 instances:

nano config

Host WorkerNode1
HostName <Private IP Address for WorkerNode1>
User ubuntu
IdentityFile /root/.ssh/SwarmKP.pem

Host WorkerNode2
HostName <Private IP Address for WorkerNode2>
User ubuntu
IdentityFile /root/.ssh/SwarmKP.pem
  • Ctrl+O and enter will save the file
  • Ctrl+X will exit the file

In order to SSH, we first need to change the permissions on the SwarmKP.pem file to read-only for the owner.

  • Enter this command:
chmod 400 SwarmKP.pem

Verify the permissions are changed (the file should now show -r--------, read-only for the owner):

ls -l

Open another Terminal Window

SSH into “WorkerNode1”:

Note: the terminal is case-sensitive, and you must start every command with “sudo”.

sudo ssh WorkerNode1

Open a 3rd terminal window and SSH into “WorkerNode2”:

sudo ssh WorkerNode2

Congrats! Use the tabs to the right to switch between the different terminals.

Change the hostnames in the two WorkerNode terminals and reboot, as shown below.
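
In the WorkerNode1 terminal (use “WorkerNode2” in the other terminal):

sudo hostnamectl set-hostname WorkerNode1
sudo reboot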

Once the reboots are complete, SSH back into the instances!

Step 4: Create the Swarm

Grab the MasterNode Private IP Address.

Run the following command to initialize the Docker Swarm, substituting that private IP:

sudo docker swarm init --advertise-addr <MasterNode-Private-IP>
  • Copy the “docker swarm join --token” command from the MasterNode terminal output and run it in each WorkerNode terminal.
  • Be sure to add “sudo” to the beginning of the command.
  • From the MasterNode terminal type the following command to verify the nodes are attached!
sudo docker node ls
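
Tip: if you lose track of the join command before the workers have run it, the manager can reprint it at any time:

sudo docker swarm join-token worker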

Congratulations, the Junior Engineer portion of the project is complete!

Lead Engineer Scenario:
As the Lead Engineer, you've reviewed the Swarm configuration, and you are now ready to deploy your services on Swarm!

Advanced Objectives:
  • Using the Command Line Interface (CLI), SSH into Master Node, Run the Command to Create the Service Using an Official Apache Image
  • Launch 1 Replica and Verify the Apache image is Created and Running
  • Using the Command Line Interface (CLI), Scale the Apache Image to 3 Replicas
  • Verify the Apache Image Has Been Scaled

Step 1: Run the Command to Create the Service Using an Official Apache Image

  • From the MasterNode, run the following command to create an Apache service.
  • httpd is the official Apache HTTP Server image, and “:latest” will pull the latest version of Apache.
sudo docker service create --name apache_1 --replicas 1 -p 80:80 httpd:latest

Scale the service to 1 replica by running the following command (a no-op here, since the service was created with 1 replica, but it confirms scaling works):

sudo docker service scale apache_1=1

Verify the Apache service is created and running:

sudo docker service ps apache_1

The Apache service is up and running! Now let’s run the command to scale it to 3!

sudo docker service scale apache_1=3

Verify they are created and running successfully:

sudo docker service ps apache_1
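
As an aside, “docker service update” offers an equivalent way to set the replica count, if you prefer it over “docker service scale”:

sudo docker service update --replicas 3 apache_1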

Grab the public IP address from any of your AWS EC2 instances and paste it into a new browser address bar; thanks to Swarm’s routing mesh, any node in the swarm will serve the Apache welcome page!

Congratulations, the Lead Engineer portion of the project is complete!

Senior Engineer Scenario:
As the Senior Engineer, you are looking to make this deployment as seamless as possible, and you’d like to use a more complex approach.

Complex Objectives:
  • Deploy the Apache Image Via a Stack to your Swarm

Step 1: Create the Stack

  • Create a YAML file using Nano.
You can name it whatever you like, but be sure that “.yml” is at the end. The commands below assume the name “docker-stack.yml”:

nano docker-stack.yml

Inside the text editor, insert the following stack:

version: '3.8'

services:
  apache:
    image: httpd:latest
    ports:
      - "8080:80"
  • Ctrl+O and enter will save the file
  • Ctrl+X will exit the file

Step 2: Deploy the stack

  • This next command will deploy the stack by calling on the .yml file we created.
sudo docker stack deploy -c docker-stack.yml dock-stack

Verify the service is created by using the following command:

sudo docker service ls

Let’s scale this bad boy!

  • Jump back into the docker-stack.yml file via the nano text editor
nano docker-stack.yml

Now we want to deploy the updated stack with 5 replicas

version: '3.8'

services:
  apache:
    image: httpd:latest
    ports:
      - "8080:80"
    deploy:
      replicas: 5
  • Ctrl+O and enter will save the file
  • Ctrl+X will exit the file
Run the same previous command to deploy the stack.
sudo docker stack deploy -c docker-stack.yml dock-stack

Step 3: Verify the Replicas are Created

Use the same command as before and let’s check it out

  • List the services
sudo docker service ls
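
To drill down further, “docker stack ps” lists the individual tasks (containers) behind the stack; “dock-stack” is the stack name we chose at deploy time:

sudo docker stack ps dock-stack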

Congratulations, the Senior Engineer portion of the project is complete!

As you can see the Senior Engineer works much smarter!

After you are satisfied with the project come back here to learn how to tear it all down!

Delete Stack:

From the MasterNode terminal and home directory run the following commands to delete the created Stack.

  • To list the stack name:
sudo docker stack ls
  • Use the name of your stack in the following command:
sudo docker stack rm dock-stack
Delete YAML file:

Run the following command to remove the “docker-stack.yml” file:

sudo rm docker-stack.yml
Delete Docker Service:

Run the following command to remove the docker service created in the Lead Engineer portion of the project:

  • List the service
sudo docker service ls
  • Take note of the first three characters of the service “ID”. Add those to the end of your command to remove the service.
  • Remove the service
sudo docker service rm "First three characters of the ID"
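Alternatively, since we named the service “apache_1” earlier, you can remove it by name:

sudo docker service rm apache_1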
Delete Image
  • List the image(s)
sudo docker image ls
  • Take note of the first three characters of the image “ID”. Add those to the end of your command to remove the image.
sudo docker image rm "First three characters of the ID"
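If you would also like to dismantle the swarm itself (an optional extra step), each node can leave the swarm; the last manager must force its way out:

  • On each WorkerNode:
sudo docker swarm leave
  • On the MasterNode:
sudo docker swarm leave --force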
You are now left with a shell where you can always come back and spin up some more fun!

Don’t forget to return to the AWS dashboard and stop the 3 running instances:

  • MasterNode
  • WorkerNode1
  • WorkerNode2
Hit the follow button on Medium and go add me on LinkedIn!!!

Come back here for more DevOps fun!

Feel free to connect and chat using the linked information below!

LinkedIn: https://www.linkedin.com/in/brandon-mccullum-4504b7161/

