
Containers Demystified 🐳🤔


Posted on Sep 28

Containers virtualize an operating system and connect to the underlying kernel of the host machine. This allows each container to define the system dependencies needed to run the code it contains while isolating it from other containers and from the host operating system. This is similar to how Virtual Machines isolate software, but VMs use a Hypervisor and abstract the hardware layer. This is why a Linux VM can run on Windows, but a Linux container needs to run on a device that has a Linux kernel. Virtual Machines require more resources and management and are slower to boot, whereas containers are much quicker to start and need limited resources to run. In most production setups, Virtual Machines are provisioned and containers are run on top of them, which allows for a lot of flexibility.

The most popular container solution, and the one we will be using, is Docker; containers are commonly referred to as "Dockers" even when Docker is not the container runtime. If you are using a Mac or PC you can install Docker Desktop, and if you are using a Linux server you can install Docker and Docker Compose directly. The VS Code Docker Extension is also useful for visualizing Docker resources.

🛑 Docker Desktop is free for personal use but requires a paid subscription for enterprises. A free open-source alternative is Podman and Podman Compose, which uses the same core syntax as the Docker tools and can be used as a near drop-in replacement.

Much like code is committed to a code repository, container images are committed to a container repository and versioned with tags. The main container repository is DockerHub, where most popular software can be found. A good example repository to look at is the popular web server and reverse proxy NGINX, which shows an overview of how to use the image along with tags for the various versions.
A common pattern for container image tags is appVersion-operatingSystem, so in the case of NGINX, the image nginx:1.25-alpine indicates NGINX version 1.25 running on an Alpine Linux image.

Once you have Docker installed, the first step in interacting with an image is to pull it from a container repository. Check your local images with the image command, then run the local container image, giving it a name and mapping your computer's port 8080 to the container port 80.

💡 Running with the -d option will run it detached as a background process.

Navigate to http://localhost:8080 and you should now see the NGINX welcome page. The server can be stopped with ctrl+c, and the stopped container can be viewed by appending the -a flag. Before re-running a container with the same name, the old one needs to be removed; you can do this via its Container ID or Name, and we will use the name since it is easier to reference.

Containers are ephemeral: while running they may create files on disk, but when the container is shut down and deleted all of those files are deleted with it, and they will need to be re-created the next time the container starts. A container may also require files when it first starts, such as a configuration file or a directory of some sort. This is where persistent storage comes into play, both for mapping existing files into containers and for persisting files after a container is shut down and deleted. The two main options are Bind Mounts and Volumes.

Running the container like we just did is useful for initial testing, but anyone using a web server uses it to host custom files or services.
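The workflow described above can be sketched end to end; the container name `my-nginx` is an arbitrary choice for this walkthrough:

```shell
# Pull the NGINX image from Docker Hub
docker pull nginx:1.25-alpine

# List local images
docker image ls

# Run detached (-d), naming the container and mapping
# host port 8080 to container port 80
docker run -d --name my-nginx -p 8080:80 nginx:1.25-alpine

# http://localhost:8080 should now serve the NGINX welcome page

# Stop the container; stopped containers only show with the -a flag
docker stop my-nginx
docker ps -a

# Remove it by name so the name can be reused on the next run
docker rm my-nginx
```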
Create the following nginx.conf and index.html files, and we will map them from our local filesystem into the container with a Bind Mount.

📝 nginx.conf
📝 index.html

Run the NGINX container with the volume flag -v local_file_path:container_file_path so our custom config and HTML are bind mounted into the container. Navigate back to http://localhost:8080 and you should now see our custom HTML page being served!

The NGINX container uses bind mounts to map local files into the container, but when the container generates data, such as when it is running a database, the preferred way to persist that data without relying on the host filesystem structure is to use Volumes. Volumes are managed by Docker and are therefore easier to create, without needing to know a good host path to mount. This is an example of running MongoDB with a Docker volume to persist data. Attach an interactive shell to the live container to test adding a document to the database. Once the data has been inserted, we can verify that the volume has been created with the docker volume command, and confirm that the volume persists the data even after we stop, delete and restart the container.

Running containers from the command line with docker run is useful for quick testing but is not commonly used in practice. The better way to run one or more containers on a single server, with all of the changes and requirements documented, is to use Docker Compose.
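The original post's exact file contents were not captured, so here is a minimal sketch of the two files and the bind-mounted run; the config simply serves static files on port 80:

```shell
# Minimal nginx.conf serving static files on port 80
cat > nginx.conf <<'EOF'
events {}
http {
    server {
        listen 80;
        location / {
            root  /usr/share/nginx/html;
            index index.html;
        }
    }
}
EOF

# A simple custom page to serve
cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
  <body><h1>Hello from a bind mount! 🚀</h1></body>
</html>
EOF

# Bind mount both files into the container (-v local_path:container_path)
docker run -d --name my-nginx -p 8080:80 \
  -v "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro" \
  -v "$(pwd)/index.html:/usr/share/nginx/html/index.html:ro" \
  nginx:1.25-alpine
```

The `:ro` suffix mounts the files read-only, so the container cannot modify them on the host.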
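A sketch of the MongoDB volume workflow, assuming the official `mongo` image and `mongo_data` / `my-mongo` as arbitrary names for this example:

```shell
# Run MongoDB with a named volume mounted at Mongo's data directory
docker run -d --name my-mongo -v mongo_data:/data/db mongo:6.0

# Attach a shell to the live container and insert a test document
docker exec -it my-mongo mongosh --eval \
  'db.test.insertOne({ message: "hello" })'

# Confirm the named volume was created
docker volume ls

# Stop and delete the container, then start a new one against the same volume
docker stop my-mongo
docker rm my-mongo
docker run -d --name my-mongo -v mongo_data:/data/db mongo:6.0

# The document inserted earlier should still be there
docker exec -it my-mongo mongosh --eval 'db.test.find()'
```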
Docker Compose uses a YAML spec to define which containers to run along with their required configuration, and it aligns with Infrastructure as Code (IaC) best practices since it can be committed to a code repository.

Create a docker-compose.yml file which will run the same NGINX container setup as the previous docker run command.

📝 docker-compose.yml

From the directory containing the docker-compose.yml file, run the up command to start the container. Navigate back to http://localhost:8080 and we can see the same custom NGINX server being run. Break the process with ctrl+c and try running the container as a background process. It will then run in the background as a detached process, which can be verified with the docker ps command. To stop a detached docker compose container, run the stop command in the same directory as the docker-compose.yml file; to stop and remove the container, use down.

The Docker Compose spec allows you to define all of your required containers as separate services within the same file. Let's add our previous MongoDB volume example to this same file.

💡 We are using the docker compose long syntax for volumes

📝 docker-compose.yml

We can now bring both of these containers up with a single command. If you need to start or stop only one of the containers, use the same commands and add the service name of the container you want to target.

Environment variables are used in all applications; they make applications flexible enough to run in multiple environments and keep secrets out of source code, where they could become a security risk. There are a few ways to use environment variables with docker compose files. I mainly use two: making the compose spec more flexible with the ${VAR} variable substitution syntax, and passing external environment variables directly to the container. In the previous Mongo example spec we could create a .env file in the same directory and define a TAG environment variable.
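A compose spec equivalent to the earlier docker run command might look like this (the bind-mounted file paths are assumed from the earlier example):

```yaml
services:
  nginx:
    image: nginx:1.25-alpine
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./index.html:/usr/share/nginx/html/index.html:ro
```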
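A sketch of the combined spec with both services, using the long syntax for the Mongo volume (`mongo_data` is the assumed volume name from the earlier example):

```yaml
services:
  nginx:
    image: nginx:1.25-alpine
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./index.html:/usr/share/nginx/html/index.html:ro
  mongo:
    image: mongo:6.0
    volumes:
      # docker compose long syntax for volumes
      - type: volume
        source: mongo_data
        target: /data/db

volumes:
  mongo_data:
```

A single service can then be targeted by name, e.g. `docker compose up -d mongo` or `docker compose stop mongo`.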
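The compose lifecycle commands described above, run from the directory containing docker-compose.yml:

```shell
# Start the services defined in docker-compose.yml (ctrl+c to stop)
docker compose up

# Or run them detached in the background
docker compose up -d

# Verify the container is running
docker ps

# Stop the detached containers
docker compose stop

# Stop and remove the containers
docker compose down
```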
At this point we can update the spec to use ${TAG} as the version, which docker compose will automatically substitute with 6.0.1.

📝 .env
📝 docker-compose.yml

💡 .env is used by default, but the --env-file flag can be passed to the docker compose command to specify which env file to use.

In the Mongo example we are running the database without a login, which should never be done in production. We can set an initial username and password with the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD environment variables. We should not add these to the spec directly, since they would be exposed when we commit it to our code repository. This is where we can use the env_file option to tell docker compose to pass the variables from our .env file directly into the Mongo container.

📝 .env
📝 docker-compose.yml

We have covered how to use popular Docker images such as NGINX and Mongo, but to containerize custom software we need to create our own Docker images. The way to define how a Docker image is created is with a Dockerfile, which typically inherits FROM a base image and then adds our custom code and system dependencies. Let's look at how to create a simple containerized Python script which makes an HTTP call with the requests library. Create the Python script and the Dockerfile in the same directory.

📝 Dockerfile

Now that we have our Dockerfile and Python script, we need to build the local Docker image. We can verify that the image was built successfully with the docker image commands. With our custom image built, we can run it the same way we ran the NGINX and Mongo images. We have successfully containerized an application! 🎉 This is very useful, but the container image currently only exists on our local laptop, so in order to share the image it needs to be pushed to a container repository.
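The variable substitution setup can be sketched with the two files below; `docker compose config` renders the spec with the substitution applied, which is a handy way to verify it:

```shell
# Define the image tag once in .env (read by docker compose by default)
cat > .env <<'EOF'
TAG=6.0.1
EOF

# Reference it in the spec with ${VAR} substitution syntax
cat > docker-compose.yml <<'EOF'
services:
  mongo:
    image: mongo:${TAG}
    volumes:
      - type: volume
        source: mongo_data
        target: /data/db

volumes:
  mongo_data:
EOF

# Render the spec with variables substituted (use --env-file to point
# at a different env file)
docker compose config
```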
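A sketch of the env_file approach; the username and password values here are placeholders, and the .env file should be kept out of version control:

```shell
# Credentials live in .env, not in the committed spec
cat > .env <<'EOF'
TAG=6.0.1
MONGO_INITDB_ROOT_USERNAME=admin
MONGO_INITDB_ROOT_PASSWORD=change-me
EOF

# env_file passes the variables from .env directly into the container
cat > docker-compose.yml <<'EOF'
services:
  mongo:
    image: mongo:${TAG}
    env_file:
      - .env
    volumes:
      - type: volume
        source: mongo_data
        target: /data/db

volumes:
  mongo_data:
EOF
```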
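The original post's script filename and contents were not captured, so this is a minimal sketch: `main.py` is an assumed name, `http-check` is an arbitrary image name, and the httpbin URL is just an example endpoint:

```shell
# A minimal script that makes an HTTP call with the requests library
cat > main.py <<'EOF'
import requests

# Call a public test API and print the JSON response
resp = requests.get("https://httpbin.org/get", timeout=10)
print(resp.json())
EOF

# Inherit FROM a base image, add the dependency, then our code
cat > Dockerfile <<'EOF'
FROM python:3.11-slim

WORKDIR /app

# Install the only dependency
RUN pip install --no-cache-dir requests

COPY main.py .

CMD ["python", "main.py"]
EOF

# Build the local image and verify it exists
docker build -t http-check .
docker image ls http-check

# Run it the same way as the NGINX or Mongo images
docker run --rm http-check
```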
The most popular container repository is DockerHub. Create an account, log in, and then create a Personal Access Token so we can push our images to a repository. After logging in we can update our local image tag with a tag containing the remote container repository, which uses the syntax your_username/container_image_name.

🛑 DockerHub container repositories are public by default, so always be careful that you do not push any sensitive information in the container.

The image should now show up with the container repo in your local images and also be pushed up to the centralized container repo. Going forward, any container which is intended to be pushed to a container repo can simply be built with the repository path from the start.

Docker builds are architecture specific, so if you are using an Apple Silicon MacBook for example, your images will be built for ARM64 CPUs, while most servers use AMD64 CPUs. To create a multi-architecture build you will need to use buildx. Once the build starts you can see it go through the same steps for each architecture, and you can use the --push flag to automatically push the image to the container repo after building.
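A sketch of the tag-and-push flow, assuming a locally built image named `http-check` (any local image name works) and `your_username` as a placeholder for your DockerHub username:

```shell
# Log in with your DockerHub username and Personal Access Token
docker login

# Re-tag the local image with the remote repository path
docker tag http-check your_username/http-check:1.0.0

# Push it to DockerHub
docker push your_username/http-check:1.0.0

# Or build with the repository path from the start
docker build -t your_username/http-check:1.0.0 .
```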
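The multi-architecture build might look like this, again with `your_username/http-check` as a placeholder repository path:

```shell
# One-time setup: create and select a buildx builder
docker buildx create --use

# Build for both AMD64 and ARM64, pushing straight to the repo
# (multi-platform images cannot be loaded locally, hence --push)
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t your_username/http-check:1.0.0 \
  --push .
```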
