
Learn how networking is implemented in containers

When you move beyond working with a single container, you need a good understanding of how containers are networked. A Docker container runs on a host, which can be a physical or a virtual machine. The Docker daemon and client run on the host, which enables interaction with the Docker registry and container management activities such as starting and stopping containers. Each host will typically run many containers, so networking is a concern whether you use a single host or a cluster of hosts. In a single-host environment, the challenge is moving data between containers, either through a shared volume or over a networking protocol such as HTTP or another appropriate protocol.

A shared volume has the advantages of simplicity and speed, but it suffers from the downside that a single-host environment built on shared volumes is difficult to convert into a multi-host environment. In a multi-host environment, there are two challenges that need to be overcome: the first is communication between containers on the same host, and the second is mapping communication paths between hosts. Any decisions made need to factor in security and performance. Multi-host environments become relevant when the capacity of a single host is exceeded or when there is a need to use distributed solutions such as Spark.

For any communication to happen outside a host, two requirements need to be satisfied. The first requirement is that the host machine must be able to forward IP packets. The second requirement is that the host's iptables rules must allow the connections to happen. IP forwarding is controlled by the ip_forward parameter at the system level; a value of 1 allows packet forwarding. The default Docker daemon setting of --ip-forward=true sets the ip_forward parameter to 1. The command sysctl net.ipv4.conf.all.forwarding is used to check the status of IP forwarding, and when the value is not 1 it can be set to 1 using the command sudo sysctl net.ipv4.conf.all.forwarding=1.
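For example, the check and the fix look like this:

# Check whether IP forwarding is enabled (a value of 1 means enabled)
sysctl net.ipv4.conf.all.forwarding

# Enable IP forwarding if it is off
sudo sysctl net.ipv4.conf.all.forwarding=1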

To enable communication between containers, there are two operating-system-level requirements that need to be satisfied. The first requirement is that the network topology must support connections to the container interfaces. The second requirement is that iptables must allow that type of connection.
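As an illustrative check, you can list the rules in the FORWARD chain to verify that forwarding between interfaces is permitted:

# List the FORWARD chain rules without resolving names
sudo iptables -L FORWARD -n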

Docker networking is the native solution supported by Docker, and it can be implemented in four different modes: bridge, host, container and none (no networking). The focus of this article will be to discuss these networking modes in detail.

When Docker is installed, three networks are created: bridge, host and none. To check that these networks exist, use the command sudo docker network ls.
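The output will resemble the listing below; the network IDs shown are placeholders and will differ on your machine:

NETWORK ID          NAME                DRIVER
c1a2b3d4e5f6        bridge              bridge
a9b8c7d6e5f4        host                host
f1e2d3c4b5a6        none                null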

Because these networks are built in, the --network flag can be used to specify which network a container will use for communication. The default network is the bridge. The bridge enables communication among containers on the same host, as well as communication between the host and its containers. To display information about the bridge as part of the host's network interfaces, use the ifconfig command.

In bridge mode, the daemon creates a virtual Ethernet bridge named docker0, which handles packet forwarding. By default, the daemon connects all containers on a host to this internal network by creating peer interfaces.

To display bridge network information, the sudo docker network inspect bridge command is used.

The bridge information is returned as a JSON object showing the options set, the subnet and gateway, and the containers running on the network. The subnet and gateway are created automatically, and containers are added automatically as well. When there are containers running on the network, their networking information will also be included in the output.
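An abbreviated, illustrative version of the output is shown below; 172.17.0.0/16 and 172.17.0.1 are common defaults, but the values on your host may differ:

[
    {
        "Name": "bridge",
        "Driver": "bridge",
        "IPAM": {
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {}
    }
]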

Containers within the same bridge network can reach one another, but only by IP address: automatic service discovery by container name is not supported on the default bridge. An example of running a container on the Docker bridge is shown below.

sudo docker run -d -P --net=bridge nginx:1.9.1


In host mode, a container attaches to the host's network stack, which exposes the container to the public network. The container and the host are therefore not isolated from each other. For example, a container running on port 80 is also available on port 80 on the host. This approach is faster than the bridge, but it has the security challenges associated with exposure to the public network.
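An example of host networking, following the same pattern as the bridge example above, is shown below. Note that port-publishing flags such as -P have no effect in this mode, because the container already shares the host's ports:

sudo docker run -d --net=host nginx:1.9.1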

The none (no networking) mode places a container within its own network stack without configuring it. In this approach, networking is turned off, which is useful in two cases. The first case is containers that have no need for a network, and the second is when you would like to implement custom networking yourself. An example of no networking is shown below:

sudo docker run -d -P --net=none nginx:1.9.1

In container mode, you instruct Docker to reuse the network namespace of an existing container. This approach is relevant when there is a need for custom network stacks.
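A minimal sketch of container mode is shown below; the container name web is a hypothetical example. The second container joins the network namespace of the first, so both see the same network interfaces:

# Start a hypothetical container named web
sudo docker run -d --name web nginx:1.9.1

# Reuse web's network namespace and list its interfaces
sudo docker run --net=container:web busybox ip addr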

The recommended approach is to use user-defined bridge networks to manage container communication and enable DNS resolution. Using Docker's built-in network drivers, you are able to create a bridge, overlay or macvlan network. Further customization is supported through network plugins, also known as remote network drivers.

Although a user-defined bridge network is similar to the default bridge, it brings new features, such as DNS resolution between containers, and sheds some features of the default network. An example of creating a bridge network is shown below:

docker network create --driver bridge bridge_net1

To return information about the network we have created, we use the command:

docker network inspect bridge_net1


If you list the networks again using docker network ls, you will find the newly created bridge_net1 among them.

After you have created your network, you use it by passing it to the --net flag. It is important to note that all containers on this type of network have to be within a single host. A user-defined bridge network is relevant when you need to run a small network on a single host; when you need a network that spans hosts, you use an overlay network.
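A brief sketch of using the network is shown below; the container name web1 is a hypothetical example. Because this is a user-defined bridge, the second container can resolve web1 by name:

# Attach a named container to the user-defined bridge
sudo docker run -d --net=bridge_net1 --name web1 nginx:1.9.1

# A second container on the same network can reach it by name
sudo docker run --net=bridge_net1 busybox ping -c 2 web1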

An overlay network can be used with or without swarm mode, although using an overlay network without swarm mode is not recommended for most common use cases. Note that creating an overlay network with swarm mode requires the daemon to be part of a swarm, initialized with docker swarm init. An example of creating an overlay network with swarm mode is shown below.

sudo docker network create --driver overlay --subnet 10.0.9.0/24 multiple-hosts-net

After creating the network, we can use it as shown below:

docker service create --replicas 2 --network multiple-hosts-net --name web-server nginx:1.9.1
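Note that docker service create requires an image argument; the service above runs the nginx:1.9.1 image used in the earlier examples. To check where the replicas were scheduled, you can run:

docker service ps web-server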

In this article, we introduced container networking in Docker. We identified the challenges that need to be overcome in single-host and multi-host networking. We discussed the built-in bridge, none, host and container network modes. Finally, we discussed the user-defined bridge and overlay networks.
