Tricks of the Trades

Docker - Daemon Administration and Networking (3)

Docker Logo Image

Preamble

This time we begin by centering on the Docker daemon and how it interacts with various process managers on different platforms. That is followed by an introduction to networking in Docker, using more of the Docker training images to link containers together into a basic network: specifically a PostgreSQL database container and a Python webapp container.

This is post three on Docker following on from Docker - Administration and Container Applications (2). If you’re looking for more generalised administration and basic example uses of the Docker Engine CLI then you may want to read that post instead.


1 – Docker Daemon Administration

The Docker daemon is the background service that handles running containers and all their states.

The starting and stopping of the Docker daemon is often configured through a process manager like systemd or Upstart. In a production environment this is very useful as you have a lot of customisable control over the behaviour of the daemon.

It can instead be run directly from the command line:

$ docker daemon

When active and running, it listens on the Unix socket unix:///var/run/docker.sock by default.

If you’re running the docker daemon directly like this you can append configuration options to the command.

An example of running the docker daemon with configuration options is as follows:

$ docker daemon -D --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376
  • -D, --debug=false – Enable or disable debug mode.
  • --tls=false – Enable or disable TLS.
  • --tlscert= – Certificate location.
  • --tlskey= – Key location.
  • -H, --host=[] – Daemon socket(s) to connect to.
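On newer Docker releases, these same options can also be kept in a daemon configuration file, read from /etc/docker/daemon.json by default, rather than appended to the command each time. A minimal sketch mirroring the flags above (the paths and address are example values, not requirements):

```json
{
  "debug": true,
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://192.168.59.3:2376"]
}
```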

More options for the Docker daemon are listed in the Docker documentation.

Upstart

The default Docker daemon Upstart job is found in /etc/init/docker.conf.

To check the status of the daemon:

$ sudo status docker

To start the Docker daemon:

$ sudo start docker

Stop the Docker daemon:

$ sudo stop docker

Or restart the daemon:

$ sudo restart docker

Logs for Upstart jobs are found in /var/log/upstart and are compressed when the daemon is not running. So run the daemon to read the active log file, docker.log, via:

$ sudo tail -fn 15 /var/log/upstart/docker.log

systemd

Default unit files are stored in subdirectories of /usr/lib/systemd and /lib/systemd/system. Custom user-created unit files are kept in /etc/systemd/system.

To check the status of the daemon:

$ sudo systemctl status docker

To start the Docker daemon:

$ sudo systemctl start docker

Stop the Docker daemon:

$ sudo systemctl stop docker

Or restart the daemon:

$ sudo systemctl restart docker

To ensure the Docker daemon starts at boot:

$ sudo systemctl enable docker

Logs for Docker are viewed in systemd with:

$ journalctl -u docker

A more in-depth look at systemd and Docker is kept here in the Docker docs:

Docker Documentation - systemd


2 – Process Manager Container Automation

Restart policies are an in-built Docker mechanism for restarting containers automatically when they exit. They must be set manually with the --restart flag, which takes a policy such as always or on-failure, and are also triggered when the Docker daemon starts up (such as after a system reboot). Restart policies start linked containers in the correct order too.
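As a sketch of how the flag is used (the image and container name here just follow the Ghost example used later in this post, and the two commands are alternatives, not a sequence):

```shell
# Restart the container whenever it exits, and on daemon startup:
$ docker run -d --restart=always --name ghost-container ghost

# Or: retry at most five times, and only when the container exits non-zero:
$ docker run -d --restart=on-failure:5 --name ghost-container ghost
```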

If you have non-Docker processes that depend on Docker containers you can use a process manager like upstart, systemd or supervisor instead of these restart policies to replace this functionality.

This is what we will cover in this step.

Note: Be aware that process managers will conflict with Docker restart policies if both are in use, so don't set restart policies on containers managed by a process manager.

For these examples, assume that a container running Ghost has already been created with the name ghost-container (i.e. --name=ghost-container).

Upstart

/etc/init/ghost.conf
description "Ghost Blogging Container"
author "Scarlz"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
/usr/bin/docker start -a ghost-container
end script

With this setup, docker start -a attaches to the container's output, starting the container first if needed, so the process manager stays bound to it.

All signals are also forwarded, so that the process manager can detect when the container stops and restart it correctly.

If you need to pass options to the containers (such as --env) then you’ll need to use docker run rather than docker start in the job configuration.

For example:

/etc/init/ghost.conf
script
/usr/bin/docker run --env foo=bar --name ghost-container ghost
end script

This differs as it creates a new container using the ghost image every time the service is started and takes into account the extra options.
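Since docker run fails if a container with that name already exists, a fuller version of this job might clear any stale container first. A sketch under that assumption, using Upstart's pre-start stanza:

```
description "Ghost Blogging Container"
start on filesystem and started docker
stop on runlevel [!2345]
respawn

# Remove any leftover container from a previous run, ignoring errors
pre-start script
    /usr/bin/docker rm -f ghost-container || true
end script

script
    /usr/bin/docker run --env foo=bar --name ghost-container ghost
end script
```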

systemd

/etc/systemd/system/ghost.service
[Unit]
Description=Ghost Blogging Container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a ghost-container
ExecStop=/usr/bin/docker stop -t 2 ghost-container

[Install]
WantedBy=multi-user.target

As with the Upstart job, docker start -a attaches to the running container (starting it if needed) and signals are forwarded, so the process manager can detect when the container stops and restart it correctly.

If you need to pass options to the containers (such as --env), then you’ll need to use docker run rather than docker start in the job configuration.

For example:

/etc/systemd/system/ghost.service
ExecStart=/usr/bin/docker run --env foo=bar --name ghost-container ghost
ExecStop=/usr/bin/docker stop -t 2 ghost-container ; /usr/bin/docker rm -f ghost-container

This differs as it creates a new container with the extra options every time the service is started, which stops and removes itself when the Docker service ends.
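Putting the run variant together, the whole unit might look like the following sketch; the leading - on ExecStartPre tells systemd to ignore a failure there (e.g. no stale container to remove):

```ini
[Unit]
Description=Ghost Blogging Container
Requires=docker.service
After=docker.service

[Service]
Restart=always
# Clear any leftover container before creating a fresh one
ExecStartPre=-/usr/bin/docker rm -f ghost-container
ExecStart=/usr/bin/docker run --env foo=bar --name ghost-container ghost
ExecStop=/usr/bin/docker stop -t 2 ghost-container

[Install]
WantedBy=multi-user.target
```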


3 – Docker Networks

Network drivers allow containers to be linked together into networks. Docker comes with two default network drivers as part of the normal installation:

  • The bridge driver.
  • The overlay driver.

These two drivers can be replaced with third-party drivers that perform better in particular situations, but for basic Docker use the given defaults are fine.

Docker also automatically includes three default networks with the base install:

$ docker network ls

Listing them as:

Output
NETWORK ID          NAME                DRIVER
2d41f8bbf514        host                host
f9ee6308ecdd        bridge              bridge
49dab653f349        none                null

The network named bridge is classed as a special network. Docker launches any and all containers in this network (unless told otherwise).

So if you currently have containers running, they will have been placed in the bridge network.

Networks can be inspected using the next command, where bridge is the network name to be inspected:

$ docker network inspect bridge

The output shows any and all configured directives for the network:

Output
[
    {
        "Name": "bridge",
        "Id": "f9ee6308ecdd5dc5a588428469de1b7c475fdafdab49cfc33c1c3ac0bf0559ab",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Containers": {
            "ff98b5ed01dd4323f0ce38af9b8cea2d49d0b1e194cf147a3a8f632278a11451": {
                "EndpointID": "b7c9fabcda00ccebd6523f76477b51eba00dd5d3f26940355139fff62d5576bb",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]

This inspect output changes as a network is altered and configured; how to do that is covered in later steps.
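When only a single field is wanted, recent Docker versions let you filter the inspect output with a Go template through the -f/--format flag. A sketch pulling just the subnet out of the bridge network, which for the network shown above would print 172.17.0.0/16:

```shell
$ docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' bridge
```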


4 – Creating Docker Networks

Networks are a natural way to isolate containers from other containers or other networks. The default networks are not meant to be relied upon solely, however; it's better to create your own network groups.

Remember there are two default drivers and therefore two native network types: bridge and overlay. A bridge network is confined to a single host running the Docker Engine, whereas an overlay network can span multiple hosts running the Docker software.

To make the simpler “bridge” type network we use the create option:

$ docker network create -d bridge <new-network-name>

In this last command the -d (driver) option with the bridge value specifies the network type to create, followed by a name for the new network at the end.
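The create command can also pin down the addressing itself instead of letting Docker choose. A sketch with illustrative ranges (the --subnet and --gateway values here are arbitrary examples):

```shell
$ docker network create -d bridge \
    --subnet=172.25.0.0/16 \
    --gateway=172.25.0.1 \
    test-bridge-network
```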

To see the new network after creation:

$ docker network ls

Shown on the last line:

Output
NETWORK ID          NAME                  DRIVER
f9ee6308ecdd        bridge                bridge
49dab653f349        none                  null
2d41f8bbf514        host                  host
08f44ef7de28        test-bridge-network   bridge

Overlay networks are a much wider topic due to their use of multiple hosts, so they aren't covered in this post, but the basic principles and where to start are covered in the link below:

Docker Documentation - Working with Network Commands


5 – Connecting Containers to Networks

Creating and using these networks allows container applications to operate in unison and as securely as possible. Containers inside a network can only interact with their counterparts and are isolated from everything outside the network, similar to VLAN segregation inside an IP-based network.

Usually containers are added to a network when you first launch and run the container. We’ll follow the example from the Docker Documentation that uses a PostgreSQL database container and the Python webapp to demonstrate a simple network configuration.

First launch a container running the PostgreSQL database training image, and in the process add it to your custom made bridge network from the previous step.

To do this we must pass the --net= flag to the new container, providing the name of our custom bridge network, which in my earlier example was test-bridge-network:

$ docker run -d --net=test-bridge-network --name db training/postgres

You can inspect this aptly named db container to see where exactly it is connected:

$ docker inspect --format='{{json .NetworkSettings.Networks}}' db

This shows us the network details for the database container’s test-bridge-network connection:

Output
{"test-bridge-network":{"EndpointID":"0008c8566542ef24e5e57d5911c8e33a79f0fcb91b1bbdd60d5cdec3217fb517","Gateway":"172.18.0.1","IPAddress":"172.18.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:12:00:02"}}

Next, run the Python training web application in daemonised mode without any extra options:

$ docker run -d --name python-webapp training/webapp python app.py

Inspect the python-webapp container’s network connection in the same way as before:

$ docker inspect --format='{{json .NetworkSettings.Networks}}' python-webapp

As expected this new container is running under the default bridge network, shown in the output of the last command:

Output
{"bridge":{"EndpointID":"e5c7f1c8d097fdafc35b89d7bce576fe01a22709424643505d79abe394a59767","Gateway":"172.17.0.1","IPAddress":"172.17.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02"}}

Docker lets us connect a container to as many networks as we like. More importantly for us we can also connect an already running container to a network.

Attach the running python-webapp container to test-bridge-network as needed:

$ docker network connect test-bridge-network python-webapp

To test the container connections to our custom network we can ping from one to the other.

Get the IP address of the db container:

$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db

In my case this was:

Output
172.18.0.2

Now that we have the IP address, open an interactive shell into the python-webapp container:

$ docker exec -it python-webapp bash

Attempt to ping the db container with the IP address from before, substituting 172.18.0.2 for your address equivalent:

ping -c 10 172.18.0.2

As long as you successfully connected both containers earlier on, the ping command will be successful:

Output
root@fc0f73c129c0:/opt/webapp# ping -c 10 db
PING db (172.18.0.2) 56(84) bytes of data.
64 bytes from db (172.18.0.2): icmp_seq=1 ttl=64 time=0.216 ms
64 bytes from db (172.18.0.2): icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from db (172.18.0.2): icmp_seq=3 ttl=64 time=0.053 ms
64 bytes from db (172.18.0.2): icmp_seq=4 ttl=64 time=0.063 ms
64 bytes from db (172.18.0.2): icmp_seq=5 ttl=64 time=0.065 ms
64 bytes from db (172.18.0.2): icmp_seq=6 ttl=64 time=0.063 ms
64 bytes from db (172.18.0.2): icmp_seq=7 ttl=64 time=0.062 ms
64 bytes from db (172.18.0.2): icmp_seq=8 ttl=64 time=0.064 ms
64 bytes from db (172.18.0.2): icmp_seq=9 ttl=64 time=0.061 ms
64 bytes from db (172.18.0.2): icmp_seq=10 ttl=64 time=0.063 ms

--- db ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8997ms
rtt min/avg/max/mdev = 0.053/0.076/0.216/0.047 ms

Conveniently, container names work in place of an IP address in this scenario, as the output above shows:

ping -c 10 db

Press CTRL + D to exit the container prompt, or type in exit instead.

And with that we have two containers on the same user-created network, able to communicate with each other and share data, which is what we would be aiming for in the case of the PostgreSQL database and Python webapp.
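As a check one level up from ping, the webapp container can try PostgreSQL's default port on the db container by name. A sketch assuming the training image's Python is available inside the container and that PostgreSQL is listening on its standard port 5432:

```shell
$ docker exec python-webapp python -c "import socket; socket.create_connection(('db', 5432), timeout=3); print('db is reachable')"
```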

There’s more ways of sharing data between containers once they are connected through a network, but these are covered in the next post of the series.


6 – Miscellaneous Networking Commands

Here are a few complementary commands relating to what has already been covered in this post.

At some point you are likely to need to remove a container from its network. This is done by using the disconnect command:

$ docker network disconnect test-bridge-network <container-name>

Here test-bridge-network is the name of the network, followed by which container you want to remove from it.

When all the containers in a network are stopped or disconnected, you can remove networks themselves completely with:

$ docker network rm test-bridge-network

Meaning the test-bridge-network is now deleted and absent from the list of existing networks:

Output
NETWORK ID          NAME                DRIVER
2e38b3a44489        bridge              bridge
79d9d21edbec        none                null
61371e641e1b        host                host

The output here is garnered from the docker network ls command.


Networking in Docker begins here with these examples but goes a lot further than what we’ve covered. Data volumes, data containers, and mounting host volumes are described in the next post on Docker when it’s released.

Links to subsequent Docker posts can be found on the Trades page.

More Information

Easily deploy an SSD cloud server on Digital Ocean in 55 seconds. Sign up using my link and receive $10.00 in free credit: https://www.digitalocean.com/?refcode=e91058dbfc7b

– Scarlz: @5car1z