Tricks of the Trades

Docker - Building Images and Docker Hub (5)

Docker Logo

Preamble

Docker images can be thought of as blueprints and house the software or files required to run your application inside of a container. So far in these Docker posts all container images have been pulled from an online source and no real interaction with the images themselves has been explored.

However, in this post we're taking a very simple Python Flask application and going through the process of dockerising it - which in non-jargon terms means configuring and creating our own custom Docker image, then running it in a container like any other image. This usually also involves uploading the image to Docker Hub for others to pull down and use, so that is covered in the guide too.

The Docker - Data Volumes and Data Containers (4) post that comes before this one is mostly unrelated, so it isn't really a requirement for this post, but it is still worth checking out overall.


1 – Clone the Repository

The example application used in this post is named “Flaskr” and serves as a very simple messaging board. It allows a user to sign in/out, add new written entries to the message board displayed, and does all this using SQLite as the database backend.

Clone this example application and its code locally.

$ git clone https://github.com/5car1z/docker-flaskr.git ~/docker-flaskr

Change your working directory to the new repository.

$ cd ~/docker-flaskr

Take a quick glance at the files using:

$ ls

Which returns:

Output
flaskr.py  flaskr_settings  README  requirements.txt  schema.sql  static  templates  test_flaskr.py

Then move onto the next step.


2 – Configure the Application

Most Flask or Python projects contain a file that holds environment-specific configuration values. These settings differ from user to user and between development and production environments. To run our Flaskr application and build a working Docker image later on, we must set the values in this file beforehand.

Open the flaskr_settings file with your preferred text editor.

$ vim flaskr_settings

The first two lines containing the database file location and debug status should remain as they are. There is no need to change these for this scenario.

flaskr_settings
# configuration
DATABASE = 'flaskr.db'
DEBUG = False

Generating a secret key for the third line here is easiest using a Python console.

flaskr_settings
SECRET_KEY = ''

Back on the command line outside of the editor run:

$ python

Import the OS module.

>>> import os

Run the associated OS function for generating a string of random bytes (urandom).

>>> os.urandom(24)

A 24-byte value is returned as output, for use as the secret key in the flaskr_settings file. The key value shown here is for demonstration purposes only.

Example Key Output
'\xebqD\x0f\xf3\xcf\xaa\x9e]%\x86\xd7\x11h\x8f\xa3\xa6\xbb=\xf7m\xf2{\xfd'
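
If you prefer, the same kind of key can be generated without entering the interactive console, as a shell one-liner (the quoted repr output shown above assumes Python 2):

$ python -c 'import os; print(repr(os.urandom(24)))'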

Copy your own secret key value into the third line of the flaskr_settings file - you can exit the Python console by pressing CTRL + D once the key has been copied.

$ vim flaskr_settings

flaskr_settings
SECRET_KEY = '\xebqD\x0f\xf3\xcf\xaa\x9e]%\x86\xd7\x11h\x8f\xa3\xa6\xbb=\xf7m\xf2{\xfd'

Note: Only one set of enclosing apostrophes is required: ''

On the last two lines of the configuration file provide a username and password. These details are used for authentication when logging into the app after it is up and running.

Add in your own values.

flaskr_settings
USERNAME = 'username'
PASSWORD = 'password'

Save your changes to the flaskr_settings file before continuing, and exit the file.

My example entries and file look like this when completed:

flaskr_settings
# configuration
DATABASE = 'flaskr.db'
DEBUG = False
SECRET_KEY = '\xebqD\x0f\xf3\xcf\xaa\x9e]%\x86\xd7\x11h\x8f\xa3\xa6\xbb=\xf7m\xf2{\xfd'
USERNAME = 'scarlz'
PASSWORD = 'password'

3 – Create the Dockerfile

The build process and configuration parameters for our eventual Docker image are defined in a new file named "Dockerfile".

Create the new Dockerfile using your text editor again, and place each of the upcoming actions on its own separate line.

$ vim Dockerfile

On the first line, tell Docker to use the official Python 2.7 image as the base for our own custom image.

Dockerfile
FROM python:2.7

Define an environment variable that tells Flaskr the name of the configuration file we completed earlier.

Dockerfile
ENV FLASKR_SETTINGS flaskr_settings
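
For reference, this is the standard Flask pattern for consuming that variable - a sketch of what the Flaskr code is assumed to contain, not necessarily its exact lines:

flaskr.py
from flask import Flask

app = Flask(__name__)
app.config.from_envvar('FLASKR_SETTINGS', silent=True)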

Add the requirements.txt file to the file-system of the image we are creating.

Dockerfile
ADD requirements.txt /tmp/requirements.txt

Install the Flaskr application dependencies onto this image - sourced in from the “requirements” file.

Dockerfile
RUN pip install -r /tmp/requirements.txt

Add the current working directory . of the project and its contents to a new directory on the image’s file-system.

Dockerfile
ADD . /flaskr-application

Set the working directory on the image's file-system to the directory we just added.

Dockerfile
WORKDIR /flaskr-application

Open port 5000 on the container so we can map it to a host port later.

Dockerfile
EXPOSE 5000

Run the Flaskr app on this image, once the container is launched by the user.

Dockerfile
CMD ["python", "flaskr.py", "--host", "0.0.0.0", "--port", "5000"]

The Dockerfile in full:

Dockerfile
FROM python:2.7
ENV FLASKR_SETTINGS flaskr_settings
ADD requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt
ADD . /flaskr-application
WORKDIR /flaskr-application
EXPOSE 5000
CMD ["python", "flaskr.py", "--host", "0.0.0.0", "--port", "5000"]

Make sure your own file's contents match the above, and then save the changes.


4 – Build and Run the Image

Using Docker (which you should already have installed) we’re going to build the custom Flaskr image configured in the previous step.

When entering this next command, be aware that the parameters of -t should be replaced with your own username and preferred image name. These details are used later on when registering the image externally.

$ docker build --no-cache -t scarlz/flaskr-application .

Note: The -t option assigns a name and tag to the image, used by Docker Hub or another image registry service.

Give the build process a few minutes to download and carry out the necessary operations, noting its progress via the output.

If the Dockerfile was configured properly in the previous step you’ll get a final output similar to:

Output
Successfully built c4e546ed282d
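
You can also confirm the new image now exists locally by listing all images on the system:

$ docker images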

Run the newly built Docker image in a daemonised container, mapping the internal container port 5000 to the host port 32775 so we can view the app locally.

$ docker run --name flaskr-container -p 32775:5000 -d scarlz/flaskr-application

Confirm the container is up and running.

1
$ docker ps

Running container details are returned:

CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                      NAMES
c3937994e66b        scarlz/flaskr-application   "python flaskr.py --h"   3 seconds ago       Up 3 seconds        0.0.0.0:32775->5000/tcp    flaskr-container

Preview the Flaskr app in your web browser by visiting:

http://0.0.0.0:32775

Flaskr Homepage Image

Log in to the application with the authentication details from step 2 if you wish to test it out.

Flaskr Internal Form Image


5 – Push the Image to Docker Hub

Once images have been built and tested successfully you may want to make them accessible to others through a public or private Docker registry service. The official registry service open to all provided by Docker is known as “Docker Hub”.

Create a free account by registering with the service at the previous link, then go back to the command line and log in using:

$ docker login

Enter the authentication details you used to sign up to the service as prompted.

Output
Username: 
Password:
Email:
Login Succeeded

Once successfully authenticated, push the image you created earlier to Docker Hub by providing the "tag" name assigned to it.

$ docker push scarlz/flaskr-application:latest

The :latest suffix ensures the most recently built version of the image is sent.
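
If you originally built the image without your Docker Hub username as a prefix, you can retag it before pushing - the names here follow my earlier example:

$ docker tag flaskr-application scarlz/flaskr-application:latest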

Successfully pushing the image will return an output akin to:

77e39ee82117: Image successfully pushed

You can see the example image I pushed to Docker Hub at:

https://hub.docker.com/r/5car1z/flaskr-application

Docker Hub - Flaskr


There are many more practices and nuances not covered here when it comes to building, tagging, and pushing to Docker registries, but hopefully this serves as a simple example of how the process can be carried out. Something to bear in mind is that Docker Hub has in the past been given bad press in terms of performance (most notably speed), although the service continues to improve as time goes on. It is also for these reasons, or similar, that many use third-party private registry platforms in its place, e.g. Portus.

The next and final post in this series looks briefly at some of the extra platforms/toolsets that form more of the Docker ecosystem.

Links to subsequent Docker posts can be found on the Trades page.


– Scarlz: @5car1z

Docker - Data Volumes and Data Containers (4)

Docker Logo

Preamble

This blog post is becoming more and more outdated as time goes on; it would be better to consult the official Docker documentation for this kind of thing!

Docker containers are a lot more scalable and modular once they have the links in place that allow them to share data. How these links are created and arranged depends upon the arranger, who will choose either to create a file-system data volume or a dedicated data volume container.

This post works through these two common choices: data volumes and data volume containers, with consideration of the commands involved in backing up, restoring, and migrating said data volumes.

This is post four on Docker following on from Docker - Daemon Administration and Networking (3). Go back and read the latter half of that post to see how to network containers together so they can properly communicate back and forth - if you need to.


1 – Creating Data Volumes

A “data volume” is a marked directory inside of a container that exists to hold persistent or commonly shared data. Assigning these volumes is done when creating a new container.

Any data already present as part of the Docker image in a targeted volume directory is carried forward into the new container and not lost. This however is not true when mounting a local host directory (covered later), as the existing data is temporarily hidden by the mount.

You can add a data volume to a container using the -v flag in conjunction with the create or run command. You can use -v multiple times to mount multiple data volumes.

This next command will create a data volume inside a new container in the /webapp directory.

$ docker run -d -P --name test-container -v /webapp training/webapp python app.py
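
To confirm where Docker placed this volume on the host's file-system, you can inspect the container's mounts - the --format string below assumes a reasonably recent Docker version:

$ docker inspect --format '{{ json .Mounts }}' test-container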

Data volumes are very useful as once designated and created they can be shared and included as part of other containers. It’s also important to note that any changes to data volumes are not included when you update an image, but conversely data volumes will persist even if the container itself is deleted.

Note: The VOLUME instruction in a Dockerfile will add one or more new volumes to any containers created from the image.
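
As a minimal sketch, a Dockerfile using this instruction might look like the following - the base image and path are illustrative only:

Dockerfile
FROM ubuntu:14.04
VOLUME ["/webapp"]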

This preservation is due to the fact that data volumes are meant to persist independent of a container’s life cycle. In turn this also means Docker never garbage collects volumes that are no longer in use by a container.


2 – Creating Host Data Volumes

You can instead mount a directory from your Docker daemon’s host into a container; you may have seen this used once or twice in the previous posts.

Mounting a host directory can be useful for testing. For example, you can mount source code inside a container. Then, change the source code and see its effect on the application in real time. The directory on the host must be specified as an absolute path and if the directory doesn’t exist Docker will automatically create it for you.

The next example command mounts the host directory /src/webapp into the container at the /opt/webapp directory.

$ docker run -d -P --name test-container -v /src/webapp:/opt/webapp training/webapp python app.py

Some internal rules and behaviours for this process are:

  • The targeted container directory must always take an absolute full file-system path.

  • The host source directory can be either an absolute path or a name value.

  • If the targeted container path already exists inside the container’s image, the host directory mount overlays but does not remove the destination content. Once the mount is removed, the destination content is accessible again.

Docker volumes mount in read-write mode by default, but you can set them to mount as read-only if you like.

Here the same /src/webapp directory is linked again but the extra :ro option makes the mount read-only.

$ docker run -d -P --name web -v /src/webapp:/opt/webapp:ro training/webapp python app.py

Note: It’s not possible to mount a host directory using a Dockerfile because by convention images should be portable and flexible, and a specific host directory might not be available on all potential hosts.


3 – Mounting Individual Host Files

The -v flag used so far can target a single file instead of an entire directory from the host machine. This is done by mapping a specific host file to a specific file path inside the container.

A great interactive example of this, which creates a new container and drops you into a bash shell with your bash history from the host, is as follows:

$ docker run --rm -it -v ~/.bash_history:/root/.bash_history ubuntu /bin/bash

Furthermore, when you exit the container, the commands typed inside the container are written back to the host's version of the .bash_history file.


4 – Creating Dedicated Data Volume Containers

A popular practice with Docker data sharing is to create a dedicated container that holds all of your persistent shareable data resources, mounting the data inside of it into other containers once they are created and set up.

This example taken from the Docker documentation uses the PostgreSQL training image as a base for the data volume container.

$ docker create -v /data-store --name data-store training/postgres /bin/true

Note: /bin/true does nothing and returns exit status 0; it simply gives the container a successful command to run.

The --volumes-from flag is then used to mount the /data-store volume inside of other containers:

$ docker run -d --volumes-from data-store --name database-container-1 training/postgres

This process is repeated for additional new containers:

$ docker run -d --volumes-from data-store --name database-container-2  training/postgres

Be aware that you can use multiple --volumes-from flags in one command to combine data volumes from multiple other dedicated data containers.

An alternative idea is to mount the volumes from each subsequent container to the next, instead of the original dedicated container linking to new ones.

This forms a chain that would begin by using:

$ docker run -d --name database-container-3 --volumes-from database-container-2  training/postgres

Remember that if you remove containers that mount volumes, the volume store and its data will not be deleted. Docker preserves it.

To fully delete a volume from the file-system you must run:

$ docker rm -v <container name>

Where <container name> is “the last container with a reference to the volume.”

Note: There is no cautionary Docker warning provided when removing a container without the -v option. So if a container has volumes mounted, -v must be passed to fully remove them.

Dangling Volumes

“Dangling volumes” refers to container volumes that are no longer referenced by a container.

Fortunately there is a command to list out all the stray volumes on a system.

$ docker volume ls -f dangling=true

To remove a volume that’s no longer needed use:

$ docker volume rm <volume name>

Where <volume name> is substituted for the dangling volume name shown in the previous ls output.


5 – Backing Up and Restoring Data Volumes

How are data volumes maintained when it comes to things like backups, restoration, and migration? Here is one solution that takes care of these necessities using a dedicated data container.

To backup a volume:

$ docker run --rm --volumes-from data-container -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /data-store

Here’s how the previous command works:

  1. The --volumes-from flag creates a new nameless container that mounts the data volume inside data-container, the container we wish to back up.
  2. A local host directory is mounted as /backup. Then tar archives the contents of the /data-store volume to a backup.tar file inside the local /backup directory.
  3. The --rm flag removes the container once the command completes and it exits.

We are left with a backup of the /data-store volume on the local host.

From here you could restore the volume in whatever way you wish.

To restore into a new container run:

$ docker run -v /data-store --name data-container-2 ubuntu /bin/bash

Then extract the backup file contents into the new container's data volume:

$ docker run --rm --volumes-from data-container-2 -v $(pwd):/backup ubuntu bash -c "cd /data-store && tar -xvf /backup/backup.tar"

Now the new container is up and running with the files from the original /data-store volume.


6 – Volume and Data Container Issues

  • Orphan Volumes – Referred to as dangling volumes earlier on. These are the leftover untracked volumes that aren’t removed from the system once a container is removed/deleted.

  • Security – Other than the usual Unix file permissions and the ability to set read-only or read-write privileges, Docker volumes and data containers have no additional security placed on them.

  • Data Integrity – Sharing data using volumes and data containers provides no level of data integrity protection. Data protection features are not yet built into Docker, e.g. data snapshots, automatic data replication, automatic backups, etc. So data management has to be handled by the administrator or the container itself.

  • External Storage – The current design does not take into account the ability to use a Docker volume spanning from one host to another. They must be on the same host.


It seems like a large amount of information has been covered here, but really only two ideas have been explored: singular data volumes, and the preferred independent data container. There are also new updates to Docker on the horizon as always, so some of the issues raised here are hopefully soon to be resolved. The next post on Docker covers building custom images with a Dockerfile and pushing them to Docker Hub.

Links to subsequent Docker posts can be found on the Trades page.


– Scarlz: @5car1z

Docker - Daemon Administration and Networking (3)

Docker Logo Image

Preamble

This time we begin by centering on the Docker daemon and how it interacts with various process managers from different platforms. This is followed by an introduction to networking in Docker that uses more of the Docker training images to link together and create a basic network of containers - specifically a PostgreSQL database container and a Python webapp container.

This is post three on Docker following on from Docker - Administration and Container Applications (2). If you’re looking for more generalised administration and basic example uses of the Docker Engine CLI then you may want to read that post instead.


1 – Docker Daemon Administration

The Docker daemon is the background service that handles running containers and all their states.

The starting and stopping of the Docker daemon is often configured through a process manager like systemd or Upstart. In a production environment this is very useful as you have a lot of customisable control over the behaviour of the daemon.

Instead of this, the daemon can also be run directly from the command line:

$ docker daemon

When active and running it listens on the Unix socket unix:///var/run/docker.sock.

If you’re running the docker daemon directly like this you can append configuration options to the command.

An example of running the docker daemon with configuration options is as follows:

$ docker daemon -D --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376

  • -D, --debug=false – Enable or disable debug mode.
  • --tls=false – Enable or disable TLS.
  • --tlscert= – Certificate location.
  • --tlskey= – Key location.
  • -H, --host=[] – Daemon socket(s) to connect to.

More options are on offer for the Docker daemon in the Docker documentation.

Upstart

The default Docker daemon Upstart job is found in /etc/init/docker.conf.

To check the status of the daemon:

$ sudo status docker

To start the Docker daemon:

$ sudo start docker

Stop the Docker daemon:

$ sudo stop docker

Or restart the daemon:

$ sudo restart docker

Logs for Upstart jobs are found in /var/log/upstart and are compressed when the daemon is not running. So run the daemon/container to read the active log file, docker.log, via:

$ sudo tail -fn 15 /var/log/upstart/docker.log

systemd

Default unit files are stored in the subdirectories of /usr/lib/systemd and /lib/systemd/system. Custom user-created unit files are kept in /etc/systemd/system.

To check the status of the daemon:

$ sudo systemctl status docker

To start the Docker daemon:

$ sudo systemctl start docker

Stop the Docker daemon:

$ sudo systemctl stop docker

Or restart the daemon:

$ sudo systemctl restart docker

To ensure the Docker daemon starts at boot:

$ sudo systemctl enable docker

Logs for Docker are viewed in systemd with:

$ journalctl -u docker

A more in-depth look at systemd and Docker is kept here in the Docker docs:

Docker Documentation - systemd


2 – Process Manager Container Automation

Restart policies are an in-built Docker mechanism for restarting containers automatically when they exit. These must be set manually with the --restart flag (e.g. --restart=always) and are also triggered when the Docker daemon starts up (like after a system reboot). Restart policies start linked containers in the correct order too.
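
For reference, a restart policy is applied per container at run time, for example (the container and image names here are illustrative):

$ docker run -d --restart=always --name ghost-container ghost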

If you have non-Docker processes that depend on Docker containers you can use a process manager like upstart, systemd or supervisor instead of these restart policies to replace this functionality.

This is what we will cover in this step.

Note: Be aware that process managers will conflict with Docker restart policies if they are both in action, so don't use restart policies if you are using a process manager.

For these examples, assume that the container in each case has already been created and is running Ghost with the name --name=ghost-container.

Upstart

/etc/init/ghost.conf
description "Ghost Blogging Container"
author "Scarlz"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
/usr/bin/docker start -a ghost-container
end script

With this setup, docker start -a automatically attaches the process manager to the running container, or starts it if needed.

All signals from Docker are also forwarded so that the process manager can detect when a container stops, and correctly restart it.

If you need to pass options to the containers (such as --env) then you’ll need to use docker run rather than docker start in the job configuration.

For Example:

/etc/init/ghost.conf
script
/usr/bin/docker run --env foo=bar --name ghost-container ghost
end script

This differs as it creates a new container using the ghost image every time the service is started and takes into account the extra options.

systemd

/etc/systemd/system/ghost.service
[Unit]
Description=Ghost Blogging Container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a ghost-container
ExecStop=/usr/bin/docker stop -t 2 ghost-container

[Install]
WantedBy=multi-user.target

With this setup, docker start -a automatically attaches the process manager to the running container, or starts it if needed.

All signals from Docker are also forwarded so that the process manager can detect when a container stops, and correctly restart it.

If you need to pass options to the containers (such as --env), then you’ll need to use docker run rather than docker start in the job configuration.

For Example:

/etc/systemd/system/ghost.service
ExecStart=/usr/bin/docker run --env foo=bar --name ghost-container ghost
ExecStop=/usr/bin/docker stop -t 2 ghost-container ; /usr/bin/docker rm -f ghost-container

This differs as it creates a new container with the extra options every time the service is started, which stops and removes itself when the Docker service ends.


3 – Docker Networks

Network drivers allow containers to be linked together and networked. Docker comes with two default network drivers as part of the normal installation:

  • The bridge driver.
  • The overlay driver.

These two drivers can be replaced with third-party drivers that perform better in different situations, but for basic Docker use the given defaults are fine.

Docker also automatically includes three default networks with the base install:

$ docker network ls

Listing them as:

Output
NETWORK ID          NAME                DRIVER
2d41f8bbf514        host                host
f9ee6308ecdd        bridge              bridge
49dab653f349        none                null

The network named bridge is classed as a special network: Docker launches any and all containers in this network unless told otherwise.

So if you currently have containers running, they will have been placed in the bridge network group.

Networks can be inspected using the next command, where bridge is the network name to be inspected:

$ docker network inspect bridge

The output shows any and all configured directives for the network:

Output
[
    {
        "Name": "bridge",
        "Id": "f9ee6308ecdd5dc5a588428469de1b7c475fdafdab49cfc33c1c3ac0bf0559ab",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Containers": {
            "ff98b5ed01dd4323f0ce38af9b8cea2d49d0b1e194cf147a3a8f632278a11451": {
                "EndpointID": "b7c9fabcda00ccebd6523f76477b51eba00dd5d3f26940355139fff62d5576bb",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]

This inspect output changes as a network is altered and configured; how to do this is covered in later steps.


4 – Creating Docker Networks

Networks are natural ways to isolate containers from other containers or other networks. The original default networks are not to be solely relied upon however. It’s better to create your own network groups.

Remember there are two default drivers and therefore two native network types: bridge and overlay. Bridge networks can only make use of a single host running the Docker Engine software. An overlay network differs in that it can incorporate multiple hosts running the Docker software.

To make the simpler “bridge” type network we use the create option:

$ docker network create -d bridge <new-network-name>

In this last command the -d (driver) option with the bridge value specifies the network type we want to create, and a new name for the network goes at the end of the command.
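
For example, to create the bridge network used throughout the rest of this post:

$ docker network create -d bridge test-bridge-network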

To see the new network after creation:

$ docker network ls

Shown on the last line:

Output
NETWORK ID          NAME                  DRIVER
f9ee6308ecdd        bridge                bridge
49dab653f349        none                  null
2d41f8bbf514        host                  host
08f44ef7de28        test-bridge-network   bridge

Overlay networks are a much wider topic due to their inclusion of multiple hosts, so they aren't covered in this post, but the basic principles and where to start are mentioned in the link below:

Docker Documentation - Working with Network Commands


5 – Connecting Containers to Networks

Creating and using these networks allows container applications to operate in unison and as securely as possible. Containers inside a network can only interact with their counterparts and are isolated from anything outside the network - similar to VLAN segregation inside an IP-based network.

Usually containers are added to a network when you first launch and run the container. We’ll follow the example from the Docker Documentation that uses a PostgreSQL database container and the Python webapp to demonstrate a simple network configuration.

First launch a container running the PostgreSQL database training image, and in the process add it to your custom made bridge network from the previous step.

To do this we must pass the --net= flag to the new container, providing it with the name of our custom bridge network - which in my example earlier was test-bridge-network:

$ docker run -d --net=test-bridge-network --name db training/postgres

You can inspect this aptly named db container to see where exactly it is connected:

$ docker inspect --format='{{json .NetworkSettings.Networks}}' db

This shows us the network details for the database container’s test-bridge-network connection:

Output
{"test-bridge-network":{"EndpointID":"0008c8566542ef24e5e57d5911c8e33a79f0fcb91b1bbdd60d5cdec3217fb517","Gateway":"172.18.0.1","IPAddress":"172.18.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:12:00:02"}}

Next run the Python training web application in daemonised mode without any extra options:

$ docker run -d --name python-webapp training/webapp python app.py

Inspect the python-webapp container’s network connection in the same way as before:

$ docker inspect --format='{{json .NetworkSettings.Networks}}' python-webapp

As expected this new container is running under the default bridge network, shown in the output of the last command:

Output
{"bridge":{"EndpointID":"e5c7f1c8d097fdafc35b89d7bce576fe01a22709424643505d79abe394a59767","Gateway":"172.17.0.1","IPAddress":"172.17.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02"}}

Docker lets us connect a container to as many networks as we like. More importantly for us we can also connect an already running container to a network.

Attach the running python-webapp container to the “test-bridge-network” like we need:

$ docker network connect test-bridge-network python-webapp

To test the container connections to our custom network we can ping from one to the other.

Get the IP address of the db container:

$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db

In my case this was:

Output
172.18.0.2

Now that we have the IP address, open an interactive shell into the python-webapp container:

$ docker exec -it python-webapp bash

Attempt to ping the db container with the IP address from before, substituting 172.18.0.2 for your address equivalent:

ping -c 10 172.18.0.2

As long as you successfully connected both containers earlier on, the ping command will be successful:

Output
root@fc0f73c129c0:/opt/webapp# ping -c 10 db
PING db (172.18.0.2) 56(84) bytes of data.
64 bytes from db (172.18.0.2): icmp_seq=1 ttl=64 time=0.216 ms
64 bytes from db (172.18.0.2): icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from db (172.18.0.2): icmp_seq=3 ttl=64 time=0.053 ms
64 bytes from db (172.18.0.2): icmp_seq=4 ttl=64 time=0.063 ms
64 bytes from db (172.18.0.2): icmp_seq=5 ttl=64 time=0.065 ms
64 bytes from db (172.18.0.2): icmp_seq=6 ttl=64 time=0.063 ms
64 bytes from db (172.18.0.2): icmp_seq=7 ttl=64 time=0.062 ms
64 bytes from db (172.18.0.2): icmp_seq=8 ttl=64 time=0.064 ms
64 bytes from db (172.18.0.2): icmp_seq=9 ttl=64 time=0.061 ms
64 bytes from db (172.18.0.2): icmp_seq=10 ttl=64 time=0.063 ms

--- db ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8997ms
rtt min/avg/max/mdev = 0.053/0.076/0.216/0.047 ms

Conveniently container names work in the place of an IP address too in this scenario:

ping -c 10 db

Press CTRL + D to exit the container prompt, or type in exit instead.

And with that we have two containers on the same user-created network, able to communicate with each other and to share data - which is what we would be aiming for in the case of the PostgreSQL database and Python webapp.

There are more ways of sharing data between containers once they are connected through a network, but these are covered in the next post of the series.


6 – Miscellaneous Networking Commands

Here are a few complementary commands in relation to what has already been covered in this post.

At some point you are likely to need to remove a container from its network. This is done by using the disconnect command:

$ docker network disconnect test-bridge-network <container-name>

Here test-bridge-network is the name of the network, followed by which container you want to remove from it.

When all the containers in a network are stopped or disconnected, you can remove networks themselves completely with:

$ docker network rm test-bridge-network

Meaning the test-bridge-network is now deleted and absent from the list of existing networks:

Output
NETWORK ID          NAME                  DRIVER
2e38b3a44489        bridge                bridge
79d9d21edbec        none                  null
61371e641e1b        host                  host

The output here is garnered from the docker network ls command.


Networking in Docker begins here with these examples but goes a lot further than what we’ve covered. Data volumes, data containers, and mounting host volumes are described in the next post on Docker when it’s released.

Links to subsequent Docker posts can be found on the Trades page.


– Scarlz: @5car1z

Docker - Administration and Container Applications (2)

Docker Logo Image

Preamble

In this post we run a Python program in a Docker container sourced from the user guide, look at the various commands that come into play when administering containers, and then briefly set up some real-world applications with Docker.

This will be the second post on Docker following on from Docker - Installing and Running (1). If you’re brand new to Docker then the first post linked helps to introduce some of its concepts and theory to better understand the utilities it can provide.


1 – Example Container Application

Pull this training image from the Docker user guide:

$ docker run -d -P training/webapp python app.py

The -d option tells Docker to daemonise and run the container in the background. -P maps any required network ports inside the container to your host, and the Python application inside is also executed at the end.

Run the Docker process command to see running container details:

$ docker ps

The “webapp” image container shows network ports that have been mapped as part of the image configuration:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                      NAMES
b8a16d8e94cc        training/webapp     "python app.py"     2 minutes ago       Up 2 minutes        0.0.0.0:32768->5000/tcp    nostalgic_knuth

In my example here, port 5000 (the default Python Flask port) inside the container has been exposed on the host ephemeral TCP port 32768. Ephemeral ports are temporary, short-lived port numbers which typically range anywhere from 32768 to 61000. These are assigned dynamically and are never set in stone.

The Docker image decides all this for us, but as an aside it's also possible to manually set the ports used by a container.

This command assigns port 80 on the local host to port 5000 inside the container:

$ docker run -d -p 80:5000 training/webapp python app.py

It's important never to map ports in a fixed 1:1 fashion, i.e. 5000->5000/tcp, because if we needed multiple containers running the same image they would all contend for the same host port (5000), leaving only one instance accessible at a time.
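
To illustrate, two instances of the same image can run side by side when each is given an ephemeral host port via -P - the container names here are my own examples:

$ docker run -d -P --name webapp-1 training/webapp python app.py
$ docker run -d -P --name webapp-2 training/webapp python app.py

Running docker ps afterwards shows each container mapped to a different ephemeral host port.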

If you like you can check the original Python docker container’s port is working by accessing:

http://localhost:32768 or http://your.hosts.ip.address:32768 in a browser.

Where the port number 32768 is set to your own example container’s ephemeral port.

Another way to see this example container's port configuration is:

$ docker port <container-name>

Showing:

Output
5000/tcp -> 0.0.0.0:32768

To see the front facing host machine’s mapped ports individually add the number of the internal port to the end of the command:

$ docker port <container-name> 5000

Which shows:

Output
0.0.0.0:32768

Now that we have this example container up and running, we'll go through multiple administrative commands that are important when working with containers. These commands can be tested with the example container if you wish, or even better with multiple instances of it. Not every command shown will be completely applicable however.


2 – Administrative Commands

Here’s a list of select Docker commands to refer to when playing around with or monitoring containers. There are even more to check out as this list is by no means exhaustive.

A few core commands were already mentioned in Docker - Installing and Running (1) so won’t appear here.

The first command allows you to attach to a running container interactively using the container’s ID or name:

$ docker attach <container-name>

You can detach again from the container and leave it running with CTRL + P followed by CTRL + Q for a quiet exit.

To list the changed files and directories in a container's filesystem use diff:

$ docker diff <container-name>

Where in the output the three “event types” are tagged as either:

  • A - Add
  • D - Delete
  • C - Change

For real-time container and image activity begin a feed of event output with:

$ docker events

The exec command runs a command of your choosing inside a container without dropping you down into a shell inside the container.

This example runs the touch command in the background inside an existing container named ubuntu_bash:

$ docker exec -d ubuntu_bash touch /tmp/execWorks
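
To instead get an interactive shell inside a running container, drop the -d and pass the -it options:

$ docker exec -it <container-name> /bin/bash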

Backing up a container's internal file-system as a tar archive is carried out using the "export" command:

$ docker export <container-name> > backup-archive.tar
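
Such an archive can later be turned back into a fresh image with the import command, where the image name is your own choice:

$ docker import backup-archive.tar <new-image-name>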

Show the internal history of an image with human readable -H values:

$ docker history -H <image-name>

To display system wide Docker info and statistics use:

$ docker -D info

Return low-level information on a container or image using inspect:

$ docker inspect

You can filter with the inspect command by adding the parameters described on the previously linked page.

The kill command sends SIGKILL to a running container; caution as usual is advised with this:

$ docker kill <container-name>

Pause and unpause all running processes in a Docker container:

$ docker pause <container-name>
$ docker unpause <container-name>

If the auto-generated names are not to your taste rename containers like this:

$ docker rename <container-name> <new-name>

Alternatively, when first creating/running a container, --name sets the name from the outset:

$ docker run --name <container-name> -d <image-name>

Enter a real-time live feed of one or more containers' resource usage stats:

$ docker stats <container-name>

Docker has its own top command for containers, to see the running processes inside:

$ docker top <container-name>

That's all for these. Some real-world examples of running images from the official Docker Hub repositories are now covered briefly, to serve as realistic examples of how you might want to use Docker and its containerisation.

Be mindful that these are not walk-throughs on fully setting up each service, but general starting points for each.


3 – Ghost Image Container

“Ghost is a free and open source blogging platform written in JavaScript.”

To pull the image itself:

$ docker pull ghost

To run a basic Ghost instance with the container's port 2368 mapped to host port 8080, use:

$ docker run --name <container-name> -p 8080:2368 -d ghost

Then access the blog via http://localhost:8080 or http://your.hosts.ip.address:8080 in a browser.

Ghost Default Blog Image

The image can also be pointed to existing Ghost content on your local host:

$ docker run --name <container-name> -v /path/to/ghost/blog:/var/lib/ghost ghost

Docker Hub - Ghost


4 – irssi Image Container

“irssi is a terminal based IRC client for UNIX systems.”

I’m not sure about the benefits of running your irssi client through Docker but to serve as another example we’ll go through the Docker Hub provided setup process:

Create an interactive shell session in a new container named whatever you choose, whilst setting an environment variable named TERM that is retrieved from the host. The user and group IDs are set with the -u option using the host's id values:

$ docker run -it --name <container-name> -e TERM -u $(id -u):$(id -g) \

Then disable the log driver to avoid storing "useless interactive terminal data":

> --log-driver=none \

Mount and bind the host's ~/.irssi config directory read-only to the internal container equivalent:

> -v $HOME/.irssi:/home/user/.irssi:ro \

Mount and bind the host's /etc/localtime file read-only to the internal container equivalent:

> -v /etc/localtime:/etc/localtime:ro \

Finally, complete the command with the name of the irssi image from Docker Hub, so that all the previous options are applied to it:

> irssi

As everyone who uses irssi has their own configuration for the program, this image does not come with any pre-sets provided, so you have to set this up yourself. Other than this, you are dropped into the irssi session within the new container.

irssi Containerised Image

Docker Hub - irssi


5 – MongoDB Image Container

“MongoDB document databases provide high availability and easy scalability.”

The standard command to pull the image and run the container is one we're familiar with by now:

$ docker run --name <mongo-container-name> -d mongo

This image is configured to expose port 27017 (Mongo’s default port), so linking other containers to it will make it automatically available.

In brief this is how to link a new container to a Mongo container named mongo-container-name. The image at the end is the application/service the new container will run:

$ docker run --name <new-container-name> --link <mongo-container-name>:mongo -d <image-name>
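
Linking also injects the Mongo container's connection details into the new container as environment variables, which can be listed with:

$ docker exec <new-container-name> env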

Using inspect with grep shows the link:

$ docker inspect nginx-container | grep -i -A1 "links"

With the output in my case being:

Output
"Links": [
    "/mongo-container:/nginx-container/mongo"

Docker Hub - MongoDB


6 – NGINX Image Container

“Nginx (pronounced “engine-x”) is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server).”

As usual like with all these images to download/pull:

$ docker pull nginx

A basic example is given of some static HTML content served from a directory (~/static-content-dir) that has been mounted onto the NGINX hosting directory within the new container:

$ docker run --name <container-name> -v ~/static-content-dir:/usr/share/nginx/html:ro -P -d nginx

Whichever port is auto-assigned to the NGINX container can be used to access the static HTML content.

Find out the port number using either docker ps or:

$ docker port <container-name>

For our purpose here we want the second line's port, which in my case is 32773 - as shown:

Output
443/tcp -> 0.0.0.0:32772
80/tcp -> 0.0.0.0:32773

http://localhost:32773 or http://your.hosts.ip.address:32773 in a browser on the localhost now returns:

32773 Port Image

The same idea works even better with a Dockerfile, located in the directory containing our static HTML content:

$ vim ~/static-content-dir/Dockerfile

Type in:

~/static-content-dir/Dockerfile
FROM nginx
COPY . /usr/share/nginx/html

Then build a new image with the Dockerfile and give it a suitable name; nginx-custom-image is what I’m using for this example:

$ docker build -t nginx-custom-image ~/static-content-dir/

If this is successful, output in this form is given:

Sending build context to Docker daemon 6.372 MB
Step 1 : FROM nginx
 ---> 5328fdfe9b8e
Step 2 : COPY . /usr/share/nginx/html
 ---> a4bf297e4dcc
Removing intermediate container 7a213493723d
Successfully built a4bf297e4dcc

All that’s left is to run the custom built image, this time with a more typical, user provided port number:

$ docker run -it --name <container-name> -p 8080:80 -d nginx-custom-image

Again accessing http://localhost:8080 or http://your.hosts.ip.address:8080 in a browser on the localhost shows the static HTML web pages:

8080 Port Image

Docker Hub - NGINX


7 – Apache httpd (2.4) Image Container

To serve static HTML content in a directory named static-content-dir on port 32755 of the local host machine we can use:

$ docker run -it --name <container-name> -v ~/static-content-dir:/usr/local/apache2/htdocs/ -p 32755:80 -d httpd:2.4

Visiting http://localhost:32755 or http://your.hosts.ip.address:32755 in a browser on the localhost then returns:

Port 32755 Image

With a Dockerfile for configuration, custom setups can be applied. Create the Dockerfile in the project directory where the static content is hosted from:

$ vim ~/static-content-dir/Dockerfile

Add lines like those below, where line 2 copies an httpd config file from the current working directory over the container's internal version, and line 3 copies the entirety of the current working directory (the static HTML files) to the Apache container's web hosting directory:

~/static-content-dir/Dockerfile
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
COPY . /usr/local/apache2/htdocs/

Note: If the my-httpd.conf configuration file is missing, the next command to build the image will fail.

Build the new custom Apache image defined in the Dockerfile and give it the name custom-apache-image which you can of course change if you like:

$ docker build -t custom-apache-image ~/static-content-dir/

Successful output for the image build sequence looks like this (or similar):

Output
Sending build context to Docker daemon 6.372 MB
Step 1 : FROM httpd:2.4
 ---> 1a49ac676c05
Step 2 : COPY . /usr/local/apache2/htdocs/
 ---> f7052ffe8190
Removing intermediate container 53311d3ac0a5
Successfully built f7052ffe8190

Lastly, start and run a new container using the custom generated image on port 32756 of the localhost machine:

$ docker run -it --name <container-name> -p 32756:80 -d custom-apache-image

Visiting http://localhost:32756 or http://your.hosts.ip.address:32756 in a browser on the localhost now returns:

Port 32756 Image

Docker Hub - httpd


8 – Jenkins Image Container

Create a new directory in your user’s home directory for the Jenkins config files. This will be mounted and mapped to the container’s equivalent configuration space:

$ mkdir ~/jenkins_home

Run the Jenkins image, mapping the two internal ports to ephemeral ports on the host side, whilst syncing the config directory we just created to the new container:

$ docker run --name <container-name> -p 32790:8080 -p 32791:50000 -v ~/jenkins_home:/var/jenkins_home -d jenkins

Jenkins can be seen at the first port number we mapped. In my example it was 32790 meaning a URL of http://localhost:32790 or http://your.hosts.ip.address:32790 in a browser takes us to the Jenkins application page:

Jenkins on Port 32790

Docker Hub - Jenkins


Remember that there are unofficial image repositories to be found on Docker Hub too, and potentially elsewhere when made available.

The third post on Docker talks a bit more about administration with Docker. As well as details based around how to network containers together.

Links to subsequent Docker posts can be found on the Trades page.


– Scarlz: @5car1z

Docker - Installing and Running (1)

Docker Logo

Preamble

Docker is an open-source project that enables a Linux application along with its dependencies to be packaged as a container. This helps enable flexibility and portability around where an application can be run and served. These dedicated low level “containers” also improve performance and overhead through using resource isolation features of the Linux kernel. On a large scale level its strength comes from the automation and deployment of software applications and services.

This post covers the fundamentals of Docker and aims to demonstrate how to understand and work with the basics using the Linux command line.


1 – Installing Docker

The first "main method" in this step ensures you have an up-to-date version of Docker installed on your system, and is the recommended route to installing Docker. It uses an install script from one of Docker's official domains, and is suggested by the developers.

Main Method

“Docker requires a 64-bit installation regardless of your Debian/Ubuntu version. Additionally, your kernel must be 3.10 at minimum. The latest 3.10 minor version or a newer maintained version is also acceptable.”

Install using either:

$ wget -qO- https://get.docker.com/ | sh

Or:

$ curl -sSL https://get.docker.com/ | sh

Then start the docker service:

Arch Linux

$ sudo systemctl start docker
$ sudo systemctl enable docker

Debian / Ubuntu

$ sudo service docker start

Package Managers Method

It’s advised to use the first “main method” of installing Docker in order to always install the latest version, and not to use the build packages included in your Linux system’s package manager.

Here's how to install up-to-date package versions manually if you do wish to do so, however.

Arch Linux

Using pacman package manager:

$ sudo pacman -S docker

Using an AUR helper like yaourt, you can get access to another package that is built off of the Docker Git master branch:

$ sudo yaourt -S docker-git

Remember to start the docker service:

$ sudo systemctl start docker
$ sudo systemctl enable docker

More information can be found at:

Docker Installation Documentation - Arch Linux

Debian

For Debian to get an up to date version of Docker you must add and update the apt repository by following the steps here:

Docker Installation Documentation - Debian

Then run:

$ sudo apt-get update
$ sudo apt-get install docker-engine
$ sudo service docker start

Ubuntu

For Ubuntu to get an up to date version of Docker you must add and update the apt repository, and set up any of the relevant “prerequisites” by following the steps here:

Docker Installation Documentation - Ubuntu

Then run:

$ sudo apt-get update
$ sudo apt-get install docker-engine
$ sudo service docker start

2 – Docker User Group

After installing Docker a message is returned that reads:

Output
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

sudo usermod -aG docker <username>

Remember that you will have to log out and back in for this to take effect!

Following this allows you to run Docker as a non-root user - with the user you add to the group. While this may seem like a great idea, be aware of the potential implications involved in adding users to this group.

The Docker daemon requires root privileges to carry out its work, so only trusted users and user accounts should be added to this group and given control of the Docker daemon.

Read this page on Docker security for reasoning and a better explanation of why this is potentially a security risk:

Docker Daemon Attack Surface

Once you understand this, if you want to add your Linux user to the docker group use the next command.

Where scarlz is replaced by the username you wish to add.

$ sudo usermod -aG docker scarlz

  • Now log out and back into your Linux system.

The rest of this tutorial assumes you are entering the commands in each step as a user that has been added to the docker group. If you did not do this, simply prepend sudo to each Docker command where necessary.


3 – Containers & Images

Docker uses the concept of containers to provide the benefits and functionality mentioned in the preamble. A container is a very lightweight, bare-bones, stripped-down version of an operating system (a Linux OS), containing only the essential parts for whatever purpose it needs to serve.

Images are loaded into a container and are the service or program you want to be run as a docker process. This could be anything from personally created custom images to official web servers, databases, etc. The official images of these are held by “Docker Hub” which is explained more in an upcoming section.

Enter this command to run an example “hello world” image in a Docker container:

$ docker run hello-world

This is what the hello-world image should have output:

Output
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/userguide/

Here’s also what happened internally with the previous command:

hello-world Docker Image

  • Docker checked to see if the hello-world software image was present on the host machine’s local file-system.
  • The hello-world image was not found so Docker downloaded the image from Docker Hub.
  • After it finished downloading it was then loaded as an image into a container and “run” successfully.

Images can be very simple (like this one) or can be designed to carry out more complex high-level tasks.

It can also be tempting to think of containers as “lightweight” virtual machines, and although this is generally a good analogy to some small degree, it does not account for and explain everything Docker containers are about.


4 – Docker Hub

Who built the “hello-world” software image? Docker did, but anyone can and is welcome to contribute to the online catalogue of Docker images known colloquially as Docker Hub.

Docker Hub Official Repositories

The Docker Hub houses most of the more familiar and popular software and services you're accustomed to. We will look at one or two of these later.

For now, pull a second example image from Docker Hub by typing whalesay into the search bar on the Docker Hub website, and then finding the official docker/whalesay image.

The page for it looks like this:

Whalesay Dock Hub Page

Image descriptions, pull commands, instructions, and owners are always listed here for any type of image.

Run the command as shown by the page:

$ docker run docker/whalesay cowsay boo

This runs the image in a container, resulting in:

Output
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
Unable to find image 'docker/whalesay:latest' locally
latest: Pulling from docker/whalesay

2880a3395ede: Pull complete
515565c29c94: Pull complete
98b15185dba7: Pull complete
2ce633e3e9c9: Pull complete
35217eff2e30: Pull complete
326bddfde6c0: Pull complete
3a2e7fe79da7: Pull complete
517de05c9075: Pull complete
8f17e9411cf6: Pull complete
ded5e192a685: Pull complete
Digest: sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b
Status: Downloaded newer image for docker/whalesay:latest
 _____ 
< boo >
 ----- 
    \
     \
      \     
                    ##        .            
              ## ## ##       ==            
           ## ## ## ##      ===            
       /""""""""""""""""___/ ===        
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~   
       \______ o          __/            
        \    \        __/             
          \____\______/   

Furthermore, here’s the Dockerfile (the image’s build instructions) for the whalesay image we just ran:

Dockerfile
1
2
3
4
5
6
7
8
9
10
11
12
13
FROM ubuntu:14.04

# install cowsay, and move the "default.cow" out of the way so we can overwrite it with "docker.cow"
RUN apt-get update && apt-get install -y cowsay --no-install-recommends && rm -rf /var/lib/apt/lists/* \
&& mv /usr/share/cowsay/cows/default.cow /usr/share/cowsay/cows/orig-default.cow

# "cowsay" installs to /usr/games
ENV PATH $PATH:/usr/games

COPY docker.cow /usr/share/cowsay/cows/
RUN ln -sv /usr/share/cowsay/cows/docker.cow /usr/share/cowsay/cows/default.cow

CMD ["cowsay"]
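
As an aside, a Dockerfile like this is built into a runnable image with the docker build command, run from the directory containing the file. The -t option tags the resulting image with a name of your choosing - my-whalesay here is purely illustrative:

1
$ docker build -t my-whalesay .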

It’s also possible to search for images by keyword from the command line:

1
$ docker search ubuntu

Like the website search, this shows you a list of the currently available public repositories on Docker Hub that match the provided keyword.

Some final considerations for Docker Hub are:

  • You can sign up for a free Docker Hub account to upload your own Docker images. Private repositories for organisation-wide images also exist.

  • Automated Builds allow you to auto-create new images when you make changes to a targeted source, GitHub repo, or Bitbucket repo.

  • Inbuilt webhooks let you trigger actions after a successful push to a repository (or successful automated build).

  • General GitHub and Bitbucket integration adds the Hub and Docker images to your current workflows.


5 – Running Docker Containers

In this step there are more containers to run, along with the specifics of how Docker’s command structure works.

To start with run yet another example command:

1
$ docker run ubuntu:14.04 /bin/echo 'Hello world'

In this last command, docker run starts us off by creating a brand new container, just like before. This time, though, the image we asked for is not just a program/service but ubuntu:14.04 - an entire Linux OS environment. Bear in mind the container description from earlier!

As we don’t have this Docker image locally (an OS environment is still classed and used as an image by Docker), it must be downloaded from Docker Hub.

After this we asked Docker to run a shell command inside the new container’s environment, which was:

/bin/echo 'Hello world'

We then saw the result of this whole sequence as output on the command line.

Output
1
Hello world

Docker containers only run for as long as the command you specify is active, so as soon as 'Hello world' was echoed, the container stopped.

Try this next command, which adds some options and calls a shell.

1
$ docker run -t -i ubuntu:14.04 /bin/bash

The -t option assigns a pseudo-TTY or terminal inside the new container.

The -i option allows us to make an interactive connection by grabbing the standard input (STDIN) of the container. The two options are commonly combined as -it.

As we’ve called the bash shell, this takes us to a bash command prompt inside of the container.

The usual commands will work in this shell and file-system. It’s just a contained version of a regular Ubuntu OS environment.
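
For example, a few illustrative commands inside the session (the container ID shown in the prompt is a placeholder, and output is omitted):

1
2
3
root@<container-id>:/# ls /
root@<container-id>:/# apt-get update
root@<container-id>:/# cat /etc/lsb-release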

Use exit or CTRL + D to end the session, thereby also ending and stopping this container.

Although these are useful examples, most of the time it’s more common to use Docker with “daemonised” programs or services.

Here’s how this looks with the “Hello World” example:

1
$ docker run -d ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done"

The differences here are the -d command line option and the while loop.

The -d option tells Docker to run the container but put it in the background - to “daemonise” it. The loop repeatedly echoes “hello world” inside the container, with a short delay between each iteration.

You will have noticed that, as it’s been daemonised, instead of seeing any output a long hexadecimal string has been returned:

Output
1
f54eb8a426307e63684040eee69e0a6cf43859bee08c3f6a9086b195213df052

This is known as a container ID; it uniquely identifies a container instance so we can work with it.
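
Incidentally, rather than letting Docker auto-assign a container name (covered in the next step), an explicit name can be set at creation time with the --name option - hello-loop below is just an illustrative choice:

1
$ docker run -d --name hello-loop ubuntu:14.04 /bin/sh -c "while true; do echo hello world; sleep 1; done"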

This next step shows how you can monitor, examine, and find out what’s going on inside of daemonised containers.


6 – Docker CLI Client

We can use the container ID to see what’s happening inside of our daemonised hello world container.

First let’s list all of the currently running containers and their details:

1
$ docker ps

The docker ps command queries the Docker daemon for information about all of the running containers it manages.

Output
1
2
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
f54eb8a42630        ubuntu:14.04        "/bin/sh -c 'while tr"   11 minutes ago      Up 10 minutes                           amazing_jepsen

Note the shortened container ID in the first column, and the auto-assigned name in the end column.
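
Note also that docker ps only lists running containers; adding the -a flag includes stopped ones as well:

1
$ docker ps -a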

To check on the inner workings of the container and see if our earlier “hello world” loop is still running use:

1
$ docker logs amazing_jepsen

Note: You can use either the container ID or name as a parameter.

The output confirms it’s working as intended:

Output
1
2
3
4
5
6
7
8
9
10
11
12
13
14
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
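
Two useful variations here, both standard Docker CLI behaviour: any unique prefix of the full container ID is accepted in place of the name, and the -f (follow) option streams new log lines as they appear, much like tail -f:

1
2
$ docker logs f54e
$ docker logs -f amazing_jepsen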

Lastly, stopping a running container is achieved by using:

1
$ docker stop amazing_jepsen

Tab autocompletion works when entering the container name in this last command too.

To start the container again, simply replace stop in the last command with start:
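
1
$ docker start amazing_jepsen

To delete the container entirely, use: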

1
$ docker rm amazing_jepsen

Note: Removing a container is permanent; once it’s deleted, it’s gone.

So the container from before is now gone. However, the Ubuntu image we have been using is still present on the system.

To see which images have been downloaded and are still present, enter:

1
$ docker images

Which will give a similar output to:

Output
1
2
3
4
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu              14.04               89d5d8e8bafb        13 days ago         187.9 MB
hello-world         latest              0a6ba66e537a        9 weeks ago         960 B
docker/whalesay     latest              ded5e192a685        7 months ago        247 MB

To delete and remove images, use the rmi command with the image name (plus tag) as a parameter, e.g.

1
$ docker rmi ubuntu:14.04

Autocomplete works for the image names, and multiple names can be passed as parameters, as shown after the next output block.

A successful deletion gives output similar to:

Output
1
2
3
4
5
Untagged: ubuntu:14.04
Deleted: 89d5d8e8bafb6e279fa70ea444260fa61cc7c5c7d93eff51002005c54a49c918
Deleted: e24428725dd6f8e354a0c6080570f90d40e9e963c6878144291c6ba9fd39b25f
Deleted: 1796d1c62d0c3bad665cc4fbe4b6a051e26c22f14aa5e0e2490e528783764ca0
Deleted: 0bf0561619131d3dc0432a2b40a9438bd48f4a84e89ff128cc5147a089c114e4
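
To remove several images in one go, pass multiple names together - this example uses the other two images from the earlier listing:

1
$ docker rmi hello-world docker/whalesay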

If an error message is returned claiming another container is still using the image as a reference, and you know this is not the case, try using the -f (force) option with the rmi removal command:

1
$ docker rmi -f ubuntu:14.04

With this we complete the cycle of downloading, creating, and removing an image and its containers.


Docker is an amazing step forward in the world of virtualisation and resource efficiency, but the examples here are for demonstration purposes only and are meant to teach the basics. As much as possible, try to think of how this concept could benefit a large hosting provider, or a business that serves web applications to many users.

Post number two on Docker goes into more detail on administering and handling containers, with some examples of real services and apps running under Docker.

Links to subsequent Docker posts can be found on the Trades page.

More Information

Easily deploy an SSD cloud server on Digital Ocean in 55 seconds. Sign up using my link and receive $10.00 in free credit: https://www.digitalocean.com/?refcode=e91058dbfc7b

– Scarlz: @5car1z