Tricks of the Trades

How to Install and Get Started with Vagrant in 2017

Vagrant Logo

Preamble

Despite its age and familiarity to most people nowadays, I couldn’t find a straightforward post on how to install and get started using Vagrant. So here are my notes on doing so, in blog post format. Be aware that this is well-trodden ground, and the Vagrant documentation on their website has a similar set of steps and content. The official site, if not this post, will get you where you need to be when it comes to getting started with Vagrant.

Official Vagrant Website - Getting Started


1 – Install VirtualBox

Our provider choice will be VirtualBox. The provider is the software in charge of creating and then managing the virtual machines commissioned by Vagrant. The two major providers are VirtualBox and VMware; VirtualBox is free and open source, whereas VMware is not.

Find the correct installation procedure for your flavour of Linux here.

On Ubuntu you would add this line to the bottom of your sources.list file:

deb http://download.virtualbox.org/virtualbox/debian xenial contrib

Replace xenial with your own distribution’s release codename. Open the file to add the line with:

$ sudo vim /etc/apt/sources.list

If you don’t already know this codename, you can find it by running this command at the prompt:

lsb_release -a
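
On Ubuntu 16.04, for example, the relevant line of the output reads:

Output
Codename:       xenial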

For Debian 8 (“Jessie”), Ubuntu 16.04 (“Xenial”), and later distributions, download and add the repository’s PGP key:

wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -

Update the apt package database and install the VirtualBox package:

$ sudo apt-get update
$ sudo apt-get install virtualbox

VirtualBox is now installed and ready to use.


2 – Install Vagrant

Find the correct binary for your version of Linux, then download it using its URL with wget.

Here’s the wget command with the correct URL for downloading the latest version of Vagrant on Debian at the time of writing - yours may differ:

$ wget -P ~ https://releases.hashicorp.com/vagrant/1.9.4/vagrant_1.9.4_x86_64.deb

To then install the binary as a package on the system, use:

$ sudo dpkg -i vagrant_1.9.4_x86_64.deb

Once it’s been installed, you can remove the Vagrant .deb file from your user’s home directory.


3 – Download and Use a Vagrant Box

Make a temporary test directory, and change into it, before continuing.

$ mkdir ~/vagrant-test && cd ~/vagrant-test

To test the install, you can download and run a basic Vagrant box as a VM by running the next set of commands.

So we’re clear, here’s a good definition of what a Vagrant “box” actually is:

“A package containing a representation of a virtual machine running a specific operating system. To be more simple, it is a base image of any Operating System or Kernel. It may be for a specific Provider.”

The box is the image, and from this image a virtual machine (VM) is created on the localhost.

The basic Vagrant configuration for this VM will be based in one file, the Vagrantfile.

This file is placed in the ~/vagrant-test directory via:

$ vagrant init ubuntu/xenial64
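
The generated Vagrantfile is mostly explanatory comments. Stripped of those, the active configuration amounts to just a couple of lines - a minimal sketch of the defaults:

Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
end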

There is a wide variety of box types (various OS images) listed on HashiCorp’s Atlas index.

After issuing the next command, Vagrant will start to download the box and attempt to create and run a VM through VirtualBox.

$ vagrant up

Here’s an example of what the progress output looks like for this:

Output
==> default: Box 'ubuntu/xenial64' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
==> default: Loading metadata for box 'ubuntu/xenial64'
    default: URL: https://atlas.hashicorp.com/ubuntu/xenial64
==> default: Adding box 'ubuntu/xenial64' (v2017.05.01) for provider: virtualbox
    default: Downloading: https://vagrantcloud.com/ubuntu/boxes/xenial64/versions/2017.05.01/providers/virtualbox.box
==> default: Successfully added box 'ubuntu/xenial64' (v2017.05.01) for 'virtualbox'!
==> default: Importing base box 'ubuntu/xenial64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/xenial64' is up to date...
==> default: Setting the name of the VM: vagrant-testing_default_1494195673719_66642
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: The guest additions on this VM do not match the installed version of
    default: VirtualBox! In most cases this is fine, but in rare cases it can
    default: prevent things such as shared folders from working properly. If you see
    default: shared folder errors, please make sure the guest additions within the
    default: virtual machine match the version of VirtualBox you have installed on
    default: your host and reload your VM.
    default:
    default: Guest Additions Version: 5.1.22 r115126
    default: VirtualBox Version: 5.0
==> default: Mounting shared folders...
    default: /vagrant => /home/scarlz/vagrant-testing

You can get an error message here relating to CPU architecture if you use a box that isn’t built for your host machine.

For example, a 64-bit box requires a 64-bit host operating system, and a 32-bit box a 32-bit one. The “host” here refers to the machine you installed Vagrant on.

In my example we used a 64-bit box.

Also, if you are running Vagrant itself inside a virtual machine (under a hypervisor), you’ll need to ensure the hypervisor has “VT-x/AMD-V” enabled.

To enable this you’ll have to do something along the lines of:

  1. Power off the host virtual machine.
  2. Edit the individual virtual machine’s settings.
  3. Go to the CPU/processors section.
  4. Enable “VT-x/AMD-V” / “Virtualise Intel VT-x/EPT and AMD-V/RVI”
  5. Then power on the virtual machine again.
  6. Re-run vagrant up in your Vagrant testing directory.

Here is what the setting looks like when using VMware Workstation as your hypervisor.

Vmware CPU Section Image


4 – Connect to a Running VM

Once a box is installed and configured to run in a VM (as in step 3), you connect to the VM through an SSH tunnel created by Vagrant.

To connect to the newly running VM with Vagrant use:

$ vagrant ssh

The prompt now shows you are connected to your new VM!

prompt example
ubuntu@ubuntu-xenial:~$

Type exit or use CTRL + D to leave the VM’s command line and return to your host.


5 – Vagrant Sub-commands

These are the commands you’ll find yourself using most when working with Vagrant. Some of them take subsets of subcommands - which may seem confusing at first glance. The first, box, has several; not all of the commands have them, however.

box

List all the boxes you currently have installed on the host.

$ vagrant box list
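
With the box from earlier installed, the listing looks something like this (the version number shown is illustrative and will vary):

Output
ubuntu/xenial64 (virtualbox, 20170505.0.0)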

Remove an existing box from Vagrant.

$ vagrant box remove ubuntu/xenial64

Check for updates to all box images on your system.

$ vagrant box update

Many of these commands can have a box name appended to them, in order to single out a specific box.

destroy

The Vagrant documentation sums this command up pretty well:

“This command stops the running machine Vagrant is managing and destroys all resources that were created during the machine creation process. After running this command, your computer should be left at a clean state, as if you never created the guest machine in the first place.”

Use it to destroy your created virtual machines, running it from the directory holding the Vagrantfile, e.g.

$ vagrant destroy default

Here default is the machine name, as shown by vagrant status; it can be omitted when the environment only has one machine.

halt

This command shuts down the running virtual machine Vagrant is currently managing; you can add a machine name/ID to target specific VMs.

$ vagrant halt

reload

This is the same as vagrant halt but the VM is booted again after halting - as with vagrant up.

$ vagrant reload

port

This allows you to list all of the Vagrant guest ports that are mapped to ports on the host.

$ vagrant port
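
For the single test VM here, this returns the SSH forwarding that vagrant up set up earlier:

Output
    22 (guest) => 2222 (host)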

ssh-config

Displays the SSH configuration Vagrant uses on the host side to connect to the VM.

$ vagrant ssh-config

Returns:

Example Output
Host default
  HostName 127.0.0.1
  User ubuntu
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/scarlz/vagrant-ubuntu-test/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL

6 – Miscellaneous

Should there ever be any SSH connection issues to a VM, the connection log can be seen by appending --debug to the command.

$ vagrant ssh --debug

Note: This --debug flag can be added onto most Vagrant commands to see the internal operations being carried out.

Checking the status of the current Vagrant virtual machine is possible by entering:

$ vagrant status

A global version also exists.

$ vagrant global-status
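
Trimmed example output; the id values are random and will differ on your machine:

Output
id       name     provider    state     directory
-----------------------------------------------------------------
fb3bd62  default  virtualbox  running   /home/scarlz/vagrant-test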

Adding the --prune flag updates the cache for this, removing any old, dead entries from the output.

$ vagrant global-status --prune

Looking back at the Vagrantfile configuration, we can see that there are different options on offer to configure the resultant VM(s).

One to highlight is the VM name, which is assigned separately to the provider (VirtualBox) and to the internal Vagrant machine “name”.

This is the code to explicitly define it in both instances, if you ever want to:

Vagrantfile
Vagrant.configure("2") do |config|

  config.vm.define "ubuntu_test_vm" do |vmname|
  end

  config.vm.provider :virtualbox do |vb|
    vb.name = "ubuntu_test_vm"
  end

end

The config.vm.define block determines the “name” listed when issuing: vagrant global-status

Whilst the vb.name setting ensures VirtualBox names and displays the VM properly in its GUI.

VirtualBox Ubuntu VM Image

How the Vagrantfile works in terms of configuration is described in detail here.


7 – Autocompletion

A nice addition to Vagrant is shell auto-completion (for the Bash shell) when typing in the above commands. An up-to-date (at the time of writing) repo which provides this is located here:

https://github.com/brbsix/vagrant-bash-completion

This is a fork of Kura’s old repo; thanks go to him for maintaining it up until now. Here’s the provided “easiest” method of adding this functionality to your Linux/Unix host system.

Download the completion script from the above repo with wget:

$ wget -q https://raw.github.com/brbsix/vagrant-bash-completion/master/vagrant-bash-completion/etc/bash_completion.d/vagrant

Install the newly downloaded file into the system Bash completion directory, setting the file’s permissions in the process:

$ sudo install -m 0644 vagrant /etc/bash_completion.d/

Now either close and re-open your terminal, or source in the new /etc/bash_completion.d/vagrant file, to get the auto-completion working.


8 – Further Reading

Erika Heidi has recently revisited and updated her great in-depth book dedicated to Vagrant. For a full rundown of Vagrant and how to add configuration management tools into the mix, I’d highly recommend this book.

Cookbook - Frontcover

https://leanpub.com/vagrantcookbook

There’s an accompanying blog post from February 2017 that she’s put together on recent Vagrant usage and trends. It’s quite short and worth reading if you’re interested.

http://www.erikaheidi.com/blog/vagrant-usage-research-2017/

The infographic (which I’ll leave here) is the main takeaway from the post:

The State of Vagrant Infographic


Vagrant is slightly ageing software in the sense that many now prefer more recent tools like Docker. It does however still have its uses and is quite well adopted these days, so it’s more than worth understanding at least the basics.

Enjoy your time with Vagrant.


More Information

Easily deploy an SSD cloud server on Digital Ocean in 55 seconds. Sign up using my link and receive $10.00 in free credit: https://www.digitalocean.com/?refcode=e91058dbfc7b

– Scarlz: @5car1z

Docker - Building Images and Docker Hub (5)

Docker Logo

Preamble

Docker images can be thought of as blueprints and house the software or files required to run your application inside of a container. So far in these Docker posts all container images have been pulled from an online source and no real interaction with the images themselves has been explored.

However, in this post we’re taking a very simple Python Flask application and going through the process of dockerising it. In non-jargon terms, that means configuring and creating our own custom Docker image, then running it in a container like any other image. This usually also involves uploading the image to Docker Hub for others to pull down and use, so that is covered in the guide too.

The Docker - Data Volumes and Data Containers (4) post that comes before this one is mostly unrelated, so it is not really a requirement for this post, but it is still worth checking out overall.


1 – Clone the Repository

The example application used in this post is named “Flaskr” and serves as a very simple messaging board. It allows a user to sign in/out, add new written entries to the message board displayed, and does all this using SQLite as the database backend.

Clone this example application and its code locally.

$ git clone https://github.com/5car1z/docker-flaskr.git ~/docker-flaskr

Change your working directory to the new repository.

$ cd ~/docker-flaskr

Take a quick glance at the files using:

$ ls

Which returns:

Output
flaskr.py  flaskr_settings  README  requirements.txt  schema.sql  static  templates  test_flaskr.py

Then move onto the next step.


2 – Configure the Application

Most Flask or Python projects contain a file that holds circumstantial configuration values. These settings differ from user to user, and between development and production environments. To run our Flaskr application and build a successful Docker image later on, we must set the values in this file beforehand.

Open the flaskr_settings file with your preferred text editor.

$ vim flaskr_settings

The first two lines containing the database file location and debug status should remain as they are. There is no need to change these for this scenario.

flaskr_settings
# configuration
DATABASE = 'flaskr.db'
DEBUG = False

Generating a secret key for the third line here is easiest using a Python console.

flaskr_settings
SECRET_KEY = ''

Back on the command line outside of the editor run:

$ python

Import the OS module.

>>> import os

Run the associated OS function for generating a string of random bytes (urandom).

>>> os.urandom(24)

A 24 byte value is returned as output for use as the secret key in the flaskr_settings file. The key value shown here is for demonstration purposes only.

Example Key Output
'\xebqD\x0f\xf3\xcf\xaa\x9e]%\x86\xd7\x11h\x8f\xa3\xa6\xbb=\xf7m\xf2{\xfd'
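
Alternatively, the same kind of key can be generated in one line from the shell, without opening the Python console:

$ python -c 'import os; print(os.urandom(24))'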

Copy your own secret key value into the third line of the flaskr_settings file - you can exit the Python console by pressing CTRL + D once the key has been retained.

$ vim flaskr_settings
flaskr_settings
SECRET_KEY = '\xebqD\x0f\xf3\xcf\xaa\x9e]%\x86\xd7\x11h\x8f\xa3\xa6\xbb=\xf7m\xf2{\xfd'

Note: Only one set of enclosing quotes is required: ''

On the last two lines of the configuration file provide a username and password. These details are used for authentication when logging into the app after it is up and running.

Add in your own values.

flaskr_settings
USERNAME = 'username'
PASSWORD = 'password'

Save your changes to the flaskr_settings file before continuing, and exit the file.

My example entries and file look like this when completed:

flaskr_settings
# configuration
DATABASE = 'flaskr.db'
DEBUG = False
SECRET_KEY = '\xebqD\x0f\xf3\xcf\xaa\x9e]%\x86\xd7\x11h\x8f\xa3\xa6\xbb=\xf7m\xf2{\xfd'
USERNAME = 'scarlz'
PASSWORD = 'password'

3 – Create the Dockerfile

The build process and configuration parameters for our eventual Docker image get defined in a new file named the “Dockerfile”.

Create the new Dockerfile using your text editor again, and place each of the upcoming actions on its own separate line.

$ vim Dockerfile

Tell Docker to use the official Python 2.7 image as a base for our own custom image, on the first line.

Dockerfile
FROM python:2.7

Define an environment variable that tells Flaskr the name of the configuration file we completed earlier.

Dockerfile
ENV FLASKR_SETTINGS flaskr_settings

Add the requirements.txt file to the file-system of the image we are creating.

Dockerfile
ADD requirements.txt /tmp/requirements.txt

Install the Flaskr application dependencies onto this image - sourced in from the “requirements” file.

Dockerfile
RUN pip install -r /tmp/requirements.txt

Add the current working directory . of the project and its contents to a new directory on the image’s file-system.

Dockerfile
ADD . /flaskr-application

Set the image’s file-system working directory to the one we just added:

Dockerfile
WORKDIR /flaskr-application

Open port 5000 on the container so we can map it to a host port later.

Dockerfile
EXPOSE 5000

Run the Flaskr app on this image, once the container is launched by the user.

Dockerfile
CMD ["python", "flaskr.py", "--host", "0.0.0.0", "--port", "5000"]

The Dockerfile in full:

Dockerfile
FROM python:2.7
ENV FLASKR_SETTINGS flaskr_settings
ADD requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt
ADD . /flaskr-application
WORKDIR /flaskr-application
EXPOSE 5000
CMD ["python", "flaskr.py", "--host", "0.0.0.0", "--port", "5000"]

Make sure your own file’s contents matches the above, and then save the changes.


4 – Build and Run the Image

Using Docker (which you should already have installed) we’re going to build the custom Flaskr image configured in the previous step.

When entering this next command, be aware that the parameters of -t should ideally be replaced with your own username and preferred image name. These details are used later on when registering the image externally.

$ docker build --no-cache -t scarlz/flaskr-application .

Note: The -t option assigns a tag to the image used by Docker Hub or an image registry service.

Give the build process a few minutes to download and carry out the necessary operations, noting its progress via the output.

If the Dockerfile was configured properly in the previous step you’ll get a final output similar to:

Output
Successfully built c4e546ed282d

Run the newly built Docker image in a daemonised container, mapping the internal container port 5000 to the host port 32775 so we can view the app locally.

$ docker run --name flaskr-container -p 32775:5000 -d scarlz/flaskr-application

Confirm the container is up and running.

$ docker ps

Running container details are returned:

CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                      NAMES
c3937994e66b        scarlz/flaskr-application   "python flaskr.py --h"   3 seconds ago       Up 3 seconds        0.0.0.0:32775->5000/tcp    flaskr-container
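
If the container isn’t listed here, its logs will usually show why the application failed to start; using the container name from above:

$ docker logs flaskr-container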

Preview the Flaskr app in your web browser by visiting:

http://localhost:32775

Flaskr Homepage Image

Log in to the application with the authentication details from step 2 if you wish to test it out.

Flaskr Internal Form Image


5 – Push the Image to Docker Hub

Once images have been built and tested successfully you may want to make them accessible to others through a public or private Docker registry service. The official registry service open to all provided by Docker is known as “Docker Hub”.

Create a free account by registering with the service at the previous link, then go back to the command line and log in using:

$ docker login

Enter the authentication details you used to sign up to the service as prompted.

Output
Username: 
Password:
Email:
Login Succeeded

Once successfully authenticated, push the image you created earlier to Docker Hub by providing the “tag” name assigned to it:

$ docker push scarlz/flaskr-application:latest

The :latest suffix ensures the most recently built version of the image is sent.
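
Once the push completes, anyone can fetch the image by the same tag name - swapping in your own username, of course:

$ docker pull scarlz/flaskr-application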

Successfully pushing the image will return an output akin to:

77e39ee82117: Image successfully pushed

You can see the example image I pushed to Docker Hub at:

https://hub.docker.com/r/5car1z/flaskr-application

Docker Hub - Flaskr


There are many more practices and nuances not covered here when it comes to building, tagging, and pushing to Docker registries, but hopefully this serves as a simple example of how the process can be carried out. Something to bear in mind is that Docker Hub has in the past been given bad press in terms of performance (most notably speed), although the service continues to improve as time goes on. It is for these reasons or similar that many use third-party private registry platforms in its place, e.g. Portus.

The next and final post in this series takes a brief glance at some of the extra platforms/toolsets that form up more of the Docker eco-system.

Links to subsequent Docker posts can be found on the Trades page.


Docker - Data Volumes and Data Containers (4)

Docker Logo

Preamble

This blog post is becoming more and more outdated as time goes on, it would be better to consult the official Docker documentation for this kind of thing!

Docker containers are a lot more scalable and modular once they have the links in place that allow them to share data. How these links are created and arranged depends upon the arranger, who will choose either to create a file-system data volume or a dedicated data volume container.

This post works through these two common choices - data volumes and data volume containers - with consideration of the commands involved in backing up, restoring, and migrating said data volumes.

This is post four on Docker following on from Docker - Daemon Administration and Networking (3). Go back and read the latter half of that post to see how to network containers together so they can properly communicate back and forth - if you need to.


1 – Creating Data Volumes

A “data volume” is a marked directory inside of a container that exists to hold persistent or commonly shared data. Assigning these volumes is done when creating a new container.

Any data already present as part of the Docker image in a targeted volume directory is carried forward into the new container and not lost. This however is not true when mounting a local host directory (covered later) as the data is temporarily covered by the new volume.

You can add a data volume to a container using the -v flag in conjunction with the create or run command. You can use the -v flag multiple times to mount multiple data volumes.

This next command will create a data volume inside a new container in the /webapp directory.

$ docker run -d -P --name test-container -v /webapp training/webapp python app.py
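
To confirm where Docker placed this volume on the host file-system, you can inspect the container’s mounts - a quick check using the container name from above:

$ docker inspect --format='{{json .Mounts}}' test-container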

Data volumes are very useful as once designated and created they can be shared and included as part of other containers. It’s also important to note that any changes to data volumes are not included when you update an image, but conversely data volumes will persist even if the container itself is deleted.

Note: The VOLUME instruction in a Dockerfile will add one or more new volumes to any containers created from the image.

This preservation is due to the fact that data volumes are meant to persist independent of a container’s life cycle. In turn this also means Docker never garbage collects volumes that are no longer in use by a container.


2 – Creating Host Data Volumes

You can instead mount a directory from your Docker daemon’s host into a container; you may have seen this used once or twice in the previous posts.

Mounting a host directory can be useful for testing. For example, you can mount source code inside a container. Then, change the source code and see its effect on the application in real time. The directory on the host must be specified as an absolute path and if the directory doesn’t exist Docker will automatically create it for you.

The next example command mounts the host directory /src/webapp into the container at the /opt/webapp directory.

$ docker run -d -P --name test-container -v /src/webapp:/opt/webapp training/webapp python app.py

Some internal rules and behaviours for this process are:

  • The targeted container directory must always take an absolute full file-system path.

  • The host source directory can be either an absolute path or a name value.

  • If the targeted container path already exists inside the container’s image, the host directory mount overlays but does not remove the destination content. Once the mount is removed, the destination content is accessible again.

Docker volumes default to mounting in read-write mode, but you can set them to mount as read-only if you like.

Here the same /src/webapp directory is linked again but the extra :ro option makes the mount read-only.

$ docker run -d -P --name web -v /src/webapp:/opt/webapp:ro training/webapp python app.py

Note: It’s not possible to mount a host directory using a Dockerfile because by convention images should be portable and flexible, and a specific host directory might not be available on all potential hosts.


3 – Mounting Individual Host Files

The -v flag used so far can target a single file instead of an entire directory from the host machine. This is done by mapping the specific file on each side of the container.

A great interactive example of this that creates a new container and drops you into a bash shell with your bash history from the host, is as follows:

$ docker run --rm -it -v ~/.bash_history:/root/.bash_history ubuntu /bin/bash

Furthermore, when you exit the container, the host version of the file will have the commands typed from the inside of the container written to the .bash_history file.


4 – Creating Dedicated Data Volume Containers

A popular practice with Docker data sharing is to create a dedicated container that holds all of your persistent shareable data resources, mounting the data inside of it into other containers once created and setup.

This example taken from the Docker documentation uses the PostgreSQL training image as a base for the data volume container.

$ docker create -v /data-store --name data-store training/postgres /bin/true

Note: /bin/true simply returns 0 and does nothing, serving as a harmless placeholder command for the container.

The --volumes-from flag is then used to mount the /data-store volume inside of other containers:

$ docker run -d --volumes-from data-store --name database-container-1 training/postgres

This process is repeated for additional new containers:

$ docker run -d --volumes-from data-store --name database-container-2  training/postgres

Be aware that you can use multiple --volumes-from flags in one command to combine data volumes from multiple other dedicated data containers.

An alternative idea is to mount the volumes from each subsequent container to the next, instead of the original dedicated container linking to new ones.

This forms a chain that would begin by using:

$ docker run -d --name database-container-3 --volumes-from database-container-2  training/postgres

Remember that if you remove containers that mount volumes, the volume store and its data will not be deleted; Docker preserves it.

To fully delete a volume from the file-system you must run:

$ docker rm -v <container name>

Where <container name> is “the last container with a reference to the volume.”

Note: There is no cautionary Docker warning provided when removing a container without the -v option. So if a container has volumes mounted, the -v must be passed to fully remove them.

Dangling Volumes

“Dangling volumes” refers to container volumes that are no longer referenced by a container.

Fortunately there is a command to list out all the stray volumes on a system.

$ docker volume ls -f dangling=true

To remove a volume that’s no longer needed use:

$ docker volume rm <volume name>

Where <volume name> is substituted for the dangling volume name shown in the previous ls output.


5 – Backing Up and Restoring Data Volumes

How are data volumes maintained when it comes to things like backups, restoration, and migration? Here is one solution that takes care of these necessities using a dedicated data container.

To back up a volume:

$ docker run --rm --volumes-from data-container -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /data-store

Here’s how the previous command works:

  1. The --volumes-from flag creates a new nameless container that mounts the data volume inside data-container that you wish to back up.
  2. A localhost directory is mounted as /backup. Then tar archives the contents of the /data-store volume to a backup.tar file inside the local /backup directory.
  3. The container is removed (--rm) once it eventually ends and exits.

We are left with a backup of the /data-store volume on the localhost.

From here you could restore the volume in whatever way you wish.

To restore into a new container run:

$ docker run -v /data-store --name data-container-2 ubuntu /bin/bash

Then extract the backup file contents into the new container’s data volume:

$ docker run --rm --volumes-from data-container-2 -v $(pwd):/backup ubuntu bash -c "cd /data-store && tar -xvf /backup/backup.tar"

Now the new container is up and running with the files from the original /data-store volume.


6 – Volume and Data Container Issues

  • Orphan Volumes – Referred to as dangling volumes earlier on. These are the leftover untracked volumes that aren’t removed from the system once a container is removed/deleted.

  • Security – Other than the usual Unix file permissions and the ability to set read-only or read-write privileges, Docker volumes and data containers have no additional security placed on them.

  • Data Integrity – Sharing data using volumes and data containers provides no level of data integrity protection. Data protection features are not yet built into Docker i.e. data snapshot, automatic data replication, automatic backups, etc. So data management has to be handled by the administrator or the container itself.

  • External Storage – The current design does not take into account the ability to use a Docker volume spanning from one host to another. They must be on the same host.


It seems like a large amount of information has been covered here, but really only two ideas have been explored: that of singular data volumes, and that of the preferred independent data container. There are also new updates to Docker on the horizon as always, so some of the issues raised here are hopefully soon to be resolved. The next post on Docker covers building images using Dockerfiles, and likewise with Docker Compose.

Links to subsequent Docker posts can be found on the Trades page.


Docker - Daemon Administration and Networking (3)

Docker Logo Image

Preamble

This time we begin by centering on the Docker daemon and how it interacts with various process managers from different platforms. That is followed by an introduction to networking in Docker, using more of the Docker training images to link together and create a basic network of containers - specifically a PostgreSQL database container and a Python webapp container.

This is post three on Docker following on from Docker - Administration and Container Applications (2). If you’re looking for more generalised administration and basic example uses of the Docker Engine CLI then you may want to read that post instead.


1 – Docker Daemon Administration

The Docker daemon is the background service that handles running containers and all their states.

The starting and stopping of the Docker daemon is often configured through a process manager like systemd or Upstart. In a production environment this is very useful as you have a lot of customisable control over the behaviour of the daemon.

It can instead be run directly from the command line:

$ docker daemon

When active and running, it listens on the Unix socket unix:///var/run/docker.sock.

If you’re running the Docker daemon directly like this, you can append configuration options to the command.

An example of running the docker daemon with configuration options is as follows:

$ docker daemon -D --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376

  • -D --debug=false – Enable or disable debug mode.
  • --tls=false – Enable or disable TLS.
  • --tlscert= – Certificate location.
  • --tlskey= – Key location.
  • -H --host=[] – Daemon socket(s) to connect to.

More options are on offer for the Docker daemon at the link before the last code block.

Upstart

The default Docker daemon Upstart job is found in /etc/init/docker.conf.

To check the status of the daemon:

$ sudo status docker

To start the Docker daemon:

$ sudo start docker

Stop the Docker daemon:

$ sudo stop docker

Or restart the daemon:

$ sudo restart docker

Logs for Upstart jobs are found in /var/log/upstart and are compressed when the daemon is not running. So run the daemon/container to read the active log file, docker.log, via:

$ sudo tail -fn 15 /var/log/upstart/docker.log

systemd

Default unit files are stored in the subdirectories of /usr/lib/systemd and /lib/systemd/system. Custom user-created unit files are kept in /etc/systemd/system.

To check the status of the daemon:

$ sudo systemctl status docker

To start the Docker daemon:

$ sudo systemctl start docker

Stop the Docker daemon:

$ sudo systemctl stop docker

Or restart the daemon:

$ sudo systemctl restart docker

To ensure the Docker daemon starts at boot:

$ sudo systemctl enable docker

Logs for Docker are viewed in systemd with:

$ journalctl -u docker

A more in-depth look at systemd and Docker is kept here in the Docker docs:

Docker Documentation - systemd


2 – Process Manager Container Automation

Restart policies are an in-built Docker mechanism for restarting containers automatically when they exit. They must be set manually with the --restart flag (e.g. --restart=always) and are also triggered when the Docker daemon starts up (like after a system reboot). Restart policies start linked containers in the correct order too.
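
As a minimal sketch - using the training image from these posts and a hypothetical container name - a container with a restart policy would be started like so:

$ docker run -d --restart=always --name restart-demo training/webapp python app.py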

If you have non-Docker processes that depend on Docker containers you can use a process manager like upstart, systemd or supervisor instead of these restart policies to replace this functionality.

This is what we will cover in this step.

Note: Be aware that process managers will conflict with Docker restart policies if they are both in action, so don’t use restart policies if you are using a process manager.

For these examples, assume that the containers for each have already been created and are running Ghost under the name --name=ghost-container.

Upstart

/etc/init/ghost.conf
description "Ghost Blogging Container"
author "Scarlz"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
  /usr/bin/docker start -a ghost-container
end script

Docker automatically attaches the process manager to the running container, or starts it if needed with this setup.

All signals from Docker are also forwarded so that the process manager can detect when a container stops, to correctly restart it.

If you need to pass options to the containers (such as --env) then you’ll need to use docker run rather than docker start in the job configuration.

For Example:

/etc/init/ghost.conf
script
  /usr/bin/docker run --env foo=bar --name ghost-container ghost
end script

This differs as it creates a new container using the ghost image every time the service is started and takes into account the extra options.

systemd

/etc/systemd/system/ghost.service
[Unit]
Description=Ghost Blogging Container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a ghost-container
ExecStop=/usr/bin/docker stop -t 2 ghost-container

[Install]
WantedBy=multi-user.target

As with Upstart, Docker automatically attaches the process manager to the running container (starting it if needed), and all signals from Docker are forwarded so that the process manager can detect when a container stops and correctly restart it. Likewise, if you need to pass options to the container (such as --env), you’ll need to use docker run rather than docker start in the unit configuration.

For Example:

/etc/systemd/system/ghost.service
ExecStart=/usr/bin/docker run --env foo=bar --name ghost-container ghost
ExecStop=/usr/bin/docker stop -t 2 ghost-container ; /usr/bin/docker rm -f ghost-container

This differs as it creates a new container with the extra options every time the service is started, which stops and removes itself when the Docker service ends.


3 – Docker Networks

Network drivers allow containers to be linked together and networked. Docker comes with two default network drivers as part of the normal installation:

  • The bridge driver.
  • The overlay driver.

These two drivers can be replaced with third-party drivers that perform better in certain situations, but for basic Docker use the given defaults are fine.

Docker also automatically includes three default networks with the base install:

$ docker network ls

Listing them as:

Output
NETWORK ID          NAME                DRIVER
2d41f8bbf514        host                host
f9ee6308ecdd        bridge              bridge
49dab653f349        none                null

The network named bridge is classed as a special network: Docker launches any and all containers in this network unless told otherwise.

So if you currently have containers running, these will have been placed into the bridge network group.

Networks can be inspected using the next command, where bridge is the network name to be inspected:

$ docker network inspect bridge

The output shows any and all configured directives for the network:

Output
[
    {
        "Name": "bridge",
        "Id": "f9ee6308ecdd5dc5a588428469de1b7c475fdafdab49cfc33c1c3ac0bf0559ab",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Containers": {
            "ff98b5ed01dd4323f0ce38af9b8cea2d49d0b1e194cf147a3a8f632278a11451": {
                "EndpointID": "b7c9fabcda00ccebd6523f76477b51eba00dd5d3f26940355139fff62d5576bb",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]

This inspect output changes as a network is altered and configured; how to do this is covered in later steps.


4 – Creating Docker Networks

Networks are a natural way to isolate containers from other containers or other networks. The original default networks are not to be solely relied upon, however; it’s better to create your own network groups.

Remember there are two default drivers and therefore two native network types: bridge and overlay. Bridge networks can only make use of a single host running the Docker Engine software. An overlay network differs in that it can incorporate multiple hosts into running the Docker software.

To make the simpler “bridge” type network we use the create option:

$ docker network create -d bridge <new-network-name>

With this last command, the -d (driver) option with bridge specifies the network type we want to create, with a new name for the network at the end of the command.
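
For instance, the test-bridge-network used throughout the rest of this post was created with:

$ docker network create -d bridge test-bridge-network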

To see the new network after creation:

$ docker network ls

Shown on the last line:

Output
NETWORK ID          NAME                  DRIVER
f9ee6308ecdd        bridge                bridge
49dab653f349        none                  null
2d41f8bbf514        host                  host
08f44ef7de28        test-bridge-network   bridge

Overlay networks are a much wider topic due to their inclusion of multiple hosts, so they aren’t covered in this post, but the basic principles and where to start are mentioned in the link below:

Docker Documentation - Working with Network Commands


5 – Connecting Containers to Networks

Creating and using these networks allows container applications to operate in unison and as securely as possible. Containers inside a network can only interact with their counterparts and are isolated from everything outside the network - similar to VLAN segregation inside an IP-based network.

Usually containers are added to a network when you first launch and run the container. We’ll follow the example from the Docker Documentation that uses a PostgreSQL database container and the Python webapp to demonstrate a simple network configuration.

First launch a container running the PostgreSQL database training image, and in the process add it to your custom made bridge network from the previous step.

To do this we must pass the --net= flag to the new container, providing it with the name of our custom bridge network - which in my example earlier was test-bridge-network:

$ docker run -d --net=test-bridge-network --name db training/postgres

You can inspect this aptly named db container to see where exactly it is connected:

$ docker inspect --format='{{json .NetworkSettings.Networks}}' db

This shows us the network details for the database container’s test-bridge-network connection:

Output
{"test-bridge-network":{"EndpointID":"0008c8566542ef24e5e57d5911c8e33a79f0fcb91b1bbdd60d5cdec3217fb517","Gateway":"172.18.0.1","IPAddress":"172.18.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:12:00:02"}}

Next, run the Python training web application in daemonised mode without any extra options:

$ docker run -d --name python-webapp training/webapp python app.py

Inspect the python-webapp container’s network connection in the same way as before:

$ docker inspect --format='{{json .NetworkSettings.Networks}}' python-webapp

As expected this new container is running under the default bridge network, shown in the output of the last command:

Output
{"bridge":{"EndpointID":"e5c7f1c8d097fdafc35b89d7bce576fe01a22709424643505d79abe394a59767","Gateway":"172.17.0.1","IPAddress":"172.17.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02"}}

Docker lets us connect a container to as many networks as we like. More importantly for us we can also connect an already running container to a network.

Attach the running python-webapp container to the “test-bridge-network” like we need:

$ docker network connect test-bridge-network python-webapp

To test the container connections to our custom network we can ping from one to the other.

Get the IP address of the db container:

$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db

In my case this was:

Output
172.18.0.2

Now that we have the IP address, open an interactive shell into the python-webapp container:

$ docker exec -it python-webapp bash

Attempt to ping the db container with the IP address from before, substituting 172.18.0.2 for your address equivalent:

ping -c 10 172.18.0.2

As long as you successfully connected both containers earlier on, the ping command will be successful:

Output
root@fc0f73c129c0:/opt/webapp# ping -c 10 db
PING db (172.18.0.2) 56(84) bytes of data.
64 bytes from db (172.18.0.2): icmp_seq=1 ttl=64 time=0.216 ms
64 bytes from db (172.18.0.2): icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from db (172.18.0.2): icmp_seq=3 ttl=64 time=0.053 ms
64 bytes from db (172.18.0.2): icmp_seq=4 ttl=64 time=0.063 ms
64 bytes from db (172.18.0.2): icmp_seq=5 ttl=64 time=0.065 ms
64 bytes from db (172.18.0.2): icmp_seq=6 ttl=64 time=0.063 ms
64 bytes from db (172.18.0.2): icmp_seq=7 ttl=64 time=0.062 ms
64 bytes from db (172.18.0.2): icmp_seq=8 ttl=64 time=0.064 ms
64 bytes from db (172.18.0.2): icmp_seq=9 ttl=64 time=0.061 ms
64 bytes from db (172.18.0.2): icmp_seq=10 ttl=64 time=0.063 ms

--- db ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8997ms
rtt min/avg/max/mdev = 0.053/0.076/0.216/0.047 ms

Conveniently container names work in the place of an IP address too in this scenario:

ping -c 10 db

Press CTRL + D to exit the container prompt, or type in exit instead.

And with that, we have two containers on the same user-created network, able to communicate with each other and share data - which is what we would be aiming for in the case of the PostgreSQL database and Python webapp.

There’s more ways of sharing data between containers once they are connected through a network, but these are covered in the next post of the series.


6 – Miscellaneous Networking Commands

Here are a few complementary commands in relation to what has already been covered in this post.

At some point you are likely to need to remove a container from its network. This is done by using the disconnect command:

$ docker network disconnect test-bridge-network <container-name>

Here test-bridge-network is the name of the network, followed by which container you want to remove from it.

When all the containers in a network are stopped or disconnected, you can remove networks themselves completely with:

$ docker network rm test-bridge-network

Meaning the test-bridge-network is now deleted and absent from the list of existing networks:

Output
NETWORK ID          NAME                DRIVER
2e38b3a44489        bridge              bridge
79d9d21edbec        none                null
61371e641e1b        host                host

The output here is garnered from the docker network ls command.


Networking in Docker begins here with these examples but goes a lot further than what we’ve covered. Data volumes, data containers, and mounting host volumes are described in the next post on Docker when it’s released.

Links to subsequent Docker posts can be found on the Trades page.


Docker - Administration and Container Applications (2)

Docker Logo Image

Preamble

In this post we run a Python program in a Docker container sourced from the user guide, look at the various commands that come into play when administering containers, and then briefly set up some real-world applications with Docker.

This will be the second post on Docker following on from Docker - Installing and Running (1). If you’re brand new to Docker then the first post linked helps to introduce some of its concepts and theory to better understand the utilities it can provide.


1 – Example Container Application

Pull this training image from the Docker user guide:

$ docker run -d -P training/webapp python app.py

The -d option tells Docker to daemonise and run the container in the background. -P maps any required network ports inside the container to your host, and the Python application inside is executed at the end of the command.

Run the Docker process command to see running container details:

$ docker ps

The “webapp” image container shows network ports that have been mapped as part of the image configuration:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                      NAMES
b8a16d8e94cc        training/webapp     "python app.py"     2 minutes ago       Up 2 minutes        0.0.0.0:32768->5000/tcp    nostalgic_knuth

In my example here, port 5000 (the default Python Flask port) inside the container has been exposed on the host ephemeral TCP port 32768. Ephemeral ports are temporary, short-lived port numbers which typically range anywhere from 32768 to 61000; these are used dynamically and are never set in stone.

The Docker image decides all this for us, but as an aside it’s also possible to manually set the ports used by a container.

This command assigns port 80 on the local host to port 5000 inside the container:

$ docker run -d -p 80:5000 training/webapp python app.py

It’s important not to map ports in a 1:1 fashion, i.e. 5000->5000/tcp, because if we needed multiple containers running the same image, their traffic would contend for the same host port (5000) and only one instance would be accessible at a time.
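
To illustrate, running the image twice with -P lets Docker pick a free ephemeral host port for each instance (for example 32768 and 32769), so both containers are reachable at once:

$ docker run -d -P training/webapp python app.py
$ docker run -d -P training/webapp python app.py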

If you like you can check the original Python docker container’s port is working by accessing:

http://localhost:32768 or http://your.hosts.ip.address:32768 in a browser.

Where the port number 32768 is set to your own example container’s ephemeral port.

Another way to see this example container’s port configuration is:

$ docker port <container-name>

Showing:

Output
32768->5000/tcp

To see the front-facing host machine’s mapped ports individually, add the number of the internal port to the end of the command:

$ docker port <container-name> 5000

Which shows:

Output
0.0.0.0:32768

Now that we have this example container up and running, we’ll go through multiple administrative commands that are important for when working with containers. These commands can be tested with the example container if you wish, or even better with multiple instances of it. Every command shown may not be completely applicable, however.


2 – Administrative Commands

Here’s a list of select Docker commands to refer to when playing around with or monitoring containers. There are even more to check out as this list is by no means exhaustive.

A few core commands were already mentioned in Docker - Installing and Running (1) so won’t appear here.

The first command allows you to attach to a running container interactively using the container’s ID or name:

$ docker attach <container-name>

You can detach again from the container and leave it running with the key sequence CTRL + P followed by CTRL + Q.

To list the changed files and directories in a container’s filesystem use diff:

$ docker diff <container-name>

Where in the output the three “event types” are tagged as either:

  • A - Add
  • D - Delete
  • C - Change

For real-time container and image activity begin a feed of event output with:

$ docker events

The exec command runs a command of your choosing inside a container without dropping you down into a shell inside the container.

This example runs the touch command in the background inside an existing running container named ubuntu_bash, creating the file /tmp/execWorks:

$ docker exec -d ubuntu_bash touch /tmp/execWorks
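
Swapping -d for -it instead drops you into an interactive shell inside the container - assuming the container’s image provides bash:

$ docker exec -it ubuntu_bash bash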

Backing up a container’s internal file-system as a tar archive is carried out using the export command:

$ docker export <container-name> > backup-archive.tar
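
The counterpart import command can later turn that archive back into a flat image, although the original image’s history and metadata are not preserved; the image name here is just an example:

$ docker import backup-archive.tar backup-image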

Show the internal history of an image with human readable -H values:

$ docker history -H <image-name>

To display system wide Docker info and statistics use:

$ docker -D info

Return low-level information on a container or image using inspect:

$ docker inspect

You can filter with the inspect command by adding the parameters described on the previously linked page.

Use SIGKILL to kill a running container; caution as usual is advised with this:

$ docker kill <container-name>

Pause and unpause all running processes in a Docker container:

$ docker pause
$ docker unpause

If the auto-generated names are not to your taste rename containers like this:

$ docker rename <container-name> <new-name>

Alternatively when first creating/running a container --name sets the name from the onset:

$ docker run --name <container-name> -d <image-name>

Follow a real-time live feed of one or more containers’ resource usage stats:

$ docker stats <container-name>

Docker has its own top command for containers, to see the running processes inside:

$ docker top <container-name>

That’s all for these. Some real-world examples of running images from the official Docker Hub repositories are now covered briefly, to serve as realistic examples of how you might want to use Docker and its containerisation.

Be mindful that these are not walk-throughs on fully setting up each service, but general starting points for each.


3 – Ghost Image Container

“Ghost is a free and open source blogging platform written in JavaScript.”

To pull the image itself:

$ docker pull ghost

To run a basic Ghost instance with the container port 2368 mapped to host port 8080, use:

$ docker run --name <container-name> -p 8080:2368 -d ghost

Then access the blog via http://localhost:8080 or http://your.hosts.ip.address:8080 in a browser.

Ghost Default Blog Image

The image can also be pointed to existing Ghost content on your local host:

$ docker run --name <container-name> -v /path/to/ghost/blog:/var/lib/ghost ghost

Docker Hub - Ghost


4 – irssi Image Container

“irssi is a terminal based IRC client for UNIX systems.”

I’m not sure about the benefits of running your irssi client through Docker, but to serve as another example we’ll go through the setup process provided on Docker Hub:

Create an interactive shell session in a new container named whatever you choose, whilst passing through an environment variable named TERM retrieved from the host. The user ID is set with -u and the group ID with the -g option:

$ docker run -it --name <container-name> -e TERM -u $(id -u):$(id -g) \

Then stop the log driver to avoid storing “useless interactive terminal data”:

> --log-driver=none \

Mount and bind the host’s ~/.irssi config directory to the internal container equivalent:

> -v $HOME/.irssi:/home/user/.irssi:ro \

Mount and bind the host’s /etc/localtime file to the internal container equivalent:

> -v /etc/localtime:/etc/localtime:ro \

Pull down and apply all the previous commands to the irssi image from Docker Hub:

> irssi

As everyone who uses irssi has their own configuration for the program, this image does not come with any pre-sets provided, so you have to set this up yourself. Other than that, you are dropped into the irssi session within the new container.

irssi Containerised Image

Docker Hub - irssi


5 – MongoDB Image Container

“MongoDB document databases provide high availability and easy scalability.”

The standard command to pull the image and container is one we’re familiar with by now:

$ docker run --name <mongo-container-name> -d mongo

This image is configured to expose port 27017 (Mongo’s default port), so linking other containers to it will make it automatically available.

In brief this is how to link a new container to a Mongo container named mongo-container-name. The image at the end is the application/service the new container will run:

$ docker run --name <new-container-name> --link <mongo-container-name>:mongo -d <image-name>

Using inspect with grep shows the link:

$ docker inspect nginx-container | grep -i -A1 "links"

With the output in my case being:

Output
"Links": [
"/mongo-container:/nginx-container/mongo"

Docker Hub - MongoDB


6 – NGINX Image Container

“Nginx (pronounced “engine-x”) is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server).”

As usual like with all these images to download/pull:

$ docker pull nginx

A basic example is given of some static HTML content served from a directory (~/static-content-dir) that has been mounted onto the NGINX hosting directory within the new container:

$ docker run --name <container-name> -v ~/static-content-dir:/usr/share/nginx/html:ro -P -d nginx

Whichever port is auto-assigned to the NGINX container can be used to access the static HTML content.

Find out the port number using either docker ps or:

$ docker port <container-name>

For our purpose here we want the second line’s port, which in my case is 32773 - as shown:

Output
443/tcp -> 0.0.0.0:32772
80/tcp -> 0.0.0.0:32773

http://localhost:32773 or http://your.hosts.ip.address:32773 in a browser on the localhost now returns:

32773 Port Image

The same idea but with a Dockerfile is better, one that is located in the directory containing our static HTML content:

$ vim ~/static-content-dir/Dockerfile

Type in:

~/static-content-dir/Dockerfile
FROM nginx
COPY . /usr/share/nginx/html

Then build a new image with the Dockerfile and give it a suitable name; nginx-custom-image is what I’m using for this example:

$ docker build -t nginx-custom-image ~/static-content-dir/

If this is successful, output in this form is given:

Sending build context to Docker daemon 6.372 MB
Step 1 : FROM nginx
---> 5328fdfe9b8e
Step 2 : COPY . /usr/share/nginx/html
---> a4bf297e4dcc
Removing intermediate container 7a213493723d
Successfully built a4bf297e4dcc

All that’s left is to run the custom built image, this time with a more typical, user provided port number:

$ docker run -it --name <container-name> -p 8080:80 -d nginx-custom-image

Again accessing http://localhost:8080 or http://your.hosts.ip.address:8080 in a browser on the localhost shows the static HTML web pages:

8080 Port Image

Docker Hub - NGINX


7 – Apache httpd (2.4) Image Container

To serve static HTML content in a directory named static-content-dir on port 32755 of the local host machine we can use:

$ docker run -it --name <container-name> -v ~/static-content-dir:/usr/local/apache2/htdocs/ -p 32755:80 -d httpd:2.4

Visiting http://localhost:32755 or http://your.hosts.ip.address:32755 in a browser on the localhost then returns:

Port 32755 Image

With a Dockerfile for configuration, custom setups can be applied. Create the Dockerfile in the project directory where the static content is hosted from:

$ vim ~/static-content-dir/Dockerfile

Add lines like the below, where the second line copies an httpd config file from the current working directory over the internal container’s version, and the third line copies the entirety of the current working directory (the static HTML files) to the Apache container’s web hosting directory:

~/static-content-dir/Dockerfile
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
COPY . /usr/local/apache2/htdocs/

Note: If the my-httpd.conf configuration file is missing, the next command to build the image will fail.

Build the new custom Apache image defined in the Dockerfile and give it the name custom-apache-image which you can of course change if you like:

$ docker build -t custom-apache-image ~/static-content-dir/

Successful output for the image build sequence looks like this (or similar):

Output
Sending build context to Docker daemon 6.372 MB
Step 1 : FROM httpd:2.4
---> 1a49ac676c05
Step 2 : COPY . /usr/local/apache2/htdocs/
---> f7052ffe8190
Removing intermediate container 53311d3ac0a5
Successfully built f7052ffe8190

Lastly, start and run a new container using the custom generated image on port 32756 of the localhost machine:

$ docker run -it --name <container-name> -p 32756:80 -d custom-apache-image

Visiting http://localhost:32756 or http://your.hosts.ip.address:32756 in a browser on the localhost now returns:

Port 32756 Image

Docker Hub - httpd


8 – Jenkins Image Container

Create a new directory in your user’s home directory for the Jenkins config files. This will be mounted and mapped to the container’s equivalent configuration space:

$ mkdir ~/jenkins_home

Run the Jenkins image, mapping the two internal ports to ephemeral ports on the host side, whilst syncing the config directory we just created to the new container:

$ docker run --name <container-name> -p 32790:8080 -p 32791:50000 -v ~/jenkins_home:/var/jenkins_home -d jenkins

Jenkins can be seen at the first port number we mapped. In my example it was 32790 meaning a URL of http://localhost:32790 or http://your.hosts.ip.address:32790 in a browser takes us to the Jenkins application page:

Jenkins on Port 32790

Docker Hub - Jenkins


Remember that there are unofficial image repositories to be found on Docker Hub too, and potentially elsewhere when made available.

The third post on Docker talks a bit more about administration with Docker. As well as details based around how to network containers together.

Links to subsequent Docker posts can be found on the Trades page.
