Docker for Sysadmins: Linux, Windows, VMware
Getting started with Docker from the perspective
of sysadmins and VM admins
Nigel Poulton
This book is for sale at https://leanpub.com/dockerforsysadmins
This version was published on 2016-09-23
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean
Publishing process. Lean Publishing is the act of publishing an in-progress ebook
using lightweight tools and many iterations to get reader feedback, pivot until you
have the right book and build traction once you do.
© 2016 Nigel Poulton
Huge thanks to my wife and kids for putting up with a geek in the house who
genuinely thinks he's a bunch of software running inside of a container on top of
midrange biological hardware. It can't be easy living with me!
Massive thanks as well to everyone who watches my Pluralsight videos. I love
connecting with you and really appreciate all the feedback I've gotten over the years.
This was one of the major reasons I decided to write this book! I hope it'll be an
amazing tool to help you drive your careers even further forward.
Contents

0: About the book
  Why should I read this book or care about Docker?
  Isn't Docker just for developers?
  Why this Docker book and not another one?
  Should I buy the book if I've already watched your video courses?
  How the book is organized
  Other stuff about the book
2: Docker
  Docker - The TLDR
  Docker, Inc.
  The Docker runtime and orchestration engine
  The Docker open-source project
3: Installing Docker
5: Images
6: Containers
  Docker containers - The TLDR
  Docker containers - The deep dive
  Containers - The commands
  Chapter summary
7: Swarm mode
  Swarm mode - The TLDR
  Swarm mode - The deep dive
  Swarm mode - The commands
  Chapter summary
8: What next
  Feedback
the book, but the kind of stuff that's important if you want a well-rounded knowledge
of Docker and containers. It's only a short section and you probably should read it.
The technical stuff is what the book is all about! This is where you'll find everything
you need to start working with Docker. It gets into the detail of images, containers,
and the increasingly important topic of orchestration. You'll get the theory so that
you know how it all fits together, and you'll get commands and examples to show
you how it all works in practice.
Every chapter in the technical stuff section is divided into three parts:
The TLDR
The deep dive
The commands
The TLDR will give you two or three paragraphs that you could use to explain the
topic at the coffee machine.

TLDR, or TL;DR, is a modern acronym meaning "too long; didn't read".
It's normally used to indicate something that was too long to bother
reading. I'm using it here in the book to indicate a short section that you
can read if you're in a hurry and haven't got time to read the longer deep
dive that immediately follows it.
The deep dive is where we'll explain how everything works and go through the
examples.
The commands lists out all of the commands you've learned in an easy-to-read list
with brief reminders of what each one does.
I think you'll love that format.
Text wrapping
I've tried really hard to get the commands and outputs to fit on a single line without
wrapping! So instead of getting this
$ docker service ps uber-service
ID                         NAME               IMAGE                    NOD\
E                DESIRED STATE  CURRENT STATE               ERROR
7zi85ypj7t6kjdkevreswknys  uber-service.1     nigelpoulton/tu-demo:v2  ip-\
172-31-12-203    Running        Running about an hour ago
0v5a97xatho0dd4x5fwth87e5  \_ uber-service.1  nigelpoulton/tu-demo:v1  ip-\
172-31-12-207    Shutdown       Shutdown about an hour ago
31xx0df6je8aqmkjqn8w1q9cf  uber-service.2     nigelpoulton/tu-demo:v2  ip-\
172-31-12-203    Running        Running about an hour ago

you'll hopefully get this

$ docker service ps uber-service
NODE  DESIRED  CURRENT
mgr2  Running  Running 5 mins
wrk1  Running  Running 5 mins
wrk2  Running  Running 5 mins
For best results you might want to flip your reading device onto its side.
In doing this I've had to trim some of the output from some commands, but I don't
think you're missing anything important. However, despite all of this, if you're
reading on a small enough device, you're still going to get some wrapping :-(
the time to learn how it all fits together. If the book was 1,000 printed pages it would
not help you get up to speed quickly!
However, I will add sections to the book if I think they're important and fundamental
enough. Please use the book's feedback pages and hit me up on Twitter with ideas
of what you think should be included in the next version of the book.
Hello VMware!
Amid all of this, VMware, Inc. gave the world the virtual machine (VM). And almost
overnight the world changed into a much better place! Finally we had a technology
that would let us run multiple business applications on a single server safely and
securely.
This was a game changer! IT no longer needed to procure a brand new oversized
server every time the business asked for a new application. More often than not they
could run new apps on existing servers that were sitting around with spare capacity.
All of a sudden we could squeeze massive amounts of value out of existing corporate
assets, such as servers, resulting in a lot more bang for the company's buck.
VMwarts
But (and there's always a but!) as great as VMs are, they're not perfect!
The fact that every VM requires its own dedicated OS is a major flaw. Every OS
consumes CPU, RAM and storage that could otherwise be used to power more
applications. Every OS needs patching and monitoring. And in some cases every
OS requires a license. All of this is a waste of op-ex and cap-ex.
The VM model has other challenges too. VMs are slow to boot, and portability
isn't great - migrating and moving VM workloads between hypervisors and cloud
platforms is harder than it could be.
Hello Containers!
For a long time, the big web-scale players like Google have been using container
technologies to address these shortcomings of the VM model.
In the container model, the container is roughly analogous to the VM. The major
difference, though, is that containers do not each require their own full-blown OS. In fact,
all containers on a single system share a single OS. This frees up huge amounts of
system resources such as CPU, RAM, and storage. It also reduces potential licensing
costs and reduces the overhead of OS patching and other maintenance. This results
in savings on the cap-ex and op-ex fronts.
Containers are also fast to start and ultra portable. Moving container workloads from
your laptop, to the cloud, and then to VMs or bare metal in your data center is a
breeze.
Linux containers
Modern containers started in the Linux world* and are the product of an immense
amount of work from a wide variety of people over a long period of time. Just as
one example, Google Inc. has contributed many container-related technologies to
the Linux kernel. Without these, and other contributions, we wouldn't have modern
containers today.
Some of the major technologies that enabled the massive growth of containers in
recent years include kernel namespaces, control groups, and of course Docker.
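To make those kernel primitives a little more concrete, here is a minimal sketch (not from the original text; the unshare command assumes a Linux host with util-linux installed, and the resource flags are standard docker run options):

# Start a shell in its own PID namespace - inside it, ps only
# sees the processes of that namespace, just like a container
$ sudo unshare --pid --fork --mount-proc /bin/bash

# Ask Docker to apply control-group (cgroup) limits to a container
$ docker run --memory=256m --cpu-shares=512 alpine sleep 10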
To re-emphasize what was said earlier - the modern container ecosystem is deeply
indebted to the many individuals and organizations that laid the strong foundations
that we now build on!
Despite all of this, containers remained outside the reach of most organizations. It
wasn't until Docker came along that containers were effectively democratized and
made accessible to the masses.
* There are many operating system virtualization technologies similar to
containers that pre-date Docker and modern containers. Some even date
back to System/360 on the mainframe. BSD Jails and Solaris Zones are
two other well-known examples of Unix-type container technologies.
However, in this section we are restricting our conversation to the modern
containers that have been popularized by Docker.
Hello Docker!
We'll talk about Docker in a bit more detail in the next chapter. But for now it's
enough to say that Docker was the magic that made Linux containers usable for
mere mortals. Put another way, Docker, Inc. gave the world a set of technologies
and tools that made creating and working with containers simple!
Windows containers
Although containers came to the masses via Linux, Microsoft Corp. has worked
extremely hard to bring Docker and container technologies to the Windows platform.
At the time of writing, Windows containers are available on the Windows Server
2016 platform. In achieving this, Microsoft has worked closely with Docker, Inc.
The core Windows technologies required to implement containers are collectively
referred to as Windows Containers. The user-space tooling to work with Windows
Containers is Docker. This makes the Docker experience on Windows almost exactly
the same as Docker on Linux. This way, developers and sysadmins familiar with
the Docker toolset from the Linux platform will feel right at home using Windows
containers.
Chapter Summary
We used to live in a world where every time the business wanted a new application
we had to buy a brand new server for it. Then VMware came along and enabled
IT departments to drive more value out of new and existing company IT assets.
But as good as VMware and the VM model are, they're not perfect. Following the
success of VMware and hypervisors came a newer, more efficient and lightweight
virtualization technology called containers. But containers were initially hard to
implement and were only found in the data centers of web giants that had Linux
kernel engineers on staff. Then along came Docker, Inc. and suddenly container
virtualization technologies were available to the masses.
Speaking of Docker, let's go find out who, what, and why Docker is!
2: Docker
No book or conversation about containers is complete without talking about Docker.
But when somebody says "Docker" they can be referring to any of at least three
things:
1. Docker, Inc. the company
2. Docker the container runtime and orchestration technology
3. Docker the open-source project
If you're going to make it in the container world, you'll need to know a bit about all
three.
Docker, Inc.
Docker, Inc. is a San Francisco-based technology startup founded by French-born
American developer and entrepreneur Solomon Hykes.
Interestingly, Docker, Inc. started its life as a platform-as-a-service (PaaS) provider
called dotCloud. Behind the scenes, the dotCloud platform leveraged Linux containers.
To help them create and manage these containers, they built an internal tool that
they nicknamed Docker. And that's how Docker was born!
In 2013 the dotCloud PaaS business was struggling and the company was in need of
a new lease of life. To help with this, they hired Ben Golub as the new CEO, rebranded
the company as Docker, Inc., got rid of the dotCloud PaaS platform, and started a
new journey with a mission to bring Docker and containers to the world.
Today Docker, Inc. is widely recognized as an innovative technology company with a
market valuation said to be in the region of $1BN. At the time of writing, it has raised
over $180M via 6 rounds of funding from some of the biggest names in Silicon Valley
venture capital. Almost all of this funding was raised after the company pivoted to
become Docker, Inc.
Since becoming Docker, Inc. they've made several small acquisitions, for undisclosed
fees, to help grow their portfolio of products and services.
At the time of writing, Docker, Inc. has somewhere in the region of 200-300 employees
and holds an annual conference called Dockercon. The goal of Dockercon is to bring
together the growing container ecosystem and drive the adoption of Docker and
container technologies.
Throughout this book we'll use the term "Docker, Inc." when referring to Docker the
company. All other uses of the term "Docker" will refer to the technology or the
open-source project.
Note: The word "Docker" comes from a British colloquialism meaning dock worker: somebody who loads and unloads ship cargo.
Figure 2.2
The Docker Engine can be downloaded from the Docker website or built from source
from GitHub. It's available on Linux and Windows, with open-source and commercially
supported offerings. At the time of writing there's a new major release of the
Docker Engine approximately every three months (https://github.com/docker/docker/wiki).
This is a way of saying you can swap out a lot of the native Docker stuff and
replace it with stuff from 3rd-party ecosystem partners. A good example of this
is the networking stack. The core Docker product ships with built-in networking.
But the networking stack is pluggable, meaning you can rip out the native Docker
networking stack and replace it with something else from a 3rd party.
In the early days it was common for 3rd-party plugins to be better than the native
offerings that shipped with Docker. However, this presented some business model
challenges for Docker, Inc. After all, Docker, Inc. has to turn a profit at some point to
be a viable long-term business. As a result, the batteries that are included are getting
better and better. This is something that is causing ripples across the wider ecosystem,
which it seems may have expected Docker, Inc. to produce mediocre products and
leave the door wide open for others to swoop in and plunder the spoils.
If that was once true, it's not any more. To cut a long story short, the native Docker
batteries are still removable, there's just less and less reason to want to remove them.
Despite this, the container ecosystem is flourishing with a healthy balance of
co-operation and competition. You'll often hear people use terms like "co-opetition"
(a balance of co-operation and competition) and "frenemy" (a mix of a friend and
an enemy) when talking about the container ecosystem. This is great! Healthy
competition is the mother of innovation!
From day one, use of Docker has grown like crazy. More and more people used it in
more and more ways for more and more things. So it was inevitable that somebody
was going to get frustrated. This is normal and healthy.
The TLDR of this history, according to Nigel, is that a company called CoreOS didn't
like the way Docker did certain things. So they did something about it! They created a
new open standard called appc that defined things like the image format and container
runtime. They also created an implementation of the spec called rkt (pronounced
"rocket").
This put the container ecosystem in an awkward position with two competing
standards. For want of better terms, the Docker stuff was the de facto standard and
runtime, whereas the stuff from CoreOS was more like a de jure standard.
Getting back to the story though, this all threatened to fracture the ecosystem and
present users and customers with a dilemma. While competition is usually a good
thing, competing standards are not. They cause confusion and slow down adoption.
Not good for anybody.
With this in mind, everybody did their best to act like adults and came together to
form the OCI, a lightweight, agile council to govern container standards.
At the time of writing, the OCI has published two specifications (standards):
An image spec
A runtime spec
An analogy that's often used when referring to these two standards is rail tracks.
These two standards are like agreeing on standard sizes and properties of rail
tracks, leaving everyone else free to build better trains, better carriages, better
signaling systems, better stations... all safe in the knowledge that they'll work on
the standardized tracks. Nobody wants two competing standards for rail track sizes!
It's fair to say that the two OCI specifications have had a major impact on the
architecture and design of the core Docker Engine. As of Docker 1.11, the Docker
Engine architecture conforms to the OCI runtime spec.
https://coreos.com
https://github.com/appc/spec/
https://github.com/opencontainers/image-spec
https://github.com/opencontainers/runtime-spec
So far, the OCI has achieved good things and gone some way to bringing the
ecosystem together. However, standards always slow innovation! Especially with
new technologies that are developing at close to warp speed. This has resulted in
some raging arguments (sorry, passionate discussions) in the container community. In the
opinion of your author, this is a good thing! The container industry is changing
the world and it's normal for the people at the vanguard to be passionate and
opinionated. Expect more passionate discussions about standards and innovation!
The OCI is organized under the auspices of the Linux Foundation and both Docker,
Inc. and CoreOS, Inc. are major contributors.
3: Installing Docker
There are loads of ways and places to install Docker. There's Windows, there's Mac,
and there's obviously Linux. But there's also in the cloud, on premises, and on your laptop.
Not to mention manual installs, scripted installs, and wizard-based installs. There literally
are loads of ways and places to install Docker!
But don't let that scare you! They're all pretty easy.
In this chapter we'll cover some of the most important installs:
Desktop installs
  Docker for Windows
  Docker for Mac
Server installs
  Linux
We'll add a Windows Server 2016 installation method after Windows Server 2016
has gone GA. At the time of writing, the installation method for Windows Server
2016 TP5 is in a state of flux and not stable enough to be included here.
But a word of caution! Docker for Windows is only intended for test and dev work.
You don't want to run your production estate on it! Remember, it's only going to
install a single engine. That's another way of saying it's only going to install one
copy of Docker. You might also find that some of the latest Docker features aren't
always available straight away in Docker for Windows. This is because Docker, Inc.
is taking a stability-first, features-second approach with the product. All of this adds
up to a quick and easy setup, but one that is not for production workloads.
Enough waffle. Let's see how to install Docker for Windows.
First up, pre-requisites. Docker for Windows requires a 64-bit edition of Windows 10 with the Hyper-V and Containers features enabled.
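You can enable both features from an elevated PowerShell prompt. As a sketch (this exact command is not from the original text, but the cmdlet and feature names are standard Windows PowerShell):

PS C:\> Enable-WindowsOptionalFeature -Online `
          -FeatureName Microsoft-Hyper-V, Containers -All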
This will install and enable the Hyper-V and Containers features. Your system may
require a restart.
Figure 3.1
The Containers feature is only available if you are running the summer 2016
Windows 10 Anniversary Update (build 14393).
Once you've installed the Hyper-V and Containers features and restarted your
machine, it's time to install Docker for Windows.
1. Head over to www.docker.com and click Get Docker from the top of the
homepage.
2. Click the Learn More button under the WINDOWS section.
3. Click Download Docker for Windows to download the InstallDocker.msi
package to your default downloads directory.
4. Locate and launch the InstallDocker.msi package that you just downloaded.
Step through the installation wizard and provide local administrator credentials to
complete the installation. Docker will automatically start as a system service and a
Moby Dock whale icon will appear in the Windows notifications tray.
Congratulations! You have installed Docker for Windows.
Now that Docker for Windows is installed you can open a command prompt or
PowerShell window and run some Docker commands. Try the following commands:
C:\Users\nigelpoulton> docker version
Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 17:32:24 2016
 OS/Arch:      windows/amd64
 Experimental: true

Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 17:32:24 2016
 OS/Arch:      linux/amd64
 Experimental: true
Notice that the OS/Arch: for the Server component is showing as linux/amd64 in
the output above. This is because the default installation currently installs the Docker
daemon inside of a lightweight Linux Hyper-V VM. In this default scenario you will
only be able to run Linux containers on your Docker for Windows install.
If you want to run native Windows containers you can right click the Docker whale
icon in the Windows notifications tray and select the option to Switch to Windows
containers.... You may get the following alert if you have not enabled the Windows
Containers feature.
Figure 3.2
If you already have the Windows Containers feature enabled it will only take a few
seconds to make the switch. Once the switch has been made the output to the docker
version command will look like this.
C:\Users\nigelpoulton> docker version
Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 17:32:24 2016
 OS/Arch:      windows/amd64
 Experimental: true

Server:
 Version:      1.13.0-dev
 API version:  1.25
 Go version:   go1.7.1
 Git commit:   c2decbe
 Built:        Tue Sep 13 15:12:54 2016
 OS/Arch:      windows/amd64
Notice that the Server version is now also showing as windows/amd64. This means
the daemon is now running natively on the Windows kernel and will therefore only
run Windows containers.
Also note that the system above is running the experimental version of Docker
(Experimental: true). Docker for Windows has stable and experimental channels.
You can switch between the two, but you should check the Docker website for
restrictions and implications before doing so.
As shown below, other regular Docker commands work as normal.
C:\Users\nigelpoulton> docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.13.0-dev
Storage Driver: windowsfilter
 Windows:
Logging Driver: json-file
Plugins:
 Volume: local
 Network: nat null overlay
<Snip>
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
Docker for Windows includes the Docker Engine (client and daemon), Docker
Compose, and Docker Machine. Use the following commands to verify that each
was successfully installed and which versions of each you have:
C:\Users\nigelpoulton> docker --version
Docker version 1.12.1, build 23cf638, experimental
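Docker Compose and Docker Machine can be checked the same way. A sketch (the version numbers shown here are the ones from the Mac section later in this chapter and are illustrative; yours may differ):

C:\Users\nigelpoulton> docker-compose --version
docker-compose version 1.8.0, build d988a55

C:\Users\nigelpoulton> docker-machine --version
docker-machine version 0.8.1, build 41b3b25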
Docker for Mac
Figure 3.3 shows a high level representation of the Docker for Mac architecture.
Figure 3.3
Note: For the curious reader, Docker for Mac leverages HyperKit to
implement a super lightweight hypervisor. HyperKit is in turn based
on the xhyve hypervisor. Docker for Mac also leverages features from
DataKit and runs a highly tuned Linux distro called Moby that is based
on Alpine Linux.
Let's get Docker for Mac installed.
1. Point your browser to www.docker.com
2. Click the Get Docker link near the top of the Docker homepage.
3. Click the Learn More button under the MAC section and then click Download
Docker for Mac. This will download the Docker.dmg installation package to
your default downloads directory.
4. Launch the Docker.dmg file that you downloaded in the previous step. You will
be asked to drag and drop the Moby Dock whale image into the Applications
folder.
https://github.com/docker/hyperkit
https://github.com/mist64/xhyve
https://github.com/docker/datakit
https://alpinelinux.org/ and https://github.com/alpinelinux
5. Open your Applications folder (it may open automatically) and double-click
the Docker application icon to start it. You may be asked to confirm the action
because the application was downloaded from the internet.
6. Enter your password so that the installer can create components, such as
networking, that require elevated privileges.
7. The Docker daemon will now start.
An animated whale icon will appear in the status bar at the top of your screen,
and the animation will stop when the daemon has successfully started. Once
the daemon has started you can click the whale icon and perform basic actions
such as restarting the daemon, checking for updates, and opening the UI.
Now that Docker for Mac is installed you can open a terminal window and run some
regular Docker commands. Try the commands listed below.
$ docker version
Client:
 Version:      1.12.0-rc3
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   91e29e8
 Built:        Sat Jul 2 00:09:24 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.0-rc3
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   876f3a7
 Built:        Tue Jul 5 02:20:13 2016
 OS/Arch:      linux/amd64
 Experimental: true
Notice in the output above that the OS/Arch: for the Server component is showing
as linux/amd64. This is because the server portion of the Docker Engine (a.k.a. the
daemon) is running inside of the Linux VM we mentioned earlier. The Client
component is a native Mac application and runs directly on the Mac OS Darwin
kernel (OS/Arch: darwin/amd64).
Also note that the system is running the experimental version (Experimental: true)
of Docker. Docker for Mac has stable and experimental channels. You can switch
between channels, but you should check the Docker website for restrictions and
implications before doing so.
Run some more Docker commands.
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.0-rc3
<Snip>
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Docker for Mac installs the Docker Engine (client and daemon), Docker Compose,
and Docker Machine. The following three commands show you how to verify that
all of these components installed successfully and find out which versions you have.
$ docker --version
Docker version 1.12.0-rc3, build 876f3a7, experimental
$ docker-compose --version
docker-compose version 1.8.0, build d988a55
$ docker-machine --version
docker-machine version 0.8.1, build 41b3b25
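The first steps boil down to downloading and running the official install script from https://get.docker.com (which we come back to at the end of this section). A sketch:

$ wget -qO- https://get.docker.com/ | sh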
3. It's good practice to only use non-root users when working with the
Docker Engine. To do this you need to add your non-root users to the local
docker Unix group on your Linux machine. The commands below show how
to add the npoulton user to the docker group and verify that the operation
succeeded.
$ sudo usermod -aG docker npoulton
$
$ cat /etc/group | grep docker
docker:x:999:npoulton
If you are already logged in as the user that you just added to the docker
group, you will need to log out and log back in for the group membership to
take effect.
Congratulations! Docker is now installed on your Linux machine. Run the following
commands to verify your installation.
$ docker --version
Docker version 1.12.1, build 23cf638
$
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
<Snip>
Kernel Version: 4.4.0-36-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 990.7 MiB
Name: ip-172-31-41-77
ID: QHFV:6HK7:VNLZ:RIKE:JWL6:BTIX:GC3V:RAVR:6AO5:RAMT:EJCI:PUA7
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
If the process described above doesn't work for your Linux distro, you can go to the
Docker Docs website and click on the link relating to your distro. This will take you
to the official Docker installation instructions, which are usually kept up to date. Be
warned though: the instructions on the Docker website tend to use the package manager
and require a lot more steps than the procedure we used above. In fact, if you open
a web browser to https://get.docker.com you will see that it's a shell script that does
all of the hard work of installation for you.
Warning: If you install Docker from a source other than the official
Docker repositories, you may end up with a forked version of Docker.
https://docs.docker.com/engine/installation/linux/
This is because some vendors and distros choose to fork the Docker
project and develop their own slightly customized versions. You need to
be aware of things like this if you are installing from custom repositories,
as you could unwittingly end up in a situation where you are running
a fork that has diverged from the official Docker project. This isn't a
problem as long as this is what you intend to do. If it is not what you
intend, it can lead to situations where modifications and fixes your
vendor makes do not make it back upstream into the official Docker
project. In these situations you will not be able to get commercial support
for your installation from Docker, Inc. or its authorized service partners.
Chapter Summary
In this chapter you saw how to install Docker on Windows 10, Mac OS X, and Linux.
Now that you know how to install Docker you are ready to start working with images
and containers.
Engine check
When you install Docker you get two major components:
the Docker client
the Docker daemon (sometimes called server)
The daemon implements the Docker Remote API. In a default Linux installation
the client talks to the daemon via a local IPC/Unix socket at /var/run/docker.sock.
You can test that the client and daemon are operating and can talk to each other with
the docker version command.
https://docs.docker.com/engine/reference/api/docker_remote_api/
$ docker version
Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:33:38 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:33:38 2016
 OS/Arch:      linux/amd64
As long as you get a response back from the Client and Server components, you
should be good to go. If you get an error response from the Server component, try
the command again with sudo in front of it: sudo docker version. If it works with
sudo, you will need to prefix the remainder of the commands in this chapter with
sudo.
Images
Now let's look at images.
Right now, the best way to think of a Docker image is as an object that contains
an operating system and an application. It's not massively different from a virtual
machine template. A virtual machine template is essentially a stopped virtual
machine. In the Docker world, an image is effectively a stopped container.
Run the docker images command on your Docker host.
$ docker images
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE
If you are working from a freshly installed Docker host, it will have no images and
the output will look like the example above.
Getting images onto your Docker host is called pulling. Pull the ubuntu:latest
image to your Docker host with the command below.
$ docker pull ubuntu:latest
latest: Pulling from library/ubuntu
952132ac251a: Pull complete
82659f8f1b76: Pull complete
c19118ca682d: Pull complete
8296858250fe: Pull complete
24e0251a0e2c: Pull complete
Digest: sha256:f4691c96e6bbaa99d...a2128ae95a60369c506dd6e6f6ab
Status: Downloaded newer image for ubuntu:latest
Run the docker images command again to see the ubuntu:latest image you just
pulled.
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
ubuntu       latest   bd3d4369aebc   11 days ago   126.6 MB
We'll get into the details of where the image is stored and what's inside of it in
the next chapter. For now it's enough to understand that it contains enough of
an operating system (OS), as well as all the code needed to run whatever application it's
designed for. The ubuntu image that we've pulled has a stripped-down version of the
Ubuntu Linux OS including a few of the common Ubuntu utilities.
It's worth noting as well that each image gets its own unique ID. When working
with the image, as we will do in the next step, you can refer to it using either its ID
or name.
Containers
Now that we have an image pulled locally on our Docker host, we can use the docker
run command to launch a container from it.
$ docker run -it ubuntu:latest /bin/bash
root@6dc20d508db0:/#
Look closely at the output from the command above. You should notice that your
shell prompt has changed. This is because your shell is now attached to the shell of
the new container - you are literally inside of the new container!
Let's examine that docker run command. docker run tells the Docker daemon to
start a new container. The -it flags tell the daemon to make the container interactive
and to attach our current shell to the shell of the container (we'll get more specific
about this in the chapter on containers). Next, the command tells Docker that we
want the container to be based on the ubuntu:latest image, and we tell it to run the
/bin/bash process inside the container.
Run the following ps command from inside of the container to list all running
processes.
root@6dc20d508db0:/# ps -elf
F S UID   PID  PPID  NI ADDR SZ WCHAN  STIME TTY      TIME CMD
4 S root    1     0   0 -  4560 wait   13:38 ?    00:00:00 /bin/bash
0 R root    9     1   0 -  8606 -      13:38 ?    00:00:00 ps -elf
As you can see from the output of the ps command, there are only two processes
running inside of the container:
PID 1. This is the /bin/bash process that we told the container to run with the
docker run command.
PID 9. This is the ps -elf process that we ran to list the running processes.
The presence of the ps -elf process in the output above could be a bit misleading,
as it is a short-lived process that dies as soon as the ps command exits. This means
that the only long-running process inside of the container is the /bin/bash process.
Press Ctrl-PQ to exit the container. This will land you back in the shell of your Docker
host. You can verify this by looking at your shell prompt.
Now that you are back at the shell prompt of your Docker host, run the ps -elf
command again.
$ ps -elf
F S UID      PID   PPID  NI ADDR SZ WCHAN      TIME CMD
4 S root       1      0   0 -  9407 -      00:00:03 /sbin/init
1 S root       2      0   0 -     0 -      00:00:00 [kthreadd]
1 S root       3      2   0 -     0 -      00:00:00 [ksoftirqd/0]
1 S root       5      2 -20 -     0 -      00:00:00 [kworker/0:0H]
1 S root       7      2   0 -     0 -      00:00:00 [rcu_sched]
<Snip>
0 R ubuntu 22783  22475   0 -  9021 -      00:00:00 ps -elf
Notice how many more processes are running on your Docker host compared to the
single long-running process inside of the container.
In a previous step you pressed Ctrl-PQ to exit your shell from the container. Doing
this from inside of a container will exit you from the container without killing it. You
can see all of the running containers on your system using the docker ps command.
$ docker ps
CNTNR ID  IMAGE          COMMAND    CREATED     STATUS     NAMES
0b3...41  ubuntu:latest  /bin/bash  7 mins ago  Up 7 mins  tiny_poincare
The output above shows a single running container. This is the container that you
created earlier. The presence of your container in this output proves that it's still
running. You can also see that it was created 7 minutes ago and has been running
for 7 minutes.
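You can get back into the container by attaching a new shell with docker exec. A sketch using the container name shown in the docker ps output above (the prompt is illustrative):

$ docker exec -it tiny_poincare bash
root@0b3...41:/#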
Notice that your shell prompt has changed again. You are back inside the container.
The format of the docker exec command is: docker exec <options> <container-name
or container-id> <command>. In our example we used the -it options to
attach our shell to the container's shell. We referenced the container by name and
told it to run the bash shell.
Exit the container again by pressing Ctrl-PQ.
Your shell prompt should be back to your Docker host.
Run the docker ps command again to verify that your container is still running.
$ docker ps
CNTNR ID  IMAGE          COMMAND    CREATED     STATUS     NAMES
0b3...41  ubuntu:latest  /bin/bash  9 mins ago  Up 9 mins  tiny_poincare
Stop the container and kill it using the docker stop and docker rm commands.
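A sketch of those two commands against the container from this example (each prints the container's name on success):

$ docker stop tiny_poincare
tiny_poincare
$
$ docker rm tiny_poincare
tiny_poincare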
Verify that the container was successfully deleted by running another docker ps
command.
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
5: Images
In this chapter we'll dive a bit deeper into Docker images. The aim of the game here
is to give you a solid working understanding of what Docker images are and how to
work with them.
As this is our first chapter in the technical section of the book, we're going to employ
the three-tiered approach where we split the chapter into three sections:
The TLDR: Two or three quick paragraphs that you can read while standing
in line for a coffee
The deep dive: The really long bit where we get into the detail
The commands: A quick list of the commands we learned
Congrats! You've now got half a clue what a Docker image is :-D Now it's time to
dig a bit deeper.
Figure 5.1
files required to run the container. However, containers are all about being fast and
lightweight. This means that the images they're built from are usually small and
stripped of all non-essential parts.
For example, Docker images tend not to ship with 6 different shells for you to choose
from - they'll usually ship with a single minimalist shell. They also don't contain a
kernel - all containers running on a Docker host share access to the Docker host's
kernel. For these reasons we sometimes say images contain "just enough operating
system".
An extreme example of how small Docker images can be is the official Alpine
Linux Docker image, which is currently down at around 5MB. That's not a typo! It
really is about 5 megabytes! A more typical example might be something
like the official Ubuntu Docker image, which is currently about 120-130MB.
Pulling images
A cleanly installed Docker host has no images in its local cache
(/var/lib/docker/<storage-driver> on Linux hosts). You can verify this with the
docker images command.
$ docker images
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE
The act of getting images onto a Docker host is called pulling. So if you want the
latest Ubuntu image on your Docker host, you'd have to pull it. Use the commands
below to pull the Alpine and Ubuntu images and then check their sizes.
If you haven't added your user account to the local docker Unix group,
you may need to add sudo to the beginning of all of the following
commands.
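A sketch of the two pulls and the follow-up check (the image IDs and sizes shown are the ones that appear elsewhere in this chapter and will differ on your system):

$ docker pull alpine:latest
$ docker pull ubuntu:latest
$
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
alpine       latest   4e38e38c8ce0   10 weeks ago   4.8 MB
ubuntu       latest   bd3d4369aebc   11 days ago    126.6 MB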
As you can see, both images are now present on your Docker host.
Let's look a bit closer at what we've just done.
We used the docker pull command to pull the images. As part of each command we
had to specify which image to pull. So let's take a minute to look at image naming.
To do that we need a bit of background on how we store images.
Image registries
Docker images are stored in image registries. The most common image registry
is Docker Hub. Other registries exist, including 3rd-party registries and secure
on-premises registries, but Docker Hub is the default, and it's the one we'll use in this
book.
Image registries contain multiple image repositories. Image repositories contain
images. That might be a bit confusing, so Figure 5.2 shows a picture of an image
registry containing 3 repositories, and each repository contains a few images.
Figure 5.2
Docker Hub also has the concept of official repositories and unofficial repositories.
As the name suggests, official repositories contain images that have been vetted by
Docker, Inc. This means they should contain up-to-date, high-quality, secure code that
is well documented and follows best practices.
Unofficial repositories are like the wild west - they're governed by none of the things
on the previous list. That's not saying everything in unofficial repositories is bad!
It's not! There's some great stuff in unofficial repositories. You just need to be very
careful before trusting code from them. To be honest, you should always be careful
when getting software from the internet - even images from official repositories.
Most of the popular operating systems and applications have their own official
repositories on Docker Hub. They're easy to spot because they live at the top level of
the Docker Hub namespace. The list below contains a few of the official repositories
and shows their URLs, which exist at the top level of the Docker Hub namespace:
nginx - https://hub.docker.com/_/nginx/
busybox - https://hub.docker.com/_/busybox/
redis - https://hub.docker.com/_/redis/
mongo - https://hub.docker.com/_/mongo/
On the other hand, my own personal images live in the wild west of unofficial
repositories and should not be trusted! Below are some examples of images in my
repositories:
nigelpoulton/tu-demo - https://hub.docker.com/r/nigelpoulton/tu-demo/
nigelpoulton/pluralsight-docker-ci - https://hub.docker.com/r/nigelpoulton/pluralsight-docker-ci/
Not only are images in my repositories not vetted, not kept up-to-date, not secure,
and not well documented, you should also notice that they don't live at the top
level of the Docker Hub namespace. My repositories all live within a second-level
namespace called nigelpoulton.
After all of that, we can finally look at how we address images on the Docker
command line.
In our example from earlier, we pulled an Alpine and an Ubuntu image with the
following two commands:
docker pull alpine:latest and docker pull ubuntu:latest
These two commands pull the images tagged as latest from the alpine and
ubuntu repositories.
The following examples show how to pull various different images from official
repositories:
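A sketch of what such pulls might look like (these particular images and tags are illustrative, not from the original text):

$ docker pull mongo:3.3.11
$ docker pull redis:latest
$ docker pull alpine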
A couple of points to note about the commands above. Firstly, if you do not specify
an image tag after the repository name, Docker will assume you are referring to the
image tagged as latest. Secondly, the latest tag doesn't have any mystical powers!
Just because an image is tagged as latest does not mean it is the most recent image
in a repository! Moral of the story: take care when using the latest tag!
Pulling images from an unofficial repository is essentially the same - you just need to
prepend the repository name with the Docker Hub username or organization name.
The example below shows how to pull the v2 image from the tu-demo repository
owned by a scary person whose Docker Hub account name is nigelpoulton.
$ docker pull nigelpoulton/tu-demo:v2
//This will pull the image tagged as `v2` from the `tu-demo` repository
//within the namespace of my personal Docker Hub account.
If you want to pull images from 3rd-party registries, you need to prepend the
repository name with the DNS name of the registry. For example, if the image in the
example above was in the Google Container Registry (GCR), you'd need to add gcr.io
before the repository name as follows: docker pull gcr.io/nigelpoulton/tu-demo:v2.
You may need to have an account on 3rd-party registries and be logged in before you
can pull images from them.
Second. Look closely at the IMAGE ID column in the output of the docker images
command. You'll see that there are only two unique image IDs. This means that
even though three tags were pulled, only two images were actually downloaded.
This is because two of the tags refer to the same image. Put another way, one of
the images has two tags. If you look closely you'll see that the v1 and latest tags
have the same IMAGE ID. This means they're two tags of the same image.
This is a perfect example of the warning we issued earlier about the latest tag. As
we can see, the latest tag in this example refers to the same image as the v1 tag, not
the v2 tag. This means it's pointing to the older of the two images - not the newest.
latest is an arbitrary tag and is not guaranteed to point to the newest image in a
repository.
Figure 5.3
There are a few ways to see and inspect the layers that make up an image, and we've
already seen one of them. Let's take a second look at the output of the docker pull
ubuntu:latest command from earlier:
$ docker pull ubuntu:latest
latest: Pulling from library/ubuntu
952132ac251a: Pull complete
82659f8f1b76: Pull complete
c19118ca682d: Pull complete
8296858250fe: Pull complete
24e0251a0e2c: Pull complete
Digest: sha256:f4691c96e6bbaa99d...28ae95a60369c506dd6e6f6ab
Status: Downloaded newer image for ubuntu:latest
Each line in the output above that ends with Pull complete represents a layer in
the image that was pulled. As we can see, this image has 5 layers. Figure 5.4 below
shows this as a picture.
Figure 5.4
Another way to see the layers that make up an image is to inspect the image with
the docker inspect command. The example below inspects the same ubuntu:latest
image.
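A sketch of the command with its output trimmed to the layer list (the hashes are abbreviated placeholders, not real values):

$ docker inspect ubuntu:latest
[
    {
        "Id": "sha256:bd3d4369aebc...",
        <Snip>
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:3a4e...",
                "sha256:91bf...",
                "sha256:a588...",
                "sha256:8451...",
                "sha256:1f65..."
            ]
        }
    }
]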
The trimmed output shows the 5 layers again. Only this time they're shown using their
SHA256 hashes. The point being, both commands show that the image has 5 layers.
Note: The docker history command shows the build history of an
image and is not a list of the layers in the image. For example, some
commands that appear in an image's build history do not result in image
layers being created. Some of these commands (Dockerfile instructions)
include MAINTAINER, ENV, EXPOSE and CMD. Instead of
these commands creating new image layers, their values are stored as
part of the image's metadata.
Every layer in a Docker image gets its own unique ID. This is a cryptographic hash
of the layer's content. This means that the value of the crypto hash is determined by
the contents of the layer - changing the contents of the layer changes its hash.
Using cryptographic content hashes improves security, avoids ID collisions that could
occur if they were randomly generated, and gives us a way to guarantee data integrity
after operations such as docker pull.
All Docker images start with a base layer, and as changes are made and new content is
added, new layers are added on top. As an over-simplified example, you might create
a brand-new image based off of Ubuntu Linux 16.04. This would be your image's first
layer. If you later add the Python package, this would be added as a second layer on
top of your image. If you then added a security patch, this would be added as a
third layer at the top. Your image would now have three layers as shown in Figure
5.5 below.
Figure 5.5
It's important to understand that as additional layers are added, the image becomes
the combination of all of the layers. Take a simple example of two layers as shown
in Figure 5.6. Each layer has 3 files, but the overall image has 6 files as it is the
combination of both layers.
Figure 5.6
I've shown the image layers in Figure 5.6 in a slightly different way to
previous figures. This is just to make showing the files easier.
In the slightly more complex example of the three-layered image in Figure 5.7, the
overall image still only ends up with 6 files. This is because file 7 in the top layer is an
updated version of file 5 directly below. In this situation, the file in the higher layer
obscures the file directly below it. This allows updated versions of files to be added
as new layers to the image.
53
5: Images
Figure 5.7
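A sketch of the kind of pull being described here - pulling every tag in a repository with docker pull -a (the layer IDs are abbreviated and illustrative):

$ docker pull -a nigelpoulton/tu-demo
latest: Pulling from nigelpoulton/tu-demo
237d5fcd25cf: Pull complete
<Snip>
v1: Pulling from nigelpoulton/tu-demo
237d5fcd25cf: Already exists
<Snip>
v2: Pulling from nigelpoulton/tu-demo
237d5fcd25cf: Already exists
eab5aaac65de: Pull complete
<Snip>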
Notice the lines ending in Already exists. This is because Docker is smart enough
to recognize when it's being asked to pull an image layer that it already has a copy of
locally. In this example, Docker pulled the image tagged as latest first. Then, when
it went to pull the v1 and v2 images, it noticed that it already had some of the layers
that make up those images. This happens because the three images in this repository
are almost identical except for the top layer.
Docker on Linux supports many different filesystems and storage drivers. Each is
free to implement image layering, copy-on-write behavior, and image layer sharing
in its own way. However, the overall result and user experience is essentially the
same.
in your production environment. You pull the image and apply a fix. But then comes
the mistake: you push the fixed image back to its repository with the same tag as the
vulnerable image! How are you going to know which of your production systems
are running the vulnerable image and which are running the patched image? They
both have the same tag!
This is where image digests come to the rescue.
Docker 1.10 introduced a new content-addressable storage model. As part of this
new model, all images now get a cryptographic content hash. For the purposes of this
discussion we'll refer to this hash as the digest. Because the digest is a hash of the
contents of the image, it is not possible to change the contents of the image without
the digest also changing. Put another way: digests are immutable. Clearly this avoids
the problem we just talked about.
Every time you pull an image, the docker pull command will include the image's
digest as part of the return code. You can also view the digests of images in your
Docker host's local cache by adding the --digests flag to the docker images
command. These are both shown in the following example.
$ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
e110a4a17941: Pull complete
Digest: sha256:3dcdb92d7432d56604d...6d99b889d0626de158f73a
Status: Downloaded newer image for alpine:latest
$
$ docker images --digests alpine
REPOSITORY  TAG     DIGEST              IMAGE ID      CREATED        SIZE
alpine      latest  sha256:3dcd...f73a  4e38e38c8ce0  10 weeks ago   4.8 MB

The output above shows the digest for the alpine image as sha256:3dcdb92d7432...889d0626de158f73a.
Now that we know the digest of the image, we can use it when pulling the image
again. This will ensure that we get exactly the image we expect!
At the moment, there is no native Docker command or sub-command that will retrieve
the digest of an image from a remote registry such as Docker Hub. This means the
only way to determine the digest of an image is to pull it by tag and then make a
note of its digest. This may change in the future.
The example below deletes the alpine:latest image from your Docker host and then
shows how to pull it again using its digest instead of its tag.
$ docker rmi alpine:latest
Untagged: alpine:latest
Untagged: alpine@sha256:3dcdb92d7432...313626d99b889d0626de158f73a
Deleted: sha256:4e38e38c8ce0b8d9...3b0bfe8cfa2321aec4bba
Deleted: sha256:4fe15f8d0ae69e16...b265cd2e328e15c6a869f
$
$ docker pull alpine@sha256:3dcdb92...b313626d99b889d0626de158f73a
sha256:3dcdb92d7432d...e158f73a: Pulling from library/alpine
e110a4a17941: Pull complete
Digest: sha256:3dcdb92d7432d56604...47b313626d99b889d0626de158f73a
Status: Downloaded newer image for alpine@sha256:3dcd...b889d0626de158f73a
Deleting Images
When you no longer need an image, you can delete it from your Docker host with
the docker rmi command. rmi is short for remove image.
Delete the Alpine image pulled in the previous step with the docker rmi command.
The example below addresses the image by its ID; this might be different on your
system.
$ docker rmi 4e38e38c8ce0
Untagged: alpine:latest
Untagged: alpine@sha256:3dcdb92d7432d56..d99b889d0626de158f73a
Deleted: sha256:4e38e38c8ce0b8d90...3b0bfe8cfa2321aec4bba
Deleted: sha256:4fe15f8d0ae69e169...b265cd2e328e15c6a869f
If the image you are trying to delete is in use by a running container, you will not be
able to delete it. Stop and delete any containers before trying the remove operation
again.
A handy shortcut for cleaning up a system and deleting all images on a Docker host
is to run the docker rmi command and pass it a list of all image IDs on the system
by calling docker images with the -q flag, as shown below.
To understand how this works, download a couple of images and then run docker
images -q.
$ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
e110a4a17941: Pull complete
Digest: sha256:3dcdb92d7432d5...3626d99b889d0626de158f73a
Status: Downloaded newer image for alpine:latest
$
$ docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
952132ac251a: Pull complete
82659f8f1b76: Pull complete
c19118ca682d: Pull complete
8296858250fe: Pull complete
24e0251a0e2c: Pull complete
Digest: sha256:f4691c96e6bba...128ae95a60369c506dd6e6f6ab
Status: Downloaded newer image for ubuntu:latest
$
$ docker images -q
bd3d4369aebc
4e38e38c8ce0
See how docker images -q returns a list containing just the image IDs of all images
pulled locally on the system. Passing this list to docker rmi will therefore delete
all images on the system, as shown below.
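A sketch of the shortcut (-f forces the removal; output trimmed, with abbreviated hashes):

$ docker rmi -f $(docker images -q)
Untagged: ubuntu:latest
Deleted: sha256:bd3d4369aebc...
<Snip>
Untagged: alpine:latest
Deleted: sha256:4e38e38c8ce0...
<Snip>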
Let's remind ourselves of the major commands we use to work with Docker images.
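A short recap, summarizing the sections above in my own words:

docker pull - downloads an image from a registry into your Docker host's local cache
docker images - lists the images in the local cache (--digests shows their digests, -q lists just their IDs)
docker inspect - shows the details of an image, including its layers and metadata
docker rmi - deletes an image from your Docker host's local cache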
Chapter summary
In this chapter we learned about Docker images. We learned that images are made up
of one or more read-only layers that, when stacked together, make up the overall image.
We used the docker pull command to pull images into our Docker host's local cache,
and we covered image naming conventions. Then we learned about image layers and
how they can be shared among multiple images. We then covered the most common
commands used for working with images.
In the next chapter we'll take a similar tour of containers - the runtime cousin of
images.
6: Containers
Now that we know a bit about images, the next logical step is to get into containers. As
this is a book about Docker, we'll be talking specifically about Docker containers.
However, the Docker project has recently been hard at work implementing the
image and container specs published by the Open Container Initiative (OCI) at
https://www.opencontainers.org. This means some of what you learn here will apply
to other container runtimes that are OCI compliant.
Let's go and learn about containers!
Figure 6.1
The most basic way to start a container is with the docker run command. The
command can take a lot of arguments, but in its most basic form you tell it an image
to use and a command to run: docker run <image> <command>. This next command
will start an Ubuntu Linux container running the Bash shell: docker run ubuntu
/bin/bash.
Containers run until the command they are executing exits. You can manually stop
a container with the docker stop command, and then restart it with docker start.
To get rid of a container forever you have to explicitly delete it using docker rm, as sketched below.
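A quick sketch of that lifecycle from start to finish (the container name and the sleep command are illustrative; rm -f force-deletes the container even though it's running):

$ docker run -d --name demo ubuntu sleep 300
44a2...
$ docker stop demo
demo
$ docker start demo
demo
$ docker rm -f demo
demo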
That's the elevator pitch! Now let's get into the detail...
the advantages the container model has over the VM model. But I'm
guessing a lot of you will be VM experts with a lot invested in the VM
ecosystem. And I'm guessing that one or two of you might want to fight
me over some of the things I say. So let me be clear... I'm a big guy
and I'd beat you down in hand-to-hand combat :-D Just kidding. What I
meant to say was that I'm not trying to destroy your empire or call your
baby ugly! I'm trying to help. The whole reason for me writing this book
is to help you get started with Docker and containers!
Anyway, here we go.
Containers vs VMs
Containers and VMs both need a host to run on. This can be anything from your
laptop, to a bare metal server in your data center, all the way up to an instance in the
public cloud. In this example we'll assume a single physical server that we need to
run 4 business applications on.
In the VM model, the physical server is powered on and the hypervisor boots (we're
skipping the BIOS and bootloader code etc.). Once the hypervisor boots, it lays claim
to all physical resources on the system such as CPU, RAM, storage, and NICs. The
hypervisor then carves these hardware resources into virtual versions that look, smell,
and feel exactly like the real thing. It then packages them into a software construct
called a virtual machine (VM). We then take those VMs and install an operating
system and application on each one. We said we had a single physical server and
needed to run 4 applications, so we'd create 4 VMs, install 4 operating systems, and
then install the 4 applications. When it's all done it looks a bit like Figure 6.2.
Figure 6.2

The container model is a bit different. The server is powered on and the OS boots, claiming all of the hardware. We then install a container engine such as Docker. The container engine carves OS resources (process tree, filesystem, network stack etc.) into isolated constructs called containers, and we run an application inside each one. With our requirement of 4 applications, we'd create 4 containers and run a single application in each, as shown in Figure 6.3.

Figure 6.3
At a high level we can say that hypervisors perform hardware virtualization - they carve up physical hardware resources into virtual versions. Whereas containers perform OS virtualization - they carve up OS resources into virtual versions.
The VM tax
Let's build on what we just covered and drill into one of the main problems with the hypervisor model.
We started out with the same physical server and requirement to run 4 business
applications. In both models we installed either an OS or a hypervisor (obviously a
hypervisor is a type of OS that is highly tuned for VMs). So far the models are almost
identical. But this is where the similarities stop.
The VM model then carves low-level hardware resources into VMs. Each VM is a
software construct containing virtual CPU, virtual RAM, virtual disk etc. As such,
every VM needs its own OS to claim, initialize and manage all of those virtual
resources. And sadly, every OS comes with its own set of baggage and overheads.
For example, every OS consumes a slice of CPU, a slice of RAM, a slice of storage
etc. Most need their own licenses as well as people and infrastructure to patch and
upgrade them. Each OS also presents a sizable attack surface. We often refer to all of
this as the OS tax, or VM tax - every OS you install consumes resources!
The container model only has a single kernel, down at the host OS layer. It's possible to run tens or hundreds of containers on a single host, with every container sharing that single OS kernel. That means a single OS that consumes CPU, RAM, and storage. A single OS that needs licensing. A single OS that needs upgrading and patching. And a single OS kernel presenting an attack surface. All in all, a single OS tax bill!
That might not seem like a lot in our example of a single server needing to run 4 business applications. But when we're talking about hundreds or thousands of apps (VMs or containers) it can be game changing.
Another thing to consider is that because a container isn't a full-blown OS, it starts much faster than a VM. Remember, there's no kernel inside of a container that needs locating, decompressing, and initializing - not to mention all of the hardware enumeration and initialization associated with a normal kernel bootstrap. None of that is needed when starting a container! The single shared kernel, down at the OS level, is already started! Net result: containers can start in less than a second. The only thing that has an impact on container start time is the time it takes to start the application it's running.
This all amounts to the container model being leaner and more efficient than the VM model. We can pack more applications onto fewer resources, start them faster, and pay less in licensing and admin costs, as well as presenting less of an attack surface to the dark side. All of which is better for the business!
With that theory out of the way, let's have a play around with some containers.
Running containers
To follow along with these examples you'll need a working Docker host. For most of the commands it won't make a difference whether it's Linux or Windows. However, when writing the book I used a Docker host running Ubuntu 16.04 for all examples.
$ docker version
Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:33:38 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   23cf638
 Built:        Thu Aug 18 05:33:38 2016
 OS/Arch:      linux/amd64
As long as you get a response back in the Client and Server sections you should be good to go. If you get an error code in the Server section there's a good chance that the docker daemon (server) isn't running, or that your user account doesn't have permission to access it.
If your user account doesn't have permission to access the daemon, you need to make sure it's a member of the local docker Unix group. If it isn't, you can add it with usermod -aG docker <user>, and then you'll have to log out and log back in to your shell for the changes to take effect.
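For example, to add a hypothetical user account called nigel to the docker group:

$ sudo usermod -aG docker nigel

Remember to log out and back in (or start a new login shell) for the new group membership to take effect.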
If your user account is already a member of the local docker group, then the problem might be that the Docker daemon isn't running. To check the status of the Docker daemon, run one of the following commands depending on your Docker host's operating system.
//Run this command on Linux systems not using Systemd
$ service docker status
docker start/running, process 29393

//Run this command on Linux systems that are using Systemd
$ systemctl is-active docker
active

//Run this command on Windows Server 2016 systems from a PowerShell window
> Get-Service docker

Status   Name     DisplayName
------   ----     -----------
Running  docker   Docker Engine
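Let's start a simple interactive container. The examples that follow assume a container started with a command along these lines - the -it flags attach your terminal to the container's terminal, and the container ID in your prompt will differ:

$ docker run -it ubuntu:latest /bin/bash
root@3027eb644874:/#

Notice that your shell prompt has changed, showing that you are now inside the container. Let's have a look around.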
root@3027eb644874:/# ls -l
total 64
drwxr-xr-x   2 root root 4096 Aug 19 00:50 bin
drwxr-xr-x   2 root root 4096 Apr 12 20:14 boot
drwxr-xr-x   5 root root  380 Sep 13 00:47 dev
drwxr-xr-x  45 root root 4096 Sep 13 00:47 etc
drwxr-xr-x   2 root root 4096 Apr 12 20:14 home
drwxr-xr-x   8 root root 4096 Sep 13  2015 lib
drwxr-xr-x   2 root root 4096 Aug 19 00:50 lib64
drwxr-xr-x   2 root root 4096 Aug 19 00:50 media
drwxr-xr-x   2 root root 4096 Aug 19 00:50 mnt
drwxr-xr-x   2 root root 4096 Aug 19 00:50 opt
dr-xr-xr-x 129 root root    0 Sep 13 00:47 proc
drwx------   2 root root 4096 Aug 19 00:50 root
drwxr-xr-x   6 root root 4096 Aug 26 18:50 run
drwxr-xr-x   2 root root 4096 Aug 26 18:50 sbin
drwxr-xr-x   2 root root 4096 Aug 19 00:50 srv
dr-xr-xr-x  13 root root    0 Sep 13 00:47 sys
drwxrwxrwt   2 root root 4096 Aug 19 00:50 tmp
drwxr-xr-x  11 root root 4096 Aug 26 18:50 usr
drwxr-xr-x  13 root root 4096 Aug 26 18:50 var
root@3027eb644874:/#
root@3027eb644874:/# ping www.docker.com
bash: ping: command not found
root@3027eb644874:/#
Container processes
When we started the container in the previous section we told it to run the Bash shell
(/bin/bash). This makes the Bash shell the one and only process running inside
of the container. You can see this by running ps -elf from inside the container.
root@3027eb644874:/# ps -elf
F S UID   PID  PPID  NI ADDR   SZ WCHAN  STIME TTY  TIME      CMD
4 S root    1     0   0 -    4558 wait   00:47 ?    00:00:00  /bin/bash
0 R root   11     1   0 -    8604 -      00:52 ?    00:00:00  ps -elf
Although it might look like there are two processes running in the output above, there are not. The first process in the list, with PID 1, is the Bash shell we told the container to run. The second process in the list is the ps -elf command we ran to produce the list. This is a short-lived process that has already exited by the time the output is displayed on the terminal. Long story short, this container is running a single process - /bin/bash.
Note: Windows containers are slightly different and tend to run quite a
few processes.
This means that if you type exit to exit the Bash shell, the container will terminate. The reason for this is that a container cannot exist without a running process - killing the Bash shell would kill the container's only process, resulting in the container also being killed.
Press Ctrl-PQ to exit the container without terminating it. Doing this will place
you back in the shell of your Docker host and leave the container running in
the background. You can use the docker ps command to view the list of running
containers on your system.
$ docker ps
CNTNR ID  IMAGE          COMMAND    CREATED  STATUS    NAMES
302...74  ubuntu:latest  /bin/bash  6 mins   Up 6mins  sick_montalcini
It's important to understand that this container is still running, and you can re-attach your terminal to it with the docker exec command.
$ docker exec -it 3027eb644874 bash
root@3027eb644874:/#
As you can see, the shell prompt has changed back to the container. If you run the ps -elf command again you will now see two Bash processes. This is because the docker exec command created a new Bash process and attached to that. This means that typing exit from this Bash prompt will not terminate the container, because the original Bash process with PID 1 will continue running.
Type exit to leave the container and verify it's still running with a docker ps.
If you are following along with the examples on your own Docker host you should
stop and delete the container with the following two commands (you will need to
substitute the ID of your container).
$ docker stop 3027eb64487
3027eb64487
$ docker rm 3027eb64487
3027eb64487
Container lifecycle
It's a common myth that containers can't persist data. They can!
A big part of the reason people think containers aren't good for persistent workloads or persisting data is because they're so freaking good at non-persistent stuff. But being good at one thing doesn't mean you can't do other things. A lot of VM admins out there will remember companies like Microsoft and Oracle telling you that you couldn't run their applications inside of VMs - or at least they wouldn't support you if you did. I personally wonder if there's a little bit of something similar with the move to containerization - are there people out there trying to protect their empires of persistent data and workloads from what they perceive as the threat of containers?
Anyway, in this section we'll look at the lifecycle of a container - from birth, through work and vacations, to eventual death.
We've already seen how to start containers with the docker run command. Let's start another one so we can walk it through its entire lifecycle.
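A command along the following lines creates it - we use -it for an interactive container, --name to give it a name, and the ubuntu:latest image running /bin/bash (this is a reconstruction; your container ID will differ):

$ docker run -it --name percy ubuntu:latest /bin/bash
root@9cb2d2fd1d65:/#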
That's our container created, and we named it percy for persistent :-S
Now let's put it to work by writing some data to it.
From within the shell of your new container, follow the procedure below to write some data to a new file in the tmp directory and verify that the write operation succeeded.
root@9cb2d2fd1d65:/# cd tmp
root@9cb2d2fd1d65:/tmp#
root@9cb2d2fd1d65:/tmp# ls -l
total 0
root@9cb2d2fd1d65:/tmp#
root@9cb2d2fd1d65:/tmp# echo "sysadmins FTW" > newfile
root@9cb2d2fd1d65:/tmp#
root@9cb2d2fd1d65:/tmp# ls -l
total 4
-rw-r--r-- 1 root root 14 Sep 13 04:22 newfile
root@9cb2d2fd1d65:/tmp#
root@9cb2d2fd1d65:/tmp# cat newfile
sysadmins FTW
When you're done, press Ctrl-PQ to get back to the shell of your Docker host, then stop the container. You can use the container's name or ID with the docker stop command. The format is docker stop <container-id or container-name>.
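Assuming you named your container percy as above, stopping it looks like this:

$ docker stop percy
percy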
Now run a docker ps.
$ docker ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
The container is not listed in the output above because it's in the stopped state. Run the same command again, only this time add the -a flag to show all containers, including those that are stopped.
$ docker ps -a
CNTNR ID  IMAGE          COMMAND    CREATED  STATUS      NAMES
9cb...65  ubuntu:latest  /bin/bash  4 mins   Exited (0)  percy
Now we can see the container showing as Exited (0). Stopping a container is like stopping a virtual machine. Although it's not currently running, its entire configuration and contents still exist on the filesystem of the Docker host, and it can be restarted at any time.
Let's use the docker start command to bring it back from vacation.
$ docker start percy
percy
$
$ docker ps
CONTAINER ID  IMAGE          COMMAND      CREATED  STATUS     NAMES
9cb2d2fd1d65  ubuntu:latest  "/bin/bash"  4 mins   Up 3 secs  percy
The stopped container is now restarted. Time to verify that the file we created earlier
still exists. Connect to the restarted container with the docker exec command.
$ docker exec -it percy bash
root@9cb2d2fd1d65:/#
Your shell prompt will change to show that you are now operating within the
namespace of the container.
Verify that the file you created earlier is still there and contains the data you wrote
to it.
root@9cb2d2fd1d65:/# cd tmp
root@9cb2d2fd1d65:/tmp# ls -l
total 4
-rw-r--r-- 1 root root 14 Sep 13 04:22 newfile
root@9cb2d2fd1d65:/tmp#
root@9cb2d2fd1d65:/tmp# cat newfile
sysadmins FTW
As if by magic the file you created is still there and the data it contains is exactly
how you left it! This proves that stopping a container does not destroy the container
or the data inside of it.
Now I should point out that there are better and more recommended ways to store
data in containers. But at this stage of our journey I think this is an effective example
of the persistent nature of containers.
So far I think you'd be hard pressed to draw a major difference in the behavior of a container vs a VM.
Now let's kill the container and delete it from our system.
It is possible to delete a running container with a single command by passing the -f flag to docker rm. However, it's considered a best practice to take the two-step approach of stopping the container first and then deleting it. This gives the application/process that the container is running a fighting chance of stopping cleanly. More on this in a second.
The example below will stop the percy container, delete it, and verify the operation. If your terminal is still attached to the percy container, you will need to get back to your Docker host's terminal by pressing Ctrl-PQ.
$ docker stop percy
percy
$
$ docker rm percy
percy
$
$ docker ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
The container is now deleted - literally wiped off the face of the planet. If it was
a good container, it becomes a VM in the afterlife. If it was a naughty container it
becomes a dumb terminal :-D
To summarize the lifecycle of a container: you can stop, start, pause, and restart a container as many times as you want, and it'll all happen really fast. The container and its data will always be safe. It's not until you explicitly kill a container that you run any chance of losing its data. And even then, if you're storing data in a volume, that data will persist even after the container has gone.
Let's quickly mention why we recommended the two-stage approach of stopping the container before deleting it.
When you issue a docker stop, a SIGTERM signal is sent to the main process inside the container. This gives the process a chance to clean things up and gracefully shut itself down. If it doesn't exit within 10 seconds it will receive a SIGKILL. This is effectively the bullet to the head. But hey, it got 10 seconds to sort itself out first.
docker rm <container> -f doesn't bother asking nicely with a SIGTERM, it just goes straight to the SIGKILL. Like we said a second ago, this is like creeping up from behind and smashing it over the head. I'm not a violent person, by the way!
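With that out of the way, let's look at starting a container in the background. The container in the next example will have been started with a command along these lines - the name, port mapping, and image match the docker ps and docker inspect output that follow, and the full container ID printed will differ:

$ docker run -d --name webserver -p 80:8080 \
  nigelpoulton/pluralsight-docker-ci
6efa1838cd51...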
Notice that your shell prompt hasn't changed. This is because we started this container in the background with the -d flag. Starting a container in the background does not attach it to your terminal.
This example threw a few more arguments at the docker run command, so let's take a quick look at them.
We know docker run starts a new container. But this time we give it the -d flag instead of -it. -d tells the container to run in the background rather than attaching to your terminal in the foreground. The d stands for daemon mode, and -d and -it are mutually exclusive. This means you can't use both on the same container - for obvious reasons you cannot start a container in the background and in the foreground at the same time.
After that, we name the container and then give it -p 80:8080. The -p flag maps ports on the Docker host to ports in the container. This time we're mapping port 80 on the Docker host to port 8080 inside the container. This means that traffic hitting the Docker host on port 80 will be directed to port 8080 inside of the container. It just so happens that the image we're using for this container defines a web service that listens on port 8080. This means our container will come up running a web server listening on port 8080.
Finally we tell it which image to use.
Running a docker ps command will show the container as running and show the ports that are mapped. It's important to know that port mappings are expressed as host-port:container-port.
$ docker ps
CONTAINER ID  COMMAND        STATUS     PORTS                 NAMES
6efa1838cd51  /bin/sh -c...  Up 2 mins  0.0.0.0:80->8080/tcp  webserver
We've removed some of the columns from the output above to help with readability.
Now that the container is running and ports are mapped, we can connect to the
container by pointing a web browser at the IP address or DNS name of the Docker
host on port 80. Figure 6.4 shows the web page that is being served up by the
container.
Figure 6.4
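If you'd rather test from the command line, you can also hit the published port from the Docker host itself (assuming curl is installed), which should return the HTML of the same page:

$ curl localhost:80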
The same docker stop, docker pause, docker start, and docker rm commands can
be used on the container, and the same rules of persistence apply - stopping or pausing
the container does not destroy the container or any data stored in it.
Inspecting containers
In the previous example you might have noticed that we didn't specify a command for the container when we issued the docker run. Yet the container ran a simple web service. How did this happen?
When building a Docker image it's possible to embed a default command or process that you want containers using the image to run. If we run a docker inspect command against the image we used to run our container, we'll be able to see the command/process that the container will run when it starts.
$ docker inspect nigelpoulton/pluralsight-docker-ci
[
    {
        "Id": "sha256:07e574331ce3768f30305519...49214bf3020ee69bba1",
        "RepoTags": [
            "nigelpoulton/pluralsight-docker-ci:latest"
            <Snip>
        ],
        "Cmd": [
            "/bin/sh",
            "-c",
            "#(nop) CMD [\"/bin/sh\" \"-c\" \"cd /src \u0026\u0026 node \
            ./app.js\"]"
        ],
        <Snip>
We've snipped the output to make it easier to find the information we're interested in.
The entries after Cmd show the command(s) that the container will run unless you override them with a different command as part of docker run. If you remove all of the shell escapes in the example above (\u0026 is the escaped form of the & character), you get the following command: /bin/sh -c "cd /src && node ./app.js".
It's common to build images with default commands like this, as it makes starting containers easier, forces a default behavior, and is a form of self-documentation for the image.
That's us done for the examples in this chapter. Let's see a quick way to tidy our system up.
Tidying up
Here we're going to show you the simplest and quickest way to get rid of every running container on your Docker host. Be warned though: the procedure will forcibly destroy all containers without giving them a chance to clean up. This should never be performed on production systems or systems running important containers.
Run the following command from the shell of your Docker host to delete all
containers.
$ docker rm $(docker ps -aq) -f
6efa1838cd51
In this example we only had a single container running, so only one was deleted (6efa1838cd51). However, the command works the same way as the docker rmi $(docker images -q) command we used in the previous chapter to delete all images on a single Docker host. We already know the docker rm command deletes containers. Passing it $(docker ps -aq) as an argument effectively passes it the ID of every container on the system. The -f flag forces the operation so that running containers will also be destroyed. Net result: all containers, running or stopped, will be destroyed and removed from the system.
Let's remind ourselves of the major commands we use to work with Docker containers.
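docker run starts new containers. In its most basic form it takes an image and a command: docker run <image> <command>. The -it flags start the container interactively and attach it to your terminal, while the -d flag starts it in the background.
docker ps lists all containers in the running state. If you add the -a flag you will also see containers in the stopped (Exited) state.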
docker stop will stop a running container and put it in the Exited (0) state. It does this by issuing a SIGTERM to the process with PID 1 inside of the container. If the process has not cleaned up and stopped within 10 seconds, a SIGKILL will be issued to forcibly stop the container. docker stop accepts container IDs and container names as arguments.
docker start will restart a stopped (Exited) container. You can give docker
start the name or ID of a container.
docker rm will delete a stopped container. You can specify containers by name
or ID. It is recommended that you stop a container with the docker stop
command before deleting it with docker rm.
docker inspect will show you detailed configuration and runtime information about a container. It accepts container names and container IDs as its main argument. You can also use docker inspect with Docker images.
Chapter summary
In this chapter we compared and contrasted the container and VM models. We looked
at the OS tax problem of the VM model and saw how the container model can bring
huge efficiencies in much the same way as the VM model brought huge advantages
over the physical model.
We saw how to use the docker run command to start a couple of simple containers,
and we saw the difference between interactive containers in the foreground versus
containers running in the background.
We know that killing the process with PID 1 inside of a container will kill the container. And we've seen how to start, stop, and delete containers.
We finished the chapter using the docker inspect command to view detailed
configuration metadata.
So far so good!
In the next chapter we'll see how to orchestrate containerized applications across multiple Docker hosts with some game-changing technologies introduced in Docker 1.12.
7: Swarm mode
Now that we know how to install Docker, pull images, and work with containers, the next thing we need is a way to work with it all at scale. That's where orchestration and swarm mode come into the picture.
As usual, we'll take a three-stage approach, with a high-level explanation at the top, followed by a longer section with all the detail and some examples, and we'll finish things up with a list of the main commands we learned.
Figure 7.1
Backward compatibility
Introducing swarm mode was massively important for Docker, Inc. But so was maintaining backward compatibility! This led them to make swarm mode entirely optional in Docker 1.12. A standard installation of the Docker Engine defaults to running in single-engine mode, ensuring 100% backward compatibility with previous versions of Docker.
This is great news if you're a user or developer of 3rd party clustering tools and the like. As long as you keep Docker 1.12 and later in single-engine mode, all of your existing tools and apps will work as normal! However, as soon as you take the plunge and put your Docker Engine into swarm mode, you risk breaking those 3rd party tools and apps.
In short, putting a Docker Engine into swarm mode gives you all of the latest orchestration goodness; it just comes at the price of backward compatibility.
Lab setup
For the remainder of this chapter we'll build the lab shown in Figure 7.2, with 6 nodes configured as 3 managers and 3 workers. Each node is running Linux with Docker 1.12 or higher. All nodes in the lab can communicate over the network.
Figure 7.2
The names and IP addresses are not important and can be different in your lab. If
you are following along with the examples, just remember to substitute them with
your own.
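The process starts by initializing the swarm from the first manager. The first two steps are sketched below with abridged output, assuming mgr1 has the IP address 10.0.0.1 from Figure 7.2 (matching the join commands later in this section); your node ID will differ.

1. Log on to mgr1 and initialize a new swarm with the docker swarm init command.

$ docker swarm init \
--advertise-addr 10.0.0.1:2377 \
--listen-addr 10.0.0.1:2377
Swarm initialized: current node (d21ly...9qzkx) is now a manager.

2. List the nodes in the swarm with the docker node ls command.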
$ docker node ls
ID               HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
d21ly...9qzkx *  mgr1      Ready   Active        Leader
Notice that mgr1 is currently the only node in the swarm, and it is listed as the Leader. We'll come back to this in a second.
3. From mgr1 run the docker swarm join-token command to extract the commands and tokens required to add new workers and managers to the swarm.
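A sketch of what that looks like (tokens abridged; the exact wording of the output may differ slightly between versions):

$ docker swarm join-token worker
To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-0uahebax...c87tu8dx2c \
    10.0.0.1:2377

$ docker swarm join-token manager
To add a manager to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-0uahebax...ue4hv6ps3p \
    10.0.0.1:2377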
Notice that the commands to join a worker and a manager are identical apart
from the join tokens (SWMTKN). This means that whether a node joins as a
worker or a manager depends entirely on which token you use when joining
it.
4. Log on to wrk1 and join it to the swarm using the docker swarm join
command with the token used for joining workers.
$ docker swarm join \
--token SWMTKN-1-0uahebax...c87tu8dx2c \
10.0.0.1:2377 \
--advertise-addr 10.0.0.4:2377 \
--listen-addr 10.0.0.4:2377
This node joined a swarm as a worker.
5. Repeat the previous step on wrk2 and wrk3, remembering to use their respective IP addresses for the --advertise-addr and --listen-addr flags.
6. Log on to mgr2 and join it to the swarm as a manager using the docker swarm join command with the token used for joining managers.
$ docker swarm join \
--token SWMTKN-1-0uahebax...ue4hv6ps3p \
10.0.0.1:2377 \
--advertise-addr 10.0.0.2:2377 \
--listen-addr 10.0.0.2:2377
This node joined a swarm as a manager.
7. Repeat the previous step on mgr3, remembering to use mgr3's IP address for the --advertise-addr and --listen-addr flags.
8. List the nodes in the swarm by running docker node ls from any of the
manager nodes in the swarm.
$ docker node ls
ID               HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
0g4rl...babl8 *  mgr2      Ready   Active        Reachable
2xlti...l0nyp    mgr3      Ready   Active        Reachable
8yv0b...wmr67    wrk1      Ready   Active
9mzwf...e4m4n    wrk3      Ready   Active
d21ly...9qzkx    mgr1      Ready   Active        Leader
e62gf...l5wt6    wrk2      Ready   Active
Congratulations! You've just created a 6-node swarm with 3 managers and 3 workers. As part of the process you put the Docker Engine on each node into swarm mode. As an added bonus, the swarm is automatically secured with TLS.
If you look in the MANAGER STATUS column in the previous output you'll see that the three manager nodes are showing as either Reachable or Leader. We'll learn more about leaders shortly. Nodes with nothing in the MANAGER STATUS column are workers. Also note the asterisk (*) after the ID on the line showing mgr2. This shows us which node we ran the docker node ls command from. In this instance the command was issued from mgr2.
Note: It's a pain to specify the --advertise-addr and --listen-addr flags every time you join a node to the swarm. However, it can be even more of a pain if you get the network configuration of your swarm wrong. Manually adding nodes to a swarm is unlikely to be a daily task, so I think it's worth the extra up-front effort to use the flags. It's your choice though. In lab environments or nodes with only a single IP you do not need to use the flags.
Now that we have a swarm up and running, let's take a look at manager high availability.
Figure 7.3
Figure 7.4
As with all consensus algorithms, more participants means more time required to achieve consensus. It's like deciding where to eat - it's always quicker and easier for 3 people to decide than it is for 33! With this in mind, it's a best practice to have either 3 or 5 managers for HA. 7 might work, but it's generally accepted that 3 or 5 is optimal. You definitely don't want more than 7, as the time taken to achieve consensus will be longer.
A final word of caution regarding manager HA. While it's obviously a good practice to spread your managers across availability zones within your network, you need to make sure that the networks connecting them are reliable! Network partitions can be a royal pain in the backside! This means, at the time of writing, the nirvana of hosting your active production applications and infrastructure across multiple cloud providers such as AWS and Azure is a bit of a daydream. Take time to make sure your managers are connected via high-speed, reliable networks!
Now that we've got our swarm built and understand the concepts of leaders and manager HA, let's move on to services.
Services
Like we said in the swarm primer, services are a new construct introduced with Docker 1.12 that only exist in swarm mode.
They let us declare the desired state for a group of containers (tasks) and feed that to Docker. For example, assume you've got an app that has a web front-end. You have an image for the web server, and testing has shown that you will need 5 instances of the web service to handle normal daily traffic. You would translate this requirement into a service, declaring the image the containers should use, and that the service should always have 5 running tasks.
We'll see some of the other things that can be declared as part of a service in a minute, but before we do that, let's see how to create the one we just described.
We create a service with the docker service create command.
$ docker service create --name web-fe \
-p 8080:8080 \
--replicas 5 \
nigelpoulton/pluralsight-docker-ci
2kffzpz721nrjikmxqhj474qg
Here we named the service web-fe, told Docker to map port 8080 on every swarm node to port 8080 inside each service task (replica), asked for 5 replicas, and told the swarm which image to base the tasks on. Docker then deploys the 5 tasks across the manager and worker nodes in the swarm.
But this isn't the end. All services are constantly monitored by the swarm - the swarm runs a reconciliation loop that constantly compares the actual state of the service to the desired state. If the two states match, the world is a happy place and no further actions are needed. If they don't match, the swarm takes actions so that they do. Put another way, the swarm is constantly making sure that actual state matches desired state.
As an example, if one of the workers hosting one of the 5 web-fe container tasks fails, the actual state of the web-fe service will drop from 5 running tasks to 4. This will no longer match the desired state of 5, and Docker will start a new web-fe task to bring actual state back in line with desired state. This behavior is very powerful and allows the service to self-heal in the event of node failures and the like.
You can list services and check their state with the docker service ls command.

$ docker service ls
ID            NAME    REPLICAS  IMAGE                        COMMAND
2kffzpz721nr  web-fe  5/5       nigelpoulton/plur...cker-ci
The output above shows a single running service as well as some basic information
about state. Among other things, we can see the name of the service and that 5 out
of the 5 desired tasks/replicas are in the running state. If you run this command soon
after deploying the service it might not show all tasks/replicas as running. This is
probably because of the time it takes to pull the image on each node.
You can use the docker service ps command to see a list of tasks in a service and
their state.
$ docker service ps web-fe
ID         NAME      IMAGE             NODE  DESIRED  CURRENT
817...f6z  web-fe.1  nigelpoulton/...  mgr2  Running  Running 2 mins
a1d...mzn  web-fe.2  nigelpoulton/...  wrk1  Running  Running 2 mins
cc0...ar0  web-fe.3  nigelpoulton/...  wrk2  Running  Running 2 mins
6f0...azu  web-fe.4  nigelpoulton/...  mgr3  Running  Running 2 mins
dyl...p3e  web-fe.5  nigelpoulton/...  mgr1  Running  Running 2 mins
The format of the command is docker service ps <service-name or service-id>. The output displays each task on its own line, shows which node in the swarm it's executing on, and shows desired state and actual state.
For detailed information about a service, use the docker service inspect command.
$ docker service inspect --pretty web-fe
ID:             2kffzpz721nrjikmxqhj474qg
Name:           web-fe
Mode:           Replicated
 Replicas:      5
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
ContainerSpec:
 Image:         nigelpoulton/pluralsight-docker-ci
Resources:
Ports:
 Protocol = tcp
 TargetPort = 8080
 PublishedPort = 8080
The example above uses the --pretty flag to limit the output to the most interesting
items printed in an easy-to-read format. Leaving off the --pretty flag will give a
more verbose output.
We'll come back to some of these outputs later.
Let's go and see how to scale a service.
Scaling a service
Another powerful feature of services is the ability to easily scale them up and down.
Let's assume business is booming and we're seeing double the amount of anticipated traffic hitting the web front-end. Fortunately, scaling the web-fe service is as simple as running the docker service scale command.
$ docker service scale web-fe=10
web-fe scaled to 10
The above command will scale the number of tasks/replicas from 5 to 10. In the background it's updating the service's desired state from 5 to 10. Run another docker service ls command to verify the operation was successful.
$ docker service ls
ID            NAME    REPLICAS  IMAGE
2kffzpz721nr  web-fe  10/10     nigelpoulton/pluralsight-docker-ci
Running a docker service ps command will show that the tasks in the service are
balanced across all nodes in the swarm as evenly as possible.
$ docker service ps web-fe
ID         NAME       IMAGE             NODE  DESIRED  CURRENT
817...f6z  web-fe.1   nigelpoulton/...  mgr2  Running  Running 5 mins
a1d...mzn  web-fe.2   nigelpoulton/...  wrk1  Running  Running 5 mins
cc0...ar0  web-fe.3   nigelpoulton/...  wrk2  Running  Running 5 mins
6f0...azu  web-fe.4   nigelpoulton/...  mgr3  Running  Running 5 mins
dyl...p3e  web-fe.5   nigelpoulton/...  mgr1  Running  Running 5 mins
912...vtb  web-fe.6   nigelpoulton/...  mgr1  Running  Running 1 min
3wu...o7y  web-fe.7   nigelpoulton/...  wrk3  Running  Running 1 min
aso...6hh  web-fe.8   nigelpoulton/...  wrk3  Running  Running 1 min
97u...4bn  web-fe.9   nigelpoulton/...  wrk1  Running  Running 1 min
a1u...4jj  web-fe.10  nigelpoulton/...  mgr2  Running  Running 1 min
Behind the scenes, swarm mode runs a scheduling algorithm that defaults to trying to balance tasks as evenly as possible across the nodes in the swarm. At the time of writing, this amounts to running an equal number of tasks on each node without taking into consideration things like CPU load etc.
Run another docker service scale command to bring the number back down from
10 to 5.
$ docker service scale web-fe=5
web-fe scaled to 5
Now that we know how to scale a service, let's see how to remove one.
Removing a service
Removing a service is simple - maybe too simple.
The following docker service rm command will delete the service we deployed
earlier.
$ docker service rm web-fe
web-fe
Be careful using the docker service rm command as it deletes all tasks in a service
without asking for confirmation.
Now that the service is deleted from the system, let's go and look at how to push rolling updates to a service.
Rolling updates
Pushing updates to deployed applications is a fact of life. And for the longest time it's been really painful. I've lost more than enough weekends to major application updates, and I've no intention of going there again if I can help it.
Well, thanks to Docker services, pushing updates to well-designed apps just got a whole lot easier!
To see this, we're going to deploy a new service. But before we do that, we're going to create a new overlay network for the service. This isn't strictly necessary, but I want you to see how it's done and how the service uses it.
$ docker network create -d overlay uber-net
43wfp6pzea470et4d57udn9ws
This creates a new overlay network called uber-net that we'll be able to leverage with the service we're about to create. An overlay network essentially creates a new layer 2 network that we can place containers on, and all containers on it will be able to communicate with each other. This works even if the Docker hosts they're running on are on different underlying networks. Basically, the overlay network creates a new layer 2 container network on top of potentially multiple different underlying networks.
Figure 7.5 shows two underlay networks connected by a layer 3 router. There is then
a single overlay network across both of them. Docker hosts are connected to the two
underlay networks and containers are connected to the overlay. All containers on
the overlay can communicate with each other even if they are running on Docker
hosts plumbed into different underlay networks.
Figure 7.5
Run a docker network ls to verify that the network was created properly and is visible on the Docker host.
$ docker network ls
NETWORK ID    NAME             DRIVER   SCOPE
490e2496e06b  bridge           bridge   local
a0559dd7bb08  docker_gwbridge  bridge   local
a856a8ad9930  host             host     local
1ailuc6rgcnr  ingress          overlay  swarm
be581cd6de9b  none             null     local
43wfp6pzea47  uber-net         overlay  swarm
The uber-net network was successfully created with the swarm scope, and is currently only visible on manager nodes in the swarm.
Let's go and create a new service.
$ docker service create --name uber-svc \
--network uber-net \
-p 80:80 --replicas 12 \
nigelpoulton/tu-demo:v1
dhbtgvqrg2q4sg07ttfuhg8nz
Let's see what we just declared with that docker service create command.
The first thing we did was name the service, and then we used the --network flag to tell it to place all containers on the new uber-net network. We then exposed port 80 across the entire swarm and mapped it to port 80 inside of each of the 12 replicas (tasks) we asked it to run. Finally, we told it to base all tasks on the nigelpoulton/tu-demo:v1 image.
Run a docker service ls and a docker service ps command to verify the state of
the new service.
$ docker service ls
ID            NAME      REPLICAS  IMAGE
dhbtgvqrg2q4  uber-svc  12/12     nigelpoulton/tu-demo:v1
$
$ docker service ps uber-svc
ID        NAME          IMAGE                NODE  DESIRED  CURRENT STATE
0v...7e5  uber-svc.1    nigelpoulton/...:v1  wrk3  Running  Running 1 min
bh...wa0  uber-svc.2    nigelpoulton/...:v1  wrk2  Running  Running 1 min
23...u97  uber-svc.3    nigelpoulton/...:v1  wrk2  Running  Running 1 min
82...5y1  uber-svc.4    nigelpoulton/...:v1  mgr2  Running  Running 1 min
c3...gny  uber-svc.5    nigelpoulton/...:v1  wrk3  Running  Running 1 min
e6...3u0  uber-svc.6    nigelpoulton/...:v1  wrk1  Running  Running 1 min
78...r7z  uber-svc.7    nigelpoulton/...:v1  wrk1  Running  Running 1 min
2m...kdz  uber-svc.8    nigelpoulton/...:v1  mgr3  Running  Running 1 min
b9...k7w  uber-svc.9    nigelpoulton/...:v1  mgr3  Running  Running 1 min
ag...v16  uber-svc.10   nigelpoulton/...:v1  mgr2  Running  Running 1 min
e6...dfk  uber-svc.11   nigelpoulton/...:v1  mgr1  Running  Running 1 min
e2...k1j  uber-svc.12   nigelpoulton/...:v1  mgr1  Running  Running 1 min
Passing the service the -p 80:80 flag will ensure that a swarm-wide mapping is
created that maps traffic coming in to any node in the swarm on port 80 through to
port 80 inside of any container in the service.
Open a web browser and point it to the IP address of any of the nodes in the swarm
on port 80 to see the app running in the service.
Figure 7.6
As you can see, the application is a simple voting application that will register votes for either football or soccer. Feel free to point your web browser at other nodes in the swarm. You will be able to reach the web server from any node in the swarm, because the -p 80:80 flag creates a mapping on every host. This is true even for nodes that are not running a task for the service - every node gets the mapping and can therefore redirect your request to a node that is running a task for the service.
Now let's assume that this particular vote has come to an end and your company is now running a new poll. A new image has been created for the new poll and has been added to the same Docker Hub repository, but this one is tagged as v2 instead of v1.
Let's also assume that you've been tasked with pushing the updated image to the swarm in a staged manner - 2 containers at a time with a 20 second delay in between each batch of 2. We can use the following docker service update command to accomplish this.
$ docker service update \
--image nigelpoulton/tu-demo:v2 \
--update-parallelism 2 \
--update-delay 20s uber-svc
uber-svc
Let's review the command. docker service update lets us make updates to running services by updating the service's desired state. This time we gave it a new image tag, v2 instead of v1. And we used the --update-parallelism and --update-delay flags to make sure that the new image was pushed to 2 tasks at a time, with a 20 second cool-off period in between each pair. Finally, we told Docker to make these changes to the uber-svc service.
If we run a docker service ps against the service, we'll see that some of the tasks in the service are at v2 while some are still at v1. If we give the operation enough time to complete (about 4 minutes in this example), all tasks will eventually reach the new desired state of using the v2 image.
$ docker service ps uber-svc
ID        NAME            IMAGE       NODE  DESIRED   CURRENT STATE
7z...nys  uber-svc.1      nigel...v2  mgr2  Running   Running 13 secs
0v...7e5   \_uber-svc.1   nigel...v1  wrk3  Shutdown  Shutdown 13 secs
bh...wa0  uber-svc.2      nigel...v1  wrk2  Running   Running 1 min
e3...gr2  uber-svc.3      nigel...v2  wrk2  Running   Running 13 secs
23...u97   \_uber-svc.3   nigel...v1  wrk2  Shutdown  Shutdown 13 secs
82...5y1  uber-svc.4      nigel...v1  mgr2  Running   Running 1 min
c3...gny  uber-svc.5      nigel...v1  wrk3  Running   Running 1 min
e6...3u0  uber-svc.6      nigel...v1  wrk1  Running   Running 1 min
78...r7z  uber-svc.7      nigel...v1  wrk1  Running   Running 1 min
2m...kdz  uber-svc.8      nigel...v1  mgr3  Running   Running 1 min
b9...k7w  uber-svc.9      nigel...v1  mgr3  Running   Running 1 min
ag...v16  uber-svc.10     nigel...v1  mgr2  Running   Running 1 min
e6...dfk  uber-svc.11     nigel...v1  mgr1  Running   Running 1 min
e2...k1j  uber-svc.12     nigel...v1  mgr1  Running   Running 1 min
You can witness the update happening in real time by opening a web browser to any node in the swarm and hitting refresh several times. Some of the requests will be serviced by containers running the old version, and some will be serviced by containers running the new version. After enough time, all requests will be serviced by containers running the updated version of the service.
Congratulations. You've just pushed a rolling update to a live containerized application.
If you run a docker service inspect --pretty command against the service, you'll see that the update parallelism and update delay settings you just used are now part of the service definition. This means future updates you push will automatically use these settings unless you override them as part of the docker service update command.
$ docker service inspect --pretty uber-svc
ID:             dhbtgvqrg2q4sg07ttfuhg8nz
Name:           uber-svc
Mode:           Replicated
 Replicas:      12
Update status:
 State:         completed
 Started:       11 minutes ago
 Completed:     8 minutes ago
 Message:       update completed
Placement:
UpdateConfig:
 Parallelism:   2
 Delay:         20s
 On failure:    pause
ContainerSpec:
 Image:         nigelpoulton/tu-demo:v2
Resources:
Networks: 43wfp6pzea470et4d57udn9ws
Ports:
 Protocol = tcp
 TargetPort = 80
 PublishedPort = 80
You should also note a couple of things about the service's network config. All nodes in the swarm that are running a task for the service will have the uber-net overlay network that we created earlier. We can verify this by running docker network ls on any node running a task.
You should also note the Networks portion of the docker inspect output above. This shows the 43wfp6pzea470et4d57udn9ws uber-net network as well as the swarm-wide 80:80 port mapping.
One last thing about swarm mode before we clean up. Although services and swarm mode were new in Docker 1.12, this isn't technology that had never seen the light of day. The underlying code had been around for a while and was being actively deployed in production environments.
That all said, you should still perform your normal testing before deciding to run
your business critical apps on it!
Clean-up
Let's clean up our service.
$ docker service rm uber-svc
uber-svc
Verify the uber-svc is no longer running with the docker service ls command.
$ docker service ls
ID  NAME  REPLICAS  IMAGE  COMMAND
Chapter summary
In this chapter we learned about swarm mode and how to build a swarm.
We used the docker swarm init command to create a new swarm and make the
node we ran the command on the first manager of that swarm. We then joined
managers and workers. We learned that managers operate in an HA formation and
the recommended number of managers is either 3 or 5.
We learned how to declare services and run them on a swarm. We saw how network ports are exposed across the entire swarm, allowing us to hit any node in the swarm and reach the service endpoint - even if the node we hit wasn't running a task for the service.
We wrapped the chapter up by scaling a service up then down, and pushing an update
to a live service using a rolling update.
8: What next
Hopefully you're now comfortable talking about Docker and working with it.
Taking your journey to the next step is simple in today's world. It's insanely easy to spin up infrastructure and workloads in the cloud, where you can build and test Docker until you're a world authority!
You can also head over to my video training courses at Pluralsight. If you're not a member of Pluralsight then become one! Yes, it costs money, but it's definitely a service where you get value for your money! And if you're unsure, they always have a free trial period where you can get access to my courses for free for a limited period.
I'd also recommend you hit events like Dockercon and your local Docker meetups.
Feedback
A massive thanks for reading my book. I really hope it was useful for you!
On that point, I'd love your feedback - good and bad. If you think the book was amazing, I'd love you to tell me and others! But I also want to know what you didn't like about it and how I can make the next version better!!! Please leave comments on the book's feedback pages, and feel free to hit me on Twitter with your thoughts!
Thanks again for reading my book, and good luck driving your career forward!!
https://app.pluralsight.com/author/nigel-poulton
https://www.dockercon.com
https://www.docker.com/community/meetup-groups
https://twitter.com/nigelpoulton