My notes on Docker.
- Introduction
- Installation
- Containers
- Images
- Volumes
- Container lifecycle (Create/Start/Stop/Kill/Remove)
- Dockerfiles
- Docker Compose: linking containers
- Networking
- Cleanup
- Cheat sheets
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.
Docker Engine is a client-server application with these major components:
- A server, which is a type of long-running program called a daemon process (the dockerd command).
- A REST API, which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
- A command line interface (CLI) client (the docker command).
As previously mentioned, Docker uses a client-server architecture.
- The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers.
- The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.
- The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. This section is a brief overview of some of those objects.
- Image:
  - An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.
  - An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.
  - You might create your own images. To do that, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image.
- Container:
  - A container is a runtime instance of an image.
  - Isolation: by default, a container is relatively well isolated from other containers and from the host environment, only accessing host files and ports if configured to do so. You can control how isolated a container's network, storage, or other underlying subsystems are from other containers or from the host machine.
  - Containers run apps natively on the host machine's kernel.
  - They have better performance characteristics than virtual machines, which only get virtual access to host resources through a hypervisor.
  - Containers get native access, each one running in a discrete process, taking no more memory than any other executable.
Docker is written in Go and takes advantage of several features of the Linux kernel to deliver its functionality.
- Namespaces
Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container.
These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.
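Outside Docker, namespace membership is visible on any Linux host under /proc. A minimal Python sketch (Linux-only, standard library) listing the namespaces the current process belongs to:

```python
import os

# On Linux, /proc/<pid>/ns contains one entry per namespace the process
# belongs to (pid, net, ipc, mnt, uts, ...). A containerized process gets
# its own set of these; an ordinary process shares the host's.
namespaces = sorted(os.listdir("/proc/self/ns"))
print(namespaces)
```

Inside a container, resolving these entries with os.readlink and comparing them against a host process's would show different namespace IDs, which is exactly the isolation Docker relies on.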
Docker Engine uses namespaces such as the following on Linux:
- The pid namespace: process isolation (PID: Process ID).
- The net namespace: managing network interfaces (NET: Networking).
- The ipc namespace: managing access to IPC resources (IPC: InterProcess Communication).
- The mnt namespace: managing filesystem mount points (MNT: Mount).
- The uts namespace: isolating kernel and version identifiers (UTS: Unix Timesharing System).
- Control Groups
Docker Engine on Linux also relies on another technology called control groups (cgroups). A cgroup limits an application to a specific set of resources. Control groups allow Docker Engine to share available hardware resources to containers and optionally enforce limits and constraints. For example, you can limit the memory available to a specific container.
- Union File Systems
Union file systems, or UnionFS, are file systems that operate by creating layers, making them very lightweight and fast. Docker Engine uses UnionFS to provide the building blocks for containers. Docker Engine can use multiple UnionFS variants, including AUFS, btrfs, vfs, and DeviceMapper.
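The layered lookup a union filesystem performs can be sketched as a toy model (this is illustrative only, not any real UnionFS implementation): each layer maps paths to contents, and upper layers shadow lower ones.

```python
def union_lookup(path, layers):
    """Return the contents of `path`, searching layers top-down.

    `layers` is ordered from topmost (writable) to bottom (base image);
    the first layer that defines the path wins, which is how one image
    layer can shadow a file from the layer beneath it.
    """
    for layer in layers:
        if path in layer:
            return layer[path]
    raise FileNotFoundError(path)

base = {"/etc/os-release": "ubuntu", "/bin/sh": "sh-binary"}
app_layer = {"/app/run.py": "print('hi')"}
writable = {"/etc/os-release": "patched"}  # shadows the base layer's copy

layers = [writable, app_layer, base]
print(union_lookup("/etc/os-release", layers))  # → patched (topmost copy wins)
print(union_lookup("/bin/sh", layers))          # → sh-binary (falls through to base)
```

This is why images are cheap to stack: a new layer stores only what it adds or overrides.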
- Container Format
Docker Engine combines the namespaces, control groups, and UnionFS into a wrapper called a container format. The default container format is libcontainer. In the future, Docker may support other container formats by integrating with technologies such as BSD Jails or Solaris Zones.
When we talk about "installing Docker" we are referring to installing Docker Engine. To do so, follow the steps in the official documentation.
The most common container commands are shown below:
- List running containers:
$ docker ps
- List all containers, including stopped ones:
$ docker ps -a
- Show the ID (-q) of the latest (-l) container:
$ docker ps -l -q
To get a shell inside a container that is already running, either by its ID:
$ sudo docker exec -i -t 665b4a1e17b6 /bin/bash
or by its name:
$ sudo docker exec -i -t loving_heisenberg /bin/bash
docker run creates and starts a container. By default a container starts, runs the command we give it, and stops:
$ docker run busybox echo hello world
In general, docker run has the following structure:
$ docker run -p <host_port>:<container_port> username/repository:tag
We can pass several options to the run command, for example an interactive run:
$ docker run -t -i ubuntu:16.04 /bin/bash
- -h: sets a hostname for the container.
- -t: allocates a TTY.
- -i: keeps STDIN open so we can interact with the container.
Note: when you exit the interactive session, the container stops.
- Detached mode: as we have seen, a container run interactively exits when its command finishes. To run containers that provide services (for example, a web server), use:
$ docker run -d -p 1234:1234 python:2.7 python -m SimpleHTTPServer 1234
- Explanation: this runs a Python web server (the SimpleHTTPServer module) on port 1234.
- -p 1234:1234 tells Docker to forward port 1234 of the container to port 1234 of the host machine. We can now open a browser at http://localhost:1234.
- -d makes the container run in the background. This lets us run commands against it at any time while it is running. For example:
$ docker exec -ti <container-id> /bin/bash
- Here we simply open an interactive tty. We could also change the working directory, set environment variables, and so on.
Showing details about a container:
- Info about a container:
$ docker inspect <container_name>
- IP address of a container:
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container_name>
- Show a container's logs:
$ docker logs <container_name>
- Container statistics (CPU, memory, etc.):
$ docker stats <container_name>
- Public ports of a container:
$ docker port <container_name>
- Publish container port 80 on a random host port:
$ docker run -p 80 nginx
- Publish container port 80 on host port 8080:
$ docker run -p 8080:80 nginx
- Publish all exposed container ports on random host ports:
$ docker run -P nginx
- List all port mappings of a container:
$ docker port <container_name>
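docker port prints one mapping per line in the form <container_port>/<proto> -> <host_ip>:<host_port>. A small Python sketch parsing that output (the sample text below is illustrative, not captured from a real container):

```python
def parse_port_mappings(output):
    """Parse `docker port`-style output lines into a dict mapping
    container port spec -> (host ip, host port)."""
    mappings = {}
    for line in output.strip().splitlines():
        container_side, host_side = [p.strip() for p in line.split("->")]
        host_ip, host_port = host_side.rsplit(":", 1)
        mappings[container_side] = (host_ip, int(host_port))
    return mappings

sample = """\
80/tcp -> 0.0.0.0:8080
443/tcp -> 0.0.0.0:8443
"""
print(parse_port_mappings(sample))
```

Useful in scripts that need to discover where a service was published when random host ports (-P) are in use.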
- List images:
$ docker images
- List all images, including intermediate layers:
$ docker images -a
- Remove an image:
$ docker rmi <image_name>
- Show the history and layers of an image:
$ docker history <image_name>
We can create our own images in different ways:
A) docker commit: build an image from a container.
Example:
$ docker commit -m "Commit message" -a "Author name" container-id NEW_NAME:TAG
$ docker commit -m "MongoDB y Scrapy instalados" -a "Etxahun" 79869875807 etxahun/scrapy_mongodb:0.1
B) docker build: create an image from a Dockerfile by executing the build steps given in the file.
Inside a Dockerfile, the instructions we can use include the following:
- FROM: the base image for building the new Docker image; use "FROM scratch" if it is a base image itself.
- MAINTAINER: the author of the Dockerfile and their email.
- RUN: any OS command to run while building the image.
- CMD: the command to be started when the container runs; it can be overridden by an explicit argument to the docker run command.
- ADD: copies files or directories from the host to the container at the given path.
- EXPOSE: exposes the specified port to the host machine.
Example:
$ nano myimage/Dockerfile
FROM ubuntu
RUN echo "my first image" > /tmp/first.txt
$ docker build myimage (or, specifying the Dockerfile explicitly: docker build -f myimage/Dockerfile myimage)
Sending build context to Docker daemon 2.048 kB
Step 1: FROM ubuntu
 ----> ac526a456ca4
Step 2: RUN echo "my first image" > /tmp/first.txt
 ----> Running in 18f62f47d2c8
 ----> 777f9424d24d
Removing intermediate container 18f62f47d2c8
Successfully built 777f9424d24d
$ docker images | grep 777f9424d24d
<none>    <none>    777f9424d24d    4 minutes ago    125.2 MB
$ docker run -it 777f9424d24d
root@2dcd9d0caf6f:/#
We can name or tag the image at build time:
$ docker build <dirname> -t "<imagename>:<tagname>"
Example:
$ docker build myimage -t "myfirstimage:latest"
The usual notation for associating a local image with a repository in a registry is username/repository:tag. The tag part is optional, but recommended, since it is how images are versioned in Docker.
To tag an image:
$ docker tag image username/repository:tag
For example:
$ docker tag friendlyhello john/get-started:part2
To check the image we have just tagged:
$ docker images
REPOSITORY            TAG         IMAGE ID        CREATED          SIZE
friendlyhello         latest      d9e555c53008    3 minutes ago    195MB
john/get-started      part2       d9e555c53008    3 minutes ago    195MB
python                2.7-slim    1c7128a655f6    5 days ago       183MB
...
To publish the image:
$ docker push username/repository:tag
Once uploaded, the image will be visible on the Docker Hub website.
It is possible to store data within the writable layer of a container, but there are some downsides:
- The data won't persist when that container is no longer running, and it can be difficult to get the data out of the container if another process needs it.
- A container's writable layer is tightly coupled to the host machine where the container is running. You can't easily move the data somewhere else.
- Writing into a container's writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.
Docker offers three different ways to mount data into a container from the Docker host:
- Volumes
- Bind mounts
- tmpfs mounts
When in doubt, volumes are almost always the right choice.
No matter which type of mount you choose to use, the data looks the same from within the container. It is exposed as either a directory or an individual file in the container’s filesystem.
An easy way to visualize the difference among volumes, bind mounts, and tmpfs mounts is to think about where the data lives on the Docker host.
- Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.
Volumes are created and managed by Docker. You can create a volume explicitly using the docker volume create command, or Docker can create a volume during container or service creation. When you create a volume, it is stored within a directory on the Docker host. When you mount the volume into a container, this directory is what is mounted into the container. This is similar to the way that bind mounts work, except that volumes are managed by Docker and are isolated from the core functionality of the host machine.
A given volume can be mounted into multiple containers simultaneously. When no running container is using a volume, the volume is still available to Docker and is not removed automatically. You can remove unused volumes using docker volume prune.
When you mount a volume, it may be named or anonymous. Anonymous volumes are not given an explicit name when they are first mounted into a container, so Docker gives them a random name that is guaranteed to be unique within a given Docker host.
- Bind mounts may be stored anywhere on the host system. They may even be important system files or directories. Non-Docker processes on the Docker host or a Docker container can modify them at any time.
Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full path on the host machine. The file or directory does not need to exist on the Docker host already; it is created on demand if it does not yet exist. Bind mounts are very performant, but they rely on the host machine's filesystem having a specific directory structure available. If you are developing new Docker applications, consider using named volumes instead.
Warning: one side effect of using bind mounts, for better or for worse, is that you can change the host filesystem via processes running in a container, including creating, modifying, or deleting important system files or directories. This is a powerful ability which can have security implications, including impacting non-Docker processes on the host system.
- tmpfs mounts are stored in the host system's memory only, and are never written to the host system's filesystem.
A tmpfs mount is not persisted on disk, either on the Docker host or within a container. It can be used by a container during the lifetime of the container, to store non-persistent state or sensitive information. For instance, internally, swarm services use tmpfs mounts to mount secrets into a service's containers.
Volumes are the preferred way to persist data in Docker containers and services. Some use cases for volumes include:
- Sharing data among multiple running containers. If you don't explicitly create it, a volume is created the first time it is mounted into a container. When that container stops or is removed, the volume still exists. Multiple containers can mount the same volume simultaneously, either read-write or read-only. Volumes are only removed when you explicitly remove them.
- When the Docker host is not guaranteed to have a given directory or file structure. Volumes help you decouple the configuration of the Docker host from the container runtime.
- When you want to store your container's data on a remote host or a cloud provider, rather than locally.
- When you need to be able to back up, restore, or migrate data from one Docker host to another. You can stop containers using the volume, then back up the volume's directory (such as /var/lib/docker/volumes/).
In general, you should use volumes where possible. Bind mounts are appropriate for the following types of use case:
- Sharing configuration files from the host machine to containers. This is how Docker provides DNS resolution to containers by default, by mounting /etc/resolv.conf from the host machine into each container.
- Sharing source code or build artifacts between a development environment on the Docker host and a container. For instance, you may mount a Maven target/ directory into a container, and each time you build the Maven project on the Docker host, the container gets access to the rebuilt artifacts. If you use Docker for development this way, your production Dockerfile would copy the production-ready artifacts directly into the image, rather than relying on a bind mount.
- When the file or directory structure of the Docker host is guaranteed to be consistent with the bind mounts the containers require.
tmpfs mounts are best used for cases when you do not want the data to persist either on the host machine or within the container. This may be for security reasons, or to protect the performance of the container when your application needs to write a large volume of non-persistent state data.
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker. Volumes have several advantages over bind mounts:
- Volumes are easier to back up or migrate than bind mounts.
- You can manage volumes using Docker CLI commands or the Docker API.
- Volumes work on both Linux and Windows containers.
- Volumes can be more safely shared among multiple containers.
- Volume drivers allow you to store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
- A new volume’s contents can be pre-populated by a container.
In addition, volumes are often a better choice than persisting data in a container’s writable layer, because using a volume does not increase the size of containers using it, and the volume’s contents exist outside the lifecycle of a given container.
If your container generates non-persistent state data, consider using a tmpfs mount to avoid storing the data anywhere permanently, and to increase the container’s performance by avoiding writing into the container’s writable layer.
Prior to Docker 17.06, the -v or --volume flag was used for standalone containers and the --mount flag was used for swarm services. Starting with Docker 17.06, --mount can also be used with standalone containers.
Differences between -v (--volume) and --mount:
- -v or --volume: combines all the options together in one field. It consists of three fields, separated by colon characters (:):
  - The first field is the name of the volume, which is unique on a given host machine.
  - The second field is the path where the file or directory will be mounted in the container.
  - The third field is optional, and is a comma-separated list of options.
- --mount: is more explicit and verbose. The --mount syntax separates all the options: it consists of multiple key-value pairs, separated by commas, each consisting of a <key>=<value> tuple. The --mount syntax is more verbose than -v or --volume, but the order of the keys is not significant, and the value of the flag is easier to understand. The keys are:
  - The type of the mount, which can be bind, volume, or tmpfs. This topic discusses volumes, so the type will always be volume.
  - The source of the mount. For named volumes, this is the name of the volume. For anonymous volumes, this field is omitted. May be specified as source or src.
  - The destination takes as its value the path where the file or directory will be mounted in the container. May be specified as destination, dst, or target.
  - The readonly option, if present, causes the volume to be mounted into the container as read-only.
  - The volume-opt option, which can be specified more than once, takes a key-value pair consisting of the option name and its value.
Tip: new users should use the --mount syntax; it is easier to use.
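The two grammars can be compared side by side. A small sketch building the same volume mount both ways (the flag grammar follows the fields described above; the volume name and target path are illustrative):

```python
def v_flag(name, target, options=()):
    # -v combines everything into one colon-separated field:
    # <name>:<target>[:<opt1>,<opt2>]
    spec = f"{name}:{target}"
    if options:
        spec += ":" + ",".join(options)
    return ["-v", spec]

def mount_flag(source, target, readonly=False):
    # --mount uses explicit comma-separated <key>=<value> pairs;
    # key order is not significant.
    pairs = ["type=volume", f"source={source}", f"target={target}"]
    if readonly:
        pairs.append("readonly")
    return ["--mount", ",".join(pairs)]

print(v_flag("myvol2", "/app", options=["ro"]))
print(mount_flag("myvol2", "/app", readonly=True))
```

Both produce arguments for the same mount; --mount simply names each field instead of relying on position.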
Unlike a bind mount, you can create and manage volumes outside the scope of any container:
- Create a volume:
$ docker volume create my-vol
- List volumes:
$ docker volume ls
DRIVER    VOLUME NAME
local     my-vol
- Inspect a volume:
$ docker volume inspect my-vol
[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
        "Name": "my-vol",
        "Options": {},
        "Scope": "local"
    }
]
- Remove a volume:
$ docker volume rm my-vol
- Start a container with a volume:
If you start a container with a volume that does not yet exist, Docker creates the volume for you. The following example mounts the volume myvol2 into /app/ in the container.
$ docker run -d \
  -it \
  --name devtest \
  --mount source=myvol2,target=/app \
  nginx:latest
Use docker inspect devtest to verify that the volume was created and mounted correctly. Look for the Mounts section:
$ docker inspect devtest
"Mounts": [
    {
        "Type": "volume",
        "Name": "myvol2",
        "Source": "/var/lib/docker/volumes/myvol2/_data",
        "Destination": "/app",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],
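Since docker inspect emits JSON, the Mounts section can also be checked programmatically rather than by eye. A minimal sketch using Python's json module on a trimmed, illustrative copy of such output:

```python
import json

# Trimmed sample of `docker inspect <container>` output: a one-element
# list of container objects, each carrying a "Mounts" array.
inspect_output = """
[
  {
    "Name": "/devtest",
    "Mounts": [
      {
        "Type": "volume",
        "Name": "myvol2",
        "Source": "/var/lib/docker/volumes/myvol2/_data",
        "Destination": "/app",
        "RW": true
      }
    ]
  }
]
"""

container = json.loads(inspect_output)[0]
mounts = {m["Name"]: m["Destination"] for m in container["Mounts"]}
print(mounts)  # → {'myvol2': '/app'}
```

The same approach works for any field docker inspect exposes, which is what the --format Go templates shown earlier do on the CLI side.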
So far we have seen how to run a container both in the foreground and in the background (detached). Now we will look at managing the complete lifecycle of a container. Docker provides commands such as create, start, stop, kill, and rm. All of them accept a "-h" argument to show the available options.
Example:
$ docker create -h
Earlier we saw how to run a container in the background (detached). Now we will do the same, but with the create command. The only difference is that this time we do not specify the "-d" option. Once created, we need to launch the container with docker start.
Example:
$ docker create -P --expose=8001 python:2.7 python -m SimpleHTTPServer 8001
a842945e2414132011ae704b0c4a4184acc4016d199dfd4e7181c9b89092de13
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED ... NAMES
a842945e2414 python:2.7 "python -m SimpleHTT 8 seconds ago ... fervent_hodgkin
$ docker start a842945e2414
a842945e2414
$ docker ps
CONTAINER ID IMAGE COMMAND ... NAMES
a842945e2414 python:2.7 "python -m SimpleHTT ... fervent_hodgkin
Following the example, to stop the container you can run either of the following commands, kill or stop:
$ docker kill a842945e2414 (sends SIGKILL)
$ docker stop a842945e2414 (sends SIGTERM)
A container can also be restarted (equivalent to a docker stop a842945e2414 followed by a docker start a842945e2414):
$ docker restart a842945e2414
or destroyed:
$ docker rm a842945e2414
- Problem:
Running containers interactively (-ti), making some changes, and then committing them to a new image works fine. But in most cases you may want to automate this image-creation process and share the steps with others.
- Solution:
To automate the creation of Docker images, we write Dockerfiles. This text file is composed of:
- A set of instructions describing which base image the new container is based on.
- The steps/instructions needed to install the application's dependencies.
- The files that need to be present in the image.
- The ports to be exposed by the container.
- The command(s) to run when the container starts.
First we create an empty directory and change into it:
$ mkdir pruebadockerfile
$ cd pruebadockerfile/
Once inside, we create a file called "Dockerfile" and copy the following into it:
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
The Dockerfile references a couple of files we have not created yet: app.py and requirements.txt.
We will create both files in the same directory as the Dockerfile:
requirements.txt
Flask
Redis
app.py
from flask import Flask
from redis import Redis, RedisError
import os
import socket
# Connect to Redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)
app = Flask(__name__)
@app.route("/")
def hello():
try:
visits = redis.incr("counter")
except RedisError:
visits = "<i>cannot connect to Redis, counter disabled</i>"
html = "<h3>Hello {name}!</h3>" \
"<b>Hostname:</b> {hostname}<br/>" \
"<b>Visits:</b> {visits}"
return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)
if __name__ == "__main__":
app.run(host='0.0.0.0', port=80)
As we can see, the requirements.txt file specifies the Python packages to install: Flask and Redis.
We are now ready to build the application. Make sure you are in the directory containing the Dockerfile, app.py, and requirements.txt files:
$ ls
Dockerfile app.py requirements.txt
Then run the build. This creates a Docker image, which we tag with "-t" so it has a friendly name:
$ docker build -t friendlyhello .
To check that the image was created correctly:
$ docker images (or alternatively: docker image ls)
REPOSITORY TAG IMAGE ID
friendlyhello latest 326387cea398
Start the app, mapping port 4000 on the host to port 80 in the container with the "-p" flag:
$ docker run -p 4000:80 friendlyhello
If everything went well, the Python Flask web server should report that it is serving at http://0.0.0.0:80. That message comes from the web server running inside the container; since we mapped port 4000 to port 80, open a browser at http://localhost:4000.
If we want the container to run in the background (detached mode):
$ docker run -d -p 4000:80 friendlyhello
The "-d" option starts it in detached mode.
When designing a distributed application, each of its pieces is called a "service". For example, for a video sharing site, there might be a service for storing all the multimedia content in a database, a service for transcoding in the background every time a user uploads a video, a service for the front end, and so on.
The containers we put into production are called "services". A service runs a single image, with everything that image needs to provide the functions it was created for. In Docker, we define these images with Docker Compose, by writing docker-compose.yml files.
To work with Compose we follow these steps:
- Define your app's environment with a Dockerfile so it can be reproduced anywhere.
- Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- Lastly, run docker-compose up and Compose will start and run your entire app.
Compose has commands for managing the whole lifecycle of your application:
- Start, stop, and rebuild services
- View the status of running services
- Stream the log output of running services
- Run a one-off command on a service
When we want to link two or more containers, we declare their relationship in a YAML file. Below is an example file that links a web container (WordPress) and a MySQL database container:
- Contents of the docker-compose.yml file:
version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
volumes:
  db_data:
To run it, from the same directory as the docker-compose.yml file:
$ docker-compose up
To check that everything worked, open http://localhost:8000 to reach the WordPress page.
To stop it there are two options:
- docker-compose down removes the containers and the default network, but keeps the WordPress database.
- docker-compose down --volumes removes the containers, the default network, and the databases.
Conceptually, docker-compose and docker stack files serve the same purpose: deployment and configuration of your containers on Docker engines.
- The docker-compose tool was created first, and its purpose is "defining and running multi-container Docker applications" on a single Docker engine.
- Docker Stack is used in Docker Swarm (Docker's orchestration and scheduling tool) and therefore has additional configuration parameters (i.e. replicas, deploy, roles) that are not needed on a single Docker engine. The command can only be invoked from a Docker Swarm manager. Stacks are very similar to docker-compose files, except that they define services, while docker-compose defines containers. Stacks tell the Docker engine the definition of the services that should be running, so the engine can monitor and orchestrate them.
When you install Docker, it creates three networks automatically: bridge, none, and host. You can list these networks using the docker network ls command:
$ docker network ls
NETWORK ID NAME DRIVER
7fca4eb8c647 bridge bridge
9f904ee27bf5 none null
cf03ee007fb4 host host
When you run a container, you can use the --network flag to specify which networks your container should connect to.
- Bridge: the bridge network represents the docker0 network present in all Docker installations. Unless you specify otherwise with the docker run --network=<NETWORK> option, the Docker daemon connects containers to this network by default. We can see this bridge as part of a host's network stack by using the ip addr show command:
$ ip addr show docker0
          Link encap:Ethernet  HWaddr 02:42:47:bc:3a:eb
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:47ff:febc:3aeb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1100 (1.1 KB)  TX bytes:648 (648.0 B)
- None: the none network adds a container to a container-specific network stack. That container lacks a network interface. Attaching to such a container and looking at its stack, you see this:
$ docker attach nonenetcontainer
root@0cb243cd1293:/# cat /etc/hosts
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
fe00::0     ip6-localnet
ff00::0     ip6-mcastprefix
ff02::1     ip6-allnodes
ff02::2     ip6-allrouters
root@0cb243cd1293:/# ip -4 addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
root@0cb243cd1293:/#
- Note: you can detach from the container and leave it running with CTRL-p CTRL-q.
- Host: the host network adds a container on the host's network stack. As far as the network is concerned, there is no isolation between the host machine and the container. For instance, if you run a container that runs a web server on port 80 using host networking, the web server is available on port 80 of the host machine.
The none and host networks are not directly configurable in Docker. However, you can configure the default bridge network, as well as your own user-defined bridge networks.
Docker supports container networking through its network drivers. By default, Docker provides two drivers: bridge and overlay.
Every Docker Engine installation automatically includes the following three networks:
$ docker network ls
NETWORK ID NAME DRIVER
18a2866682b8 none null
c288470c46f6 host host
7b369448dccb bridge bridge
- Bridge: This is a special network. Unless we specify otherwise, Docker will always start containers on this network. We can try the following:
$ docker run -itd --name=networktest ubuntu
74695c9cea6d9810718fddadc01a727a5dd3ce6a69d09752239736c030599741
To check the container's IP address, we do the following:
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
                    "Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Containers": {
"3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c": {
"Name": "networktest",
"EndpointID": "647c12443e91faf0fd508b6edfe59c30b642abb60dfab890b4bdccee38750bc1",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "9001"
},
"Labels": {}
}
]
To disconnect a container from a network, we have to specify both the network it is connected to and the name of the container:
$ docker network disconnect bridge networktest
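The reverse operation is docker network connect; a container can be attached to (or moved between) networks even while it is running. A sketch, reusing the networktest container from above:

```
$ docker network connect bridge networktest
```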
Important: Networks are a natural way to isolate containers from other containers or other networks.
As we have already mentioned, Docker Engine supports two types of networks: bridge and overlay:
- Bridge: limited to a single host running Docker Engine.
- Overlay: can span multiple hosts with Docker Engine installed.
Next we will create a bridge network:
$ docker network create -d bridge my_bridge
The "-d" flag tells Docker to load the "bridge" network driver. It is optional, since Docker loads "bridge" by default.
If we list the networks again, we will see the one we just created:
$ docker network ls
NETWORK ID NAME DRIVER
7b369448dccb bridge bridge
615d565d498c my_bridge bridge
18a2866682b8 none null
c288470c46f6 host host
And if we inspect the network, we will see that it has no containers attached yet:
$ docker network inspect my_bridge
[
{
"Name": "my_bridge",
"Id": "5a8afc6364bccb199540e133e63adb76a557906dd9ff82b94183fc48c40857ac",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Containers": {},
"Options": {},
"Labels": {}
}
]
When we build web applications that have to work together, for security we will create a network. Networks, by definition, provide complete isolation for containers. When we start a container we can attach it to a network.
In the following example we start a PostgreSQL database container, passing it the "--net=my_bridge" flag:
$ docker run -d --net=my_bridge --name db training/postgres
If we now inspect the "my_bridge" network, we will see that it has a container attached. We can also inspect the container to see which network it is connected to:
$ docker inspect --format='{{json .NetworkSettings.Networks}}' db
{"my_bridge":{"NetworkID":"7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99",
"EndpointID":"508b170d56b2ac9e4ef86694b0a76a22dd3df1983404f7321da5649645bf7043","Gateway":"10.0.0.1","IPAddress":"10.0.0.254","IPPrefixLen":24,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02"}}
If we now start the web image without specifying a network, it will land on the default bridge network:
$ docker run -d --name web training/webapp python app.py
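At this point db sits on my_bridge while web sits on the default bridge, so they cannot reach each other. A running container can be attached to a second network with docker network connect; a sketch of that step:

```
$ docker network connect my_bridge web
# web now has an interface on both networks and can reach db by name.
```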
- Remove a container:
$ docker rm <container_ID>
- Remove a container and its associated volumes:
$ docker rm -v <container_ID>
- Remove ALL containers:
$ docker rm $(docker ps -a -q)
- Remove an image:
$ docker rmi <image_ID>
- Remove ALL images:
$ docker rmi $(docker images -q)
- List "dangling" images:
$ docker images -f "dangling=true"
- Remove "dangling" images:
$ docker rmi $(docker images -f "dangling=true" -q)
- Remove ALL volumes that are not in use:
$ docker volume rm $(docker volume ls -q -f dangling=true)
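On newer Docker versions (17.06+), the prune subcommands bundle most of this cleanup; a sketch:

```
$ docker container prune   # remove all stopped containers
$ docker image prune       # remove dangling images
$ docker volume prune      # remove unused local volumes
$ docker system prune      # all of the above, plus unused networks
```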
Below is a list of the basic Docker commands:
docker build -t friendlyname . # Create image using this directory's Dockerfile
docker run -p 4000:80 friendlyname # Run "friendlyname" mapping port 4000 to 80
docker run -d -p 4000:80 friendlyname # Same thing, but in detached mode
docker container ls # List all running containers
docker container ls -a # List all containers, even those not running
docker container stop <hash> # Gracefully stop the specified container
docker container kill <hash> # Force shutdown of the specified container
docker container rm <hash> # Remove specified container from this machine
docker container rm $(docker container ls -a -q) # Remove all containers
docker image ls -a # List all images on this machine
docker image rm <image id> # Remove specified image from this machine
docker image rm $(docker image ls -a -q) # Remove all images from this machine
docker login # Log in this CLI session using your Docker credentials
docker tag <image> username/repository:tag # Tag <image> for upload to registry
docker push username/repository:tag # Upload tagged image to registry
docker run username/repository:tag # Run image from a registry
The ADD command takes two arguments: a source and a destination. It copies the files from the source on the host into the container's own filesystem at the given destination. If, however, the source is a URL (e.g. https://github.com/user/file/), then the contents of the URL are downloaded and placed at the destination.
Example:
# Usage: ADD [source directory or URL] [destination directory]
ADD /my_app_folder /my_app_folder
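Since ADD's URL downloading and automatic archive extraction can be surprising, plain local copies are often written with the COPY instruction instead; a sketch of the same copy using COPY:

```
# Usage: COPY [source directory] [destination directory]
COPY /my_app_folder /my_app_folder
```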
The CMD command, similarly to RUN, can be used to execute a specific command. However, unlike RUN, it is not executed during the build, but when a container is instantiated from the image being built. It should therefore be considered the initial, default command that gets executed (i.e. run) when a container based on the image is created.
To clarify: an example for CMD would be running an application upon creation of a container, where the application was already installed using RUN (e.g. RUN apt-get install …) inside the image. This default command set with CMD is overridden by any command passed when the container is created.
Example:
# Usage 1: CMD application "argument", "argument", ..
CMD "echo" "Hello docker!"
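CMD also accepts a JSON-array ("exec") form, which avoids wrapping the command in a shell; the example above in exec form would be:

```
# Usage 2: CMD ["application", "argument", "argument", ..]
CMD ["echo", "Hello docker!"]
```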
The ENTRYPOINT directive sets the concrete default application that is used every time a container is created from the image. For example, if you have installed a specific application inside an image and will use this image only to run that application, you can state it with ENTRYPOINT, and whenever a container is created from that image, your application will be the target.
If you couple ENTRYPOINT with CMD, you can remove the "application" from CMD and just leave the "arguments", which will be passed to the ENTRYPOINT.
-
Example:
# Usage: ENTRYPOINT application "argument", "argument", ..
# Remember: arguments are optional. They can be provided by CMD
# or during the creation of a container.
ENTRYPOINT echo

# Usage example with CMD:
# Arguments set with CMD can be overridden during *run*
CMD "Hello docker!"
ENTRYPOINT echo
The ENV command is used to set environment variables (one or more). These variables consist of "key = value" pairs which can be accessed within the container by scripts and applications alike. This functionality of Docker offers an enormous amount of flexibility for running programs.
-
Example:
# Usage: ENV key value
ENV SERVER_WORKS 4
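Variables set with ENV are baked into the image but can be overridden per container at run time with the -e flag; a sketch reusing the SERVER_WORKS variable from the example (my_image is a placeholder image name):

```
$ docker run -e SERVER_WORKS=8 my_image env
```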
The EXPOSE command is used to associate a specified port to enable networking between the running process inside the container and the outside world (i.e. the host). Note that EXPOSE on its own does not publish the port on the host; publishing is done at run time.
-
Example:
# Usage: EXPOSE [port]
EXPOSE 8080
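To actually reach an EXPOSEd port from the host, it has to be published when the container is started; a sketch (my_image is a placeholder image name):

```
$ docker run -p 8080:8080 my_image   # map host port 8080 to container port 8080
$ docker run -P my_image             # publish all EXPOSEd ports on random host ports
```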
The FROM directive is probably the most crucial command amongst all others for Dockerfiles. It defines the base image to use to start the build process. It can be any image, including the ones you have created previously. If a FROM image is not found on the host, Docker will try to find it (and download it) from the Docker image index. It needs to be the first command declared inside a Dockerfile.
-
Example:
# Usage: FROM [image name]
FROM ubuntu
One of the commands that can be set anywhere in the file (although it is better declared near the top) is MAINTAINER. This non-executing command declares the author, hence setting the author field of the image. It should nonetheless come after FROM.
-
Example:
# Usage: MAINTAINER [name]
MAINTAINER authors_name
The RUN command is the central executing directive for Dockerfiles. It takes a command as its argument and runs it to form the image. Unlike CMD, it is actually used to build the image, forming another layer on top of the previous one, which is then committed.
-
Example:
# Usage: RUN [command]
RUN aptitude install -y riak
The USER directive is used to set the UID (or username) that is to run the container based on the image being built.
-
Example:
# Usage: USER [UID]
USER 751
The VOLUME command is used to declare a mount point in the container for externally mounted volumes, e.g. to enable access from your container to a directory on the host machine.
-
Example:
# Usage: VOLUME ["/dir_1", "/dir_2" ..]
VOLUME ["/my_files"]
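The host-side directory is supplied at run time; a sketch that mounts a hypothetical host path onto the /my_files mount point declared above (my_image is a placeholder image name):

```
$ docker run -v /host/data:/my_files my_image
```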
The WORKDIR directive is used to set the working directory in which the commands defined with RUN, CMD and ENTRYPOINT are executed.
-
Example:
# Usage: WORKDIR /path
WORKDIR ~/
I will create a Dockerfile and populate it step by step, with the end result being a Dockerfile that can be used to build a Docker image for running MongoDB containers.
Using the nano text editor, let's start editing our Dockerfile:
$ sudo nano Dockerfile
Although optional, it is always good practice to let yourself and everybody else figure out (when necessary) what this file is and what it is intended to do. For this, we will begin our Dockerfile with comment blocks (#) describing it.
############################################################
# Dockerfile to build MongoDB container images
# Based on Ubuntu
############################################################
# Set the base image to Ubuntu
FROM ubuntu
# File Author / Maintainer
MAINTAINER Example McAuthor
Note: This step is not necessary, given that we are not using the repository right afterwards. However, it can be considered good practice.
# Update the repository sources list
RUN apt-get update
################## BEGIN INSTALLATION ######################
# Install MongoDB Following the Instructions at MongoDB Docs
# Ref: https://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
# Add the package verification key
RUN apt-key adv --keyserver hkp:https://keyserver.ubuntu.com:80 --recv 7F0CEB10
# Add MongoDB to the repository sources list
RUN echo 'deb https://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list
# Update the repository sources list once more
RUN apt-get update
# Install MongoDB package (.deb)
RUN apt-get install -y mongodb-10gen
# Create the default data directory
RUN mkdir -p /data/db
##################### INSTALLATION END #####################
# Expose the default port
EXPOSE 27017
# Default port argument passed to the entrypoint (MongoDB)
CMD ["--port", "27017"]
# Set default container command (exec form, so CMD arguments are appended)
ENTRYPOINT ["/usr/bin/mongod"]
After you have appended everything to the file, it is time to save and exit. Press CTRL+X
and then "Y" to confirm and save the Dockerfile.
############################################################
# Dockerfile to build MongoDB container images
# Based on Ubuntu
############################################################
# Set the base image to Ubuntu
FROM ubuntu
# File Author / Maintainer
MAINTAINER Example McAuthor
# Update the repository sources list
RUN apt-get update
################## BEGIN INSTALLATION ######################
# Install MongoDB Following the Instructions at MongoDB Docs
# Ref: https://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
# Add the package verification key
RUN apt-key adv --keyserver hkp:https://keyserver.ubuntu.com:80 --recv 7F0CEB10
# Add MongoDB to the repository sources list
RUN echo 'deb https://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list
# Update the repository sources list once more
RUN apt-get update
# Install MongoDB package (.deb)
RUN apt-get install -y mongodb-10gen
# Create the default data directory
RUN mkdir -p /data/db
##################### INSTALLATION END #####################
# Expose the default port
EXPOSE 27017
# Default port argument passed to the entrypoint (MongoDB)
CMD ["--port", "27017"]
# Set default container command (exec form, so CMD arguments are appended)
ENTRYPOINT ["/usr/bin/mongod"]
Using the explanations from before, we are ready to create our first MongoDB image with docker!
$ sudo docker build -t my_mongodb .
- Note: The -t [name] flag here is used to tag the image. To learn more about what else you can do during the build, run sudo docker build --help.
Using the image we have built, we can now proceed to the final step: creating a container running a MongoDB instance inside, using a name of our choice (if desired, with --name [name]).
$ sudo docker run --name my_first_mdb_instance -i -t my_mongodb
- Note: If a name is not set, we will have to deal with complex alphanumeric IDs, which can be obtained by listing all the containers using sudo docker ps -l.
- Note: To detach yourself from the container, use the escape sequence CTRL+P followed by CTRL+Q.
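The container from the image above can also be started in the background with the Mongo port published to the host; a sketch (port numbers as in the Dockerfile's EXPOSE, container name is arbitrary):

```
$ sudo docker run -d --name my_background_mdb -p 27017:27017 my_mongodb
```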