mauri/docker

Forked from medallia/docker.

Hacks and modifications of the open-source application container engine.
Docker

Medallia Updates

Features added on top of stock docker:

Routed network driver

Provides a transparent way to assign multiple IP addresses to a container. It uses standard routing protocols to share, across the cluster, the information of where each container is running, so no distributed storage or separate processes are needed as the source of truth. It currently uses the Quagga OSPF implementation.

The regular veth pair creation is then replaced by the following sequence of events:

  • Creates a veth pair.
  • Moves one end into the container's network namespace.
  • Renames the container-side veth to eth0.
  • Adds a route to 0.0.0.0/0 via eth0 inside the container.
  • Assigns the requested IP addresses to the container's eth0.
  • Adds a route to each container IP via veth0 on the host.
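The sequence above can be sketched as a dry-run shell script. The interface names, the use of nsenter, and the /32 host routes are illustrative assumptions (the real driver does this programmatically); the script only prints the equivalent ip commands instead of running them, since the real thing needs root and a live container namespace.

```shell
# Dry-run sketch of the routed driver's veth setup sequence.
# Interface names and the nsenter invocations are assumptions for
# illustration; this only prints the commands it would run.
setup_routed_veth() {  # setup_routed_veth CONTAINER_PID "IP1,IP2,..."
  local pid=$1 IFS=,
  echo "ip link add veth0 type veth peer name veth1"
  echo "ip link set veth1 netns $pid"
  echo "nsenter -t $pid -n ip link set veth1 name eth0"
  echo "nsenter -t $pid -n ip link set eth0 up"
  echo "nsenter -t $pid -n ip route add default dev eth0"
  for addr in $2; do
    echo "nsenter -t $pid -n ip addr add $addr/32 dev eth0"
    echo "ip route add $addr/32 dev veth0"  # host-side return route
  done
}

setup_routed_veth 1234 "192.168.13.1,10.112.20.2"
```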

The routes to the container addresses are then automatically propagated by the enabled routing protocol. In essence, each host in the cluster acts as a router.

The configuration is quite simple. For example, the following Quagga ospfd.conf file routes containers in the networks 10.112.0.0/12, 192.168.0.0/16 and 10.255.255.0/24 using the host's eth1 interface. Any containers with IP addresses in those networks will be able to talk to each other, regardless of the host where they are running.

! Bootstrap Config
router ospf
 ospf router-id 10.112.11.6
 redistribute kernel
 passive-interface default
 no passive-interface eth1
 network 10.112.0.0/12 area 0.0.0.0
 network 192.168.0.0/16 area 0.0.0.0
 network 10.255.255.0/24 area 0.0.0.0
!
log syslog
!
interface eth1
!ip ospf network point-to-point
!
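As a quick sanity check that a container address is covered by one of the advertised networks, a small CIDR-membership helper can be used. This is pure shell arithmetic for illustration only; it is not part of Quagga or of this repo.

```shell
# Check whether an IPv4 address falls inside a CIDR network, e.g. to
# confirm a container IP is covered by one of the advertised networks.
# Illustrative helper only; not part of Quagga or this repo.
ip2int() {  # dotted quad -> 32-bit integer
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

in_cidr() {  # in_cidr ADDR NET/PREFIX -> prints yes or no
  local net=${2%/*} prefix=${2#*/} mask
  mask=$(( prefix ? (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF : 0 ))
  [ $(( $(ip2int "$1") & mask )) -eq $(( $(ip2int "$net") & mask )) ] \
    && echo yes || echo no
}

in_cidr 10.112.20.2 10.112.0.0/12   # prints "yes"
```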

To launch a container in routed mode, specify the network and add a label containing the list of IP addresses you want to assign to the container:

docker run -it --net=routed --label io.docker.network.endpoint.ip4addresses="192.168.13.1,10.112.20.2" ubuntu

Alternatively, an 'ip-address' option is available to supply the addresses:

docker run -it --net=routed --ip-address="192.168.13.1,10.112.20.2" ubuntu

IP tables integration

Works with the routed driver. Lets you specify which IPs are allowed to connect to the container.

You specify it via the container label "io.docker.network.endpoint.ingressAllowed". For example:

docker run -it --net=routed --ip-address=192.168.13.13 --label io.docker.network.endpoint.ingressAllowed="1.1.1.1/24,2.2.2.2" ubuntu /bin/bash

The parameter accepts a comma-separated list of values, each of which can be:

  • Single IP
  • IP Net (CIDR)
  • IP Range (IP-IP)
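The three accepted forms can be told apart purely by shape, as this small shell sketch shows. The helper names are made up for illustration; the actual driver parses the label in Go.

```shell
# Classify one ingressAllowed entry by its shape. Helper names are
# illustrative; this is not the driver's actual parsing code.
classify_entry() {
  case "$1" in
    */*) echo cidr ;;   # IP net, e.g. 1.1.1.1/24
    *-*) echo range ;;  # IP range, e.g. 10.0.0.1-10.0.0.9
    *)   echo ip ;;     # single IP
  esac
}

parse_ingress() {  # split the comma-separated label value
  local IFS=,
  for entry in $1; do
    printf '%s -> %s\n' "$entry" "$(classify_entry "$entry")"
  done
}

parse_ingress "1.1.1.1/24,2.2.2.2,10.0.0.1-10.0.0.9"
```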

The host machine is expected to have the following iptables chains for this feature to work (DCIB adds them in DCs):

  • CONTAINERS: where references to container-specific chains are added. This chain is expected to be referenced from the FORWARD chain.
  • CONTAINER-REJECT: the chain that container-specific chains jump to in case of rejection.

For local development, you can execute these commands:

sudo iptables -N CONTAINERS
sudo iptables -A CONTAINERS -j RETURN

sudo iptables -N CONTAINER-REJECT
sudo iptables -A CONTAINER-REJECT -p tcp -j REJECT --reject-with tcp-reset
sudo iptables -A CONTAINER-REJECT -j REJECT

sudo iptables -I FORWARD 1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -I FORWARD 2 -p icmp -j ACCEPT
sudo iptables -I FORWARD 3 -m state --state INVALID -j DROP
sudo iptables -I FORWARD 4 -j CONTAINERS
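The per-container rules that plug into these chains are not shown above. The following dry-run sketch prints a plausible layout; the CONTAINER-&lt;IP&gt; chain name and the exact rules are assumptions for illustration, not taken from this repo's driver code.

```shell
# Dry-run sketch: print per-container rules wired into the CONTAINERS
# and CONTAINER-REJECT chains. Chain naming and rule layout are
# assumptions, not the driver's actual behavior.
emit_container_rules() {  # emit_container_rules CONTAINER_IP "SRC1,SRC2,..."
  local ip=$1 chain="CONTAINER-$1" IFS=,
  echo "iptables -N $chain"
  for src in $2; do
    echo "iptables -A $chain -s $src -d $ip -j ACCEPT"
  done
  echo "iptables -A $chain -d $ip -j CONTAINER-REJECT"
  echo "iptables -I CONTAINERS 1 -d $ip -j $chain"
}

emit_container_rules 192.168.13.13 "1.1.1.1/24,2.2.2.2"
```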

If the label is not specified, there is no ingress restriction enforced.

LibNetwork updates

For libnetwork updates, do not work directly in the vendor folder. Instead, clone the Medallia libnetwork fork, make your changes there, and then vendor them into the docker repo by running the hack/vendor-libnetwork-medallia.sh script (after updating it with the correct changeset hash).

To run unit tests on the libnetwork repo:

make build
docker run --privileged --rm -ti -w /go/src/github.com/docker/libnetwork -v `pwd`:/go/src/github.com/docker/libnetwork libnetworkbuild:latest /bin/bash
INSIDECONTAINER=-incontainer=true godep go test -test.parallel 3 -test.v -run TestParseIPRange

Auto volume mount (NFS/Ceph)

docker run -v 10.112.12.13//foo:/foo:nfs,rw ubuntu 
docker run -v ceph-volume-foo:/foo:ceph,rw ubuntu
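The NFS form encodes the server and export path in the source field. A small parse sketch, with the field layout inferred from the example above (not from the driver source):

```shell
# Parse the NFS volume spec "SERVER//EXPORT:MOUNTPOINT:TYPE,OPTS".
# The field layout is inferred from the README example, not from the
# driver's actual code.
parse_nfs_volume() {
  local spec=$1
  local src=${spec%%:*}      # SERVER//EXPORT
  local rest=${spec#*:}      # MOUNTPOINT:TYPE,OPTS
  printf 'server=%s export=/%s mount=%s opts=%s\n' \
    "${src%%//*}" "${src#*//}" "${rest%%:*}" "${rest#*:}"
}

parse_nfs_volume "10.112.12.13//foo:/foo:nfs,rw"
# server=10.112.12.13 export=/foo mount=/foo opts=nfs,rw
```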

Build

DOCKER_BUILD_PKGS=ubuntu-xenial make deb

Docker: the container engine

Docker is an open source project to pack, ship and run any application as a lightweight container.

Docker containers are both hardware-agnostic and platform-agnostic. This means they can run anywhere, from your laptop to the largest cloud compute instance and everything in between - and they don't require you to use a particular language, framework or packaging system. That makes them great building blocks for deploying and scaling web apps, databases, and backend services without depending on a particular stack or provider.

Docker began as an open-source implementation of the deployment engine which powered dotCloud, a popular Platform-as-a-Service. It benefits directly from the experience accumulated over several years of large-scale operation and support of hundreds of thousands of applications and databases.


Security Disclosure

Security is very important to us. If you have any issue regarding security, please disclose the information responsibly by sending an email to [email protected] and not by creating a github issue.

Better than VMs

A common method for distributing applications and sandboxing their execution is to use virtual machines, or VMs. Typical VM formats are VMware's vmdk, Oracle VirtualBox's vdi, and Amazon EC2's ami. In theory these formats should allow every developer to automatically package their application into a "machine" for easy distribution and deployment. In practice, that almost never happens, for a few reasons:

  • Size: VMs are very large which makes them impractical to store and transfer.
  • Performance: running VMs consumes significant CPU and memory, which makes them impractical in many scenarios, for example local development of multi-tier applications, and large-scale deployment of cpu and memory-intensive applications on large numbers of machines.
  • Portability: competing VM environments don't play well with each other. Although conversion tools do exist, they are limited and add even more overhead.
  • Hardware-centric: VMs were designed with machine operators in mind, not software developers. As a result, they offer very limited tooling for what developers need most: building, testing and running their software. For example, VMs offer no facilities for application versioning, monitoring, configuration, logging or service discovery.

By contrast, Docker relies on a different sandboxing method known as containerization. Unlike traditional virtualization, containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary for containerization, including Linux with openvz, vserver and more recently lxc, Solaris with zones, and FreeBSD with Jails.

Docker builds on top of these low-level primitives to offer developers a portable format and runtime environment that solves all four problems. Docker containers are small (and their transfer can be optimized with layers), they have basically zero memory and cpu overhead, they are completely portable, and are designed from the ground up with an application-centric design.

Perhaps best of all, because Docker operates at the OS level, it can still be run inside a VM!

Plays well with others

Docker does not require you to buy into a particular programming language, framework, packaging system, or configuration language.

Is your application a Unix process? Does it use files, tcp connections, environment variables, standard Unix streams and command-line arguments as inputs and outputs? Then Docker can run it.

Can your application's build be expressed as a sequence of such commands? Then Docker can build it.

Escape dependency hell

A common problem for developers is the difficulty of managing all their application's dependencies in a simple and automated way.

This is usually difficult for several reasons:

  • Cross-platform dependencies. Modern applications often depend on a combination of system libraries and binaries, language-specific packages, framework-specific modules, internal components developed for another project, etc. These dependencies live in different "worlds" and require different tools - these tools typically don't work well with each other, requiring awkward custom integrations.

  • Conflicting dependencies. Different applications may depend on different versions of the same dependency. Packaging tools handle these situations with various degrees of ease - but they all handle them in different and incompatible ways, which again forces the developer to do extra work.

  • Custom dependencies. A developer may need to prepare a custom version of their application's dependency. Some packaging systems can handle custom versions of a dependency, others can't - and all of them handle it differently.

Docker solves the problem of dependency hell by giving the developer a simple way to express all their application's dependencies in one place, while streamlining the process of assembling them. If this makes you think of XKCD 927, don't worry. Docker doesn't replace your favorite packaging systems. It simply orchestrates their use in a simple and repeatable way. How does it do that? With layers.

Docker defines a build as running a sequence of Unix commands, one after the other, in the same container. Build commands modify the contents of the container (usually by installing new files on the filesystem), the next command modifies it some more, etc. Since each build command inherits the result of the previous commands, the order in which the commands are executed expresses dependencies.

Here's a typical Docker build process:

FROM ubuntu:12.04
RUN apt-get update && apt-get install -y python python-pip curl
RUN curl -sSL https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv
RUN cd helloflask-master && pip install -r requirements.txt

Note that Docker doesn't care how dependencies are built - as long as they can be built by running a Unix command in a container.

Getting started

Docker can be installed either on your computer for building applications or on servers for running them. To get started, check out the installation instructions in the documentation.

Usage examples

Docker can be used to run short-lived commands, long-running daemons (app servers, databases, etc.), interactive shell sessions, etc.

You can find a list of real-world examples in the documentation.

Under the hood

Under the hood, Docker is built on a number of lower-level components, including libcontainer and libnetwork.

Contributing to Docker

Want to hack on Docker? Awesome! We have instructions to help you get started contributing code or documentation.

These instructions are probably not perfect, please let us know if anything feels wrong or incomplete. Better yet, submit a PR and improve them yourself.

Getting the development builds

Want to run Docker from a master build? You can download master builds at master.dockerproject.org. They are updated with each commit merged into the master branch.

Don't know how to use that super cool new feature in the master build? Check out the master docs at docs.master.dockerproject.org.

How the project is run

Docker is a very, very active project. If you want to learn more about how it is run, or want to get more involved, the best place to start is the project directory.

We are always open to suggestions on process improvements, and are always looking for more maintainers.

Talking to other Docker users and contributors

Internet Relay Chat (IRC)

IRC is a direct line to our most knowledgeable Docker users; we have both the #docker and #docker-dev group on irc.freenode.net. IRC is a rich chat protocol but it can overwhelm new users. You can search our chat archives.

Read our IRC quickstart guide for an easy way to get started.
  • Docker Community Forums: the Docker Engine group is for users of the Docker Engine project.
  • Google Groups: the docker-dev group is for contributors and other people working on the Docker project. You can join this group without a Google account by sending an email to [email protected]. You'll receive a join-request message; simply reply to the message to confirm your subscription.
  • Twitter: follow Docker's Twitter feed to get updates on our products. You can also tweet us questions or just share blogs or stories.
  • Stack Overflow: Stack Overflow has over 7000 Docker questions listed. We regularly monitor Docker questions, and so do many other knowledgeable Docker users.

Legal

Brought to you courtesy of our legal counsel. For more context, please see the NOTICE document in this repo.

Use and transfer of Docker may be subject to certain restrictions by the United States and other governments.

It is your responsibility to ensure that your use and/or transfer does not violate applicable laws.

For more information, please see https://www.bis.doc.gov

Licensing

Docker is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

Other Docker Related Projects

There are a number of projects under development that are based on Docker's core technology. These projects expand the tooling built around the Docker platform to broaden its application and utility.

  • Docker Registry: Registry server for Docker (hosting/delivery of repositories and images)
  • Docker Machine: Machine management for a container-centric world
  • Docker Swarm: A Docker-native clustering system
  • Docker Compose (formerly Fig): Define and run multi-container apps
  • Kitematic: The easiest way to use Docker on Mac and Windows

If you know of another project underway that should be listed here, please help us keep this list up-to-date by submitting a PR.

Awesome-Docker

You can find more projects, tools and articles related to Docker on the awesome-docker list. Add your project there.
