Notes about Docker.
Containers are isolated areas of an OS with resource usage limits applied.
Two low-level kernel constructs make containers possible: control groups and namespaces. Working with these constructs directly is hard unless you are a kernel engineer. The Docker engine does its “magic” of creating containers using those constructs.
Namespaces are responsible for isolation. Each container gets its own process tree, root file system, eth0 interface, root user, etc. It acts like an isolated OS, but it shares the kernel with the host (and with other containers). Containers are not aware of each other.
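These namespaces are visible from user space: every process's namespace memberships are exposed as symlinks under `/proc/<pid>/ns`. A quick way to look at them on plain Linux (no Docker needed; processes inside a container would show different namespace IDs than the host):

```shell
# List the namespaces the current shell belongs to.
# Each entry is a symlink like "pid:[4026531836]"; two processes
# in the same namespace see the same ID.
ls -l /proc/$$/ns

# Print just the PID namespace of this process.
readlink /proc/self/ns/pid
```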
Some of the namespaces created for each container: `pid` (process tree), `net` (network interfaces, IPs, routing), `mnt` (file systems and mounts), `ipc` (shared memory), `uts` (hostname) and `user` (user and group IDs).
Control groups (cgroups) are responsible for setting limits: they police the consumption of system resources such as CPU, memory and disk I/O.
Problem they solve: in a multi-container setup, without limits one container could steal all the resources from the host OS, leaving only a few for the other containers.
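In practice you rarely touch cgroups directly: the docker CLI exposes flags that the engine translates into cgroup limits. The flags below are real; the specific values and the `alpine` image are just example choices (this needs a running Docker daemon):

```shell
# Cap the container at 256 MB of RAM and half a CPU core.
# The engine writes these limits into the container's cgroup.
docker container run --rm --memory=256m --cpus=0.5 alpine sleep 5
```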
The Docker engine interfaces with the kernel (namespaces and cgroups).
Before having the engine, Docker was built on top of LXC. LXC had stability problems, so they had to switch to something more stable that they could control, and libcontainer was created. Later on, more things were added: the registry, the HTTP server, compose, orchestration, etc. This monolith, called the daemon, became huge and bloated.
Docker started refactoring the daemon to make it more modular. Around the same time, the Open Container Initiative (OCI) started developing standards for containers.
Today: the Docker daemon mostly implements the REST API. Under the hood it calls `containerd`, which in turn calls `runc`.
When you run `docker container run <params>` on Linux:
- The Docker client posts an API request (`POST /vX.X/containers/create HTTP/1.1`) to the endpoint in the daemon;
- The daemon doesn't have any code to execute or run containers (after the refactor), so it calls `containerd` via gRPC on a local Unix socket;
- Despite the name, `containerd` doesn't actually create the containers. The logic to interact with namespaces and control groups is implemented by the OCI implementation (`runc` on Linux by default);
- `runc` creates the container, hands it back to `containerd` and exits (short-lived process);
- `containerd` sticks around managing the containers created by `runc` (long-lived process, a daemon).
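Since the daemon is mostly a REST API, you can talk to it directly over its Unix socket and bypass the CLI entirely (assumes a local Docker daemon; the API version in the URL may differ on your install):

```shell
# The same kind of call the docker CLI makes: list running
# containers via the daemon's REST API on its Unix socket.
curl --silent --unix-socket /var/run/docker.sock \
  http://localhost/v1.41/containers/json

# The long-lived containerd daemon is visible in the process tree.
ps -e -o pid,ppid,comm | grep containerd
```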
Images are read-only templates for creating containers. They hold all the code and supporting files to run an application: OS files and objects, app files and a manifest. Images are shareable through a registry.
You can think of images as a stopped container and containers as a running image.
Images are a collection of stacked layers. Example:
- (3) Updates
- (2) Redis and Application code
- (1) Ubuntu OS (Base layer)
The manifest JSON file describes things such as the image ID, the tags, when the image was created and the list of layers that were stacked.
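You can see this metadata for a locally pulled image with `docker image inspect` and `docker image history` (the `redis` image here is just an example and must already be pulled):

```shell
# Show the image ID and tags from the local image metadata.
docker image inspect redis --format '{{.Id}} {{.RepoTags}}'

# Show the digests of the stacked layers.
docker image inspect redis --format '{{json .RootFS.Layers}}'

# One line per layer, with the instruction that created it.
docker image history redis
```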
What happens in a `docker image pull ...`?
- The client calls the Docker registry API (defaults to Docker Hub) to fetch the fat manifest;
- It fetches the image manifest for your architecture;
- It fetches the layers listed in that image manifest.
Fat manifest (a.k.a. manifest list): a list of image manifests, one per architecture. Image manifest: the actual manifest for the image for a specific architecture (ARM, x86-64, etc.).
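You can look at the fat manifest without pulling anything: `docker manifest inspect` queries the registry and prints the manifest list (needs network access to the registry; `redis` is just an example image):

```shell
# Prints the manifest list: one manifest digest per platform
# (linux/amd64, linux/arm64, ...), fetched from the registry.
docker manifest inspect redis
```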
```
☁ ~ docker image pull redis
Using default tag: latest
latest: Pulling from library/redis
8559a31e96f4: Pull complete
85a6a5c53ff0: Pull complete
b69876b7abed: Pull complete
a72d84b9df6a: Pull complete
5ce7b314b19c: Pull complete
04c4bfb0b023: Pull complete
Digest: sha256:800f2587bf3376cb01e6307afe599ddce9439deafbd4fb8562829da96085c9c5
Status: Downloaded newer image for redis:latest
docker.io/library/redis:latest
```
Images are hosted on registries and referenced as `<registry>/<repository>:<tag>`. When pulling an image, Docker defaults to the Docker Hub registry (https://docker.io) and to the `latest` tag. These two commands are equivalent:
```
docker image pull redis
docker image pull docker.io/redis:latest
```
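The defaulting rules can be sketched as a small normalization function (my own sketch, not Docker's actual code; note that Docker also maps official images into the `library/` repository):

```shell
# Normalize an image reference the way docker pull does:
# missing tag -> :latest, bare name -> docker.io/library/<name>.
# A sketch; real reference parsing has more cases (ports, digests).
normalize_ref() {
  ref=$1
  # Add the default tag if none is present.
  case "$ref" in
    *:*) ;;
    *) ref="$ref:latest" ;;
  esac
  # Bare names (no registry/repository path) go to Docker Hub's
  # official "library" repository.
  case "$ref" in
    */*) ;;
    *) ref="docker.io/library/$ref" ;;
  esac
  echo "$ref"
}

normalize_ref redis    # -> docker.io/library/redis:latest
normalize_ref redis:6  # -> docker.io/library/redis:6
```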
- Use official images when possible;
- Be explicit when addressing image versions (do not rely on the implicit `latest` tag).