Docker & Container Fundamentals

What Docker Actually Is

A Kubernetes cluster is throwing "Docker is deprecated" warnings on every upgrade. A platform engineer asks their team: "Should we worry?" Someone says "Kubernetes is removing Docker support." Someone else says "Docker Inc. went bankrupt." Another person says "We have to migrate all our images." All three are wrong, in different ways. Kubernetes removed dockershim — an internal adapter it no longer needs — but continues to run containers built by Docker, on the exact same runtime (containerd) Docker itself has been using for years. Nothing about the images, the Dockerfiles, or the runtime behavior changed. Only one small piece of glue code.

That confusion is the entire reason this lesson exists. "Docker" has been many things across its history: a product, a company, a CLI, a daemon, a whole ecosystem. Today it is four distinct layers, and the "Docker removal" conversation only makes sense if you can name them. This lesson is the taxonomy: what runs under docker run, what Kubernetes actually talks to, and what the OCI specification means for anyone shipping containers.


"Docker" Is Four Things

When someone says "Docker," they could mean any of these:

  1. The CLI — the docker binary you type commands into.
  2. The daemon (dockerd) — a long-running process that manages images, containers, networks, and volumes. The CLI talks to it over a Unix socket.
  3. The runtime(s) — the lower-level components that actually start containers. This is a layered stack: containerd (high-level runtime) + runc (low-level runtime). Docker's daemon delegates to these.
  4. The company and ecosystem — Docker Inc., Docker Hub, Docker Desktop, and the marketing of "containers" as a platform.

Confusing these four is why "Kubernetes is removing Docker" was misinterpreted for months. Kubernetes removed one piece of plumbing between itself and the runtime; the images, the Dockerfiles, the registries, and the containers themselves kept working.


The Layered Architecture

docker CLI         User-facing command ("I want nginx running")
      ↓  HTTP / Unix socket
dockerd (daemon)   High-level orchestration: image pulls, builds (docker build),
                   networking (docker0 bridge); also volumes, logs, the HTTP API surface
      ↓  gRPC
containerd         Mid-level runtime (CRI): image distribution (pull / push / unpack),
                   container lifecycle (create, start, kill); also used directly by the
                   Kubernetes kubelet
      ↓
runc               Low-level OCI runtime: reads the OCI runtime-spec JSON, performs
                   clone / pivot_root / setns / cgroup setup; replaceable with crun,
                   runsc (gVisor), kata-runtime

Each layer has a single job and a clear boundary. Each is open-source, maintained separately, and replaceable.

Layer 1: The CLI

which docker
# /usr/bin/docker

docker version
# Client: Docker Engine - Community
#  Version: 25.0.3
#  ...
# Server: Docker Engine - Community
#  Engine: 25.0.3
#  ...

# The CLI is a thin HTTP client
strace -f -e trace=connect docker ps 2>&1 | grep -i sock | head
# connect(3, {sa_family=AF_UNIX, sun_path="/var/run/docker.sock"}, 23) = 0
# ← the CLI connected to the daemon's Unix socket

The CLI does not start containers. It sends a request to the daemon and formats the response.
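Because the CLI is just an HTTP client, you can reproduce docker ps with a few lines of raw socket code. A minimal sketch in Python: the socket path and the API version (v1.43) are common defaults and may differ on your machine, and docker_ps_raw only works when dockerd is actually running.

```python
import socket

DOCKER_SOCK = "/var/run/docker.sock"  # default dockerd socket; differs in rootless setups

def build_request(path: str) -> bytes:
    """Build the raw HTTP request the docker CLI would send for this path."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        "Host: docker\r\n"          # dockerd ignores the host header value
        "Connection: close\r\n"
        "\r\n"
    ).encode()

def docker_ps_raw() -> str:
    """Rough equivalent of `docker ps`: GET /containers/json over the Unix socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(DOCKER_SOCK)
        s.sendall(build_request("/v1.43/containers/json"))
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode()

# Same idea with curl:
#   curl --unix-socket /var/run/docker.sock http://localhost/v1.43/containers/json
```

Everything the CLI does (run, exec, logs, build) is some variation of this request/response cycle against the daemon's API.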

Layer 2: The Daemon (dockerd)

# The daemon process
ps aux | grep dockerd | grep -v grep
# root  1234  ... /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

# Its socket
ls -l /var/run/docker.sock
# srw-rw---- 1 root docker 0 Apr 20 09:00 /var/run/docker.sock

# Its config
cat /etc/docker/daemon.json 2>/dev/null
# {"log-driver": "json-file", "storage-driver": "overlay2", ...}

dockerd is the heavy-lifting layer. It manages images, builds (with BuildKit), coordinates the docker0 bridge and NAT rules, hosts the HTTP API, and calls the next layer down (containerd) to actually run containers.

This is the daemon that must be running for docker run to work. If you systemctl stop docker (and docker.socket, if socket activation is enabled), the CLI fails immediately with "Cannot connect to the Docker daemon".

Layer 3: containerd

# containerd is a separate daemon that dockerd talks to
ps aux | grep containerd | grep -v grep
# root  1200 ... /usr/bin/containerd
# root  1250 ... containerd-shim-runc-v2 ...  (one shim per container)

ls /run/containerd/
# containerd.sock        ← containerd's own socket
# debug.sock

# Talk to containerd directly (bypass dockerd)
sudo ctr ns ls
# NAME       LABELS
# default
# moby       ← docker's containers live in this namespace
# k8s.io     ← kubernetes' containers live here if using containerd directly

sudo ctr -n moby containers ls
# CONTAINER   IMAGE   RUNTIME
# abc123...   ...     io.containerd.runc.v2

containerd implements the Container Runtime Interface (CRI) and is the runtime Kubernetes uses directly. It exposes a rich API (image pulls and unpacks, container lifecycle, a snapshotter for overlay layers, namespace separation) and is designed to serve multiple front-ends: Docker is one; Kubernetes' kubelet is another.

This is the piece that was always "doing the work" under Docker. When Kubernetes "deprecated Docker," it just meant "we will talk to containerd directly instead of going through dockershim → dockerd → containerd, because dockershim adds nothing."

Layer 4: runc (and the OCI Runtime Spec)

which runc
# /usr/bin/runc

# runc is a tiny binary that does one thing:
# read an OCI runtime-spec JSON and execute the container
runc --help | head -20

# There is no runc daemon. runc starts, executes clone()+setns()+pivot_root(),
# and exits. The container keeps running under containerd-shim-runc-v2.

# See an OCI spec in action
sudo find /run/containerd -name config.json | head -1 | xargs sudo cat | jq .process
# {
#   "terminal": false,
#   "args": ["/docker-entrypoint.sh", "nginx", "-g", "daemon off;"],
#   "env": ["PATH=/usr/local/sbin:...","NGINX_VERSION=1.25.3"],
#   "cwd": "/",
#   ...
# }

runc reads a JSON spec (OCI runtime specification) that describes every detail of the container: binary to run, env vars, capabilities, namespaces to create, rootfs path, cgroup assignments, seccomp filter. It then invokes the kernel calls (clone, setns, pivot_root, cgroup attach) in the correct order and execves the target binary.

runc has no daemon. It is called, does its work, and exits. The running container is kept alive by containerd-shim-runc-v2, a tiny process per container whose sole job is to own the container's stdout/stderr and wait on its exit.
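To see how small runc's contract really is, you can build a stripped-down runtime-spec config yourself. A sketch in Python: the field names follow the OCI runtime spec, but this is an illustration, not a complete or runnable bundle.

```python
import json

def minimal_oci_config(args, rootfs="rootfs"):
    """Build a stripped-down OCI runtime-spec config (illustrative, not complete)."""
    return {
        "ociVersion": "1.0.2",
        "process": {
            "terminal": False,
            "user": {"uid": 0, "gid": 0},
            "args": args,                      # the binary + argv to exec inside
            "env": ["PATH=/usr/sbin:/usr/bin:/sbin:/bin"],
            "cwd": "/",
        },
        "root": {"path": rootfs, "readonly": False},
        "linux": {
            # namespaces runc should create with clone()/unshare()
            "namespaces": [{"type": t} for t in
                           ("pid", "mount", "network", "ipc", "uts")],
        },
    }

config = minimal_oci_config(["/bin/sh", "-c", "echo hello"])
print(json.dumps(config, indent=2))
```

runc expects this file as config.json at the bundle root, next to the rootfs directory; runc spec generates a fuller default you can compare against.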


The OCI: The Standard That Makes It All Interoperable

The Open Container Initiative (OCI) is the standards body (hosted under the Linux Foundation) that publishes three specs:

  1. OCI Image Spec — what a container image is: a tarball of layers + a JSON manifest. docker pull, docker push, and docker build all produce/consume OCI images.
  2. OCI Runtime Spec — what a container bundle is: a rootfs directory + a config.json describing how to run it. runc (and alternatives) read this.
  3. OCI Distribution Spec — the HTTP API of a container registry. Docker Hub, GHCR, ECR, Harbor all implement this.

Because these are vendor-neutral specs, you can:

  • Build an image with docker build and run it with podman (or containerd + runc, or CRI-O, or…).
  • Push it to Docker Hub and GHCR and ECR without changing the image.
  • Swap runc for crun (C implementation, faster startup), runsc (gVisor, user-space kernel for stronger isolation), or kata-runtime (each container in a microVM).
KEY CONCEPT

OCI compliance is the reason "container" means the same thing across tools. An image built with buildah, pushed to a Harbor registry, and run on a Kubernetes cluster using containerd + crun will behave identically to the same image run with docker run. This decoupling is what makes the container ecosystem healthy — no single vendor owns the contract.
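The glue behind this interoperability is content addressing: every OCI blob (layer, config, manifest) is named by the digest of its bytes, so any tool can verify what any other tool produced. A minimal sketch of computing an OCI digest string in Python:

```python
import hashlib

def oci_digest(blob: bytes) -> str:
    """Digest string in the form used throughout the OCI specs: '<algorithm>:<hex>'."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

# A manifest references its layers and config by digest. Change one byte of a
# layer and the layer digest changes, so the manifest changes, and the image's
# identity changes with it -- regardless of which tool built or pulled it.
layer = b"fake layer bytes"
print(oci_digest(layer))
```

This is why an image pulled by podman, containerd, or Docker is bit-for-bit verifiable against the same registry manifest.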


Alternative Stacks

Front-end            | Daemon            | Mid-runtime          | Low-runtime                  | Typical user
docker CLI           | dockerd           | containerd           | runc                         | Developers, CI, local dev
podman CLI           | none (daemonless) | conmon               | runc / crun                  | Rootless workloads, RHEL
Kubernetes kubelet   | none              | containerd or CRI-O  | runc / crun / kata / gvisor  | Production clusters
nerdctl CLI          | none              | containerd           | runc                         | containerd users wanting a Docker-like CLI
finch / lima         | lima VM           | containerd           | runc                         | macOS devs avoiding Docker Desktop licensing

Notice that the low-level runtime is almost always runc (or a runc-compatible binary) because the OCI runtime spec is the universal contract.

Kubernetes' "Docker Deprecation," Explained Precisely

  • Kubernetes' kubelet needs to talk to a container runtime. Historically it used a shim layer called dockershim to translate between its CRI (Container Runtime Interface) and the Docker Engine API.
  • Since containerd implements CRI natively, dockershim had no reason to exist.
  • Dockershim was deprecated in Kubernetes 1.20 and removed in 1.24.
  • Clusters continue to run containers built by Docker, pulled from Docker Hub, on the same containerd + runc stack. Nothing about images, the container runtime, or the container's behavior changed.
  • Docker Desktop, docker build, Docker Compose — all still work for local development. They were never the target of the deprecation.
WAR STORY

A team spent a weekend "migrating off Docker" after a nervous vendor memo. They rewrote CI, retrained engineers, and proudly announced they had "removed Docker from production." Six months later an audit found that every image they built was still an OCI image, every registry was still OCI-compliant, and their containerd + runc runtime was identical to what was under Docker before. The actual code change in production: zero. The lesson: understand the stack before reacting to vendor messaging. Knowing the four layers (CLI, daemon, containerd, runc) makes it obvious which layer a given change affects — and spares you from unnecessary migrations.


When "Docker" Matters and When It Does Not

Docker-specific things (a different runtime will not help):

  • The docker CLI ergonomics (docker run, docker exec, docker compose).
  • The docker build / BuildKit build system.
  • Docker Hub as a registry.
  • Docker Desktop on macOS/Windows (uses a Linux VM under the hood).

OCI-standard things (any OCI-compliant runtime works):

  • Container images (every image is an OCI image).
  • Container runtime behavior (namespaces, cgroups, process lifecycle).
  • Registry interop (every registry speaks OCI Distribution).
  • Dockerfile format (every modern builder supports Dockerfile syntax).

So: you can run Docker images without Docker. You can push to Docker Hub without Docker. You can use Dockerfiles without Docker. The word "Docker" mostly describes the tooling you are using to produce and consume OCI artifacts.
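Even the image name you type is standardized: docker run nginx and podman run nginx resolve the same short name through the same rules. A sketch of that normalization in Python; the rules below (default registry, library/ namespace for official images, latest tag) are a simplification of the full reference grammar and ignore @sha256 digests.

```python
def normalize_reference(ref: str):
    """Expand a short image reference to (registry, repository, tag).

    Simplified sketch; real parsers implement the complete reference grammar.
    """
    # Split off the tag: a ':' in the last path component
    name, tag = ref, "latest"
    if ":" in ref.rsplit("/", 1)[-1]:
        name, tag = ref.rsplit(":", 1)
    # The first component is a registry host only if it contains '.' or ':'
    # or is exactly 'localhost'; otherwise the default registry applies.
    parts = name.split("/")
    if len(parts) > 1 and ("." in parts[0] or ":" in parts[0] or parts[0] == "localhost"):
        registry, repository = parts[0], "/".join(parts[1:])
    else:
        registry, repository = "docker.io", name
        if "/" not in repository:
            repository = "library/" + repository   # official images
    return registry, repository, tag

print(normalize_reference("nginx"))
# → ('docker.io', 'library/nginx', 'latest')
```

The same expansion happens whether the puller is dockerd, containerd, or podman, which is why retagging an image for GHCR or ECR changes only its name, never its bytes.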


Checking What You're Actually Running

# Is Docker running on this host?
systemctl is-active docker 2>/dev/null
# active

# Check which runtime dockerd is using
docker info | grep -E 'Runtime|Storage|Kernel|Operating'
# Default Runtime: runc
# Storage Driver: overlay2
# Kernel Version: 6.5.0-generic
# Operating System: Ubuntu 22.04.3 LTS

# On a Kubernetes node
crictl version
# Version:  0.1.0
# RuntimeName:  containerd
# RuntimeVersion:  1.7.22
# RuntimeApiVersion:  v1

crictl ps
# (containers managed by the kubelet via containerd — Docker not involved)
PRO TIP

docker info is the single best "what is this machine running" snapshot. It tells you the default runtime (almost always runc), the storage driver (almost always overlay2), the kernel, the cgroup version, and the logging driver. When debugging a weird Docker behavior, paste the docker info output into the ticket first — most infra folks can diagnose from that alone.


How This Course Uses the Word "Docker"

For the rest of this course, we use Docker as shorthand for "the Docker CLI + daemon flow" — i.e., what you get when you apt install docker.io and type docker run. Most of what we cover works identically on podman, nerdctl, or direct ctr / crictl usage. Where behavior differs (rootless by default, socket paths, daemon model), we will call it out.

If you are on Kubernetes, every concept in Modules 2–6 applies — you are just looking at images, runtime behavior, and cgroups through the Kubernetes lens instead of the docker CLI. That is a strength of the OCI standardization, not a coincidence.


Key Concepts Summary

  • "Docker" is four layers. CLI, daemon (dockerd), mid-runtime (containerd), low-runtime (runc). Confusing them is the #1 source of misconceptions.
  • runc does the kernel work. The tiny binary that reads an OCI runtime spec JSON and invokes clone/setns/pivot_root/cgroup-attach.
  • containerd is the orchestrator of runtimes. Used directly by Kubernetes; also used by dockerd under the hood.
  • The OCI publishes three specs: image, runtime, distribution. These make the ecosystem interoperable.
  • Kubernetes' "Docker deprecation" removed dockershim. Images, runtime behavior, and registries were never affected.
  • runc is replaceable. crun (faster), runsc/gVisor (stronger isolation), kata-runtime (microVMs per container) all drop in.
  • Docker Desktop runs a Linux VM on macOS/Windows. It is a packaging convenience, not a different runtime model.
  • docker info is the snapshot. It reveals the runtime, storage driver, kernel, and cgroup version in one command.

Common Mistakes

  • Saying "Kubernetes removed Docker" and panicking. Kubernetes removed dockershim, not Docker images, not Dockerfiles, not the containerd runtime.
  • Treating Docker Hub, Docker CLI, and the Docker daemon as one thing. They are independent — you can use the CLI without Docker Hub and vice versa.
  • Assuming images built by docker build are "Docker format." They are OCI images. Any OCI-compliant runtime can run them.
  • Picking a non-OCI-compliant tool. Before 2016 this was a concern; today essentially every mainstream container tool is OCI-compliant.
  • Running two daemons that both want port/socket ownership (e.g. Docker + Podman + nerdctl all fighting for /var/run/docker.sock). Pick one.
  • Thinking you need Docker Desktop on Linux. On Linux you install docker.io directly and the daemon runs natively. Desktop is for macOS/Windows where a Linux VM is needed.
  • Forgetting that docker-compose (v1, Python) and docker compose (v2, Go plugin) are different binaries. The hyphen matters.
  • Debugging a "Docker bug" without checking which layer is at fault. Symptoms in the CLI often originate in runc, containerd, or the kernel.

KNOWLEDGE CHECK

You are on a Kubernetes node running containerd. You want to see what containers are running and pull a new image, but `docker` is not installed. Which of these will work?