Docker & Container Fundamentals

The docker run Deep Dive

A new engineer types docker run nginx and stares at the terminal. Nothing seems to happen — the prompt is gone, nginx is running in the foreground, and they cannot type anything. They hit Ctrl-C and nginx dies. They try again with docker run -d nginx and get a hex string back. They run docker ps and see the container. They try to curl localhost:80 and get connection refused. They ask: "But the container is running. Why can't I reach it?"

Every docker run flag is answering a specific question: detached or foreground? Remove after exit? Attach storage? Expose ports? Join which network? The defaults are usually wrong for what you actually want. This lesson is the reference for what every common flag does, what kernel feature it maps to, and the combinations you will use constantly in dev and production.


The Minimum

docker run <image>

That is the minimum. It:

  1. Pulls the image if not present.
  2. Creates a new container from it (allocates an overlay rootfs, new namespaces, cgroup).
  3. Attaches the calling terminal's stdin/stdout/stderr to the container's main process (the foreground default).
  4. Runs the image's ENTRYPOINT + CMD.
  5. Waits for the container to exit, then removes... nothing (unless you passed --rm).

On exit, the container is stopped but not removed. It still has its overlay rootfs and logs on disk. This is why docker ps -a might show dozens of exited containers you forgot about.

# All containers, running or stopped
docker ps -a

# Clean up stopped containers
docker container prune -f
KEY CONCEPT

docker run is CLI shorthand for "create a container, start it, and attach if foreground." You can do each step separately (docker create, docker start, docker attach) — and production tooling typically does. The all-in-one docker run is optimized for interactive use.
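
The decomposed form looks roughly like this (a sketch; demo is an arbitrary container name):

```shell
docker create --name demo nginx   # allocate rootfs, namespaces, config; prints the ID
docker start demo                 # actually run the ENTRYPOINT + CMD
docker attach demo                # wire your terminal to its stdio (foreground behavior)
```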


The Flag Taxonomy

Think of flags as answering categories of questions:

Category     Flags                                                              Purpose
Lifecycle    -d, --rm, --restart=, --name                                       How long does it run, what happens on exit
I/O          -i, -t, -a, --detach-keys                                          How does my terminal relate to the container
Filesystem   -v, --mount, --tmpfs, -w                                           What does the container see on disk
Networking   -p, --network, --hostname, --add-host, --dns                       How does the container talk to things
Config       -e, --env-file, --label                                            What does the container know at startup
Resources    --memory, --cpus, --pids-limit, --oom-kill-disable                 Resource limits (cgroup values)
Identity     -u, --user, --userns                                               What user runs inside
Security     --cap-add, --cap-drop, --security-opt, --read-only, --privileged   What the container can do to the host
Override     --entrypoint, CMD args appended                                    Change the image's default command

Let's walk through the ones you actually use.


Lifecycle Flags

-d (detached)

docker run -d nginx
# b3f2e87a5c4d9f1e2b3a4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f
# ↑ the full container ID

Runs in the background. You get a container ID back. The terminal is free. This is how you run services.

--rm

docker run --rm alpine echo "hello"
# hello
# (container exits, then is immediately deleted)

Remove the container automatically when it exits. Essential for one-shot commands, tests, and CI jobs — otherwise you accumulate stopped containers.

Do not use --rm with -d. Well, you can — but you lose the debug trail. If the container crashes with -d --rm, it is gone before you can check its logs.

--restart=

docker run -d --restart=unless-stopped --name web nginx

Tells the Docker daemon to restart the container if it exits unexpectedly. Values:

  • no (default) — never restart
  • on-failure[:max-retries] — only on non-zero exit; optionally cap retries
  • always — always restart, including when the Docker daemon comes back up (e.g. after a host reboot)
  • unless-stopped — like always, except a container you explicitly docker stop stays stopped, even across daemon restarts

For anything you want to keep running, use unless-stopped. For production workloads, you use Kubernetes / an orchestrator instead — they handle this better.
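
The restart policy of an existing container can also be changed without recreating it, via docker update (shown here for the web container from the example above):

```shell
# Flip a running container to restart automatically
docker update --restart=unless-stopped web

# Confirm what the daemon recorded
docker inspect web --format '{{.HostConfig.RestartPolicy.Name}}'
# unless-stopped
```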

--name

docker run -d --name myapp nginx
docker logs myapp
docker exec myapp sh
docker rm -f myapp

Gives the container a friendly name so you do not have to remember the hex ID. Without --name, Docker picks a cute adjective-noun combo (optimistic_tesla, wizardly_swartz).

PRO TIP

Names must be unique on the host. If you run the same --name app twice, the second errors with "name already in use." For services you restart often, pair --rm --name app so the old one is cleaned up on exit. For long-running services, accept that restarting requires docker rm app && docker run ....
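
The restart-often case can be scripted as a small wrapper; a sketch using a placeholder name app:

```shell
# Remove any previous instance (ignore the error if none exists), then start fresh
docker rm -f app 2>/dev/null || true
docker run -d --name app nginx
```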


I/O Flags

-i (interactive) and -t (TTY)

docker run -it alpine sh
# / #   ← interactive shell prompt

Two separate flags often combined as -it:

  • -i keeps stdin open; without it, piping or typing does nothing.
  • -t allocates a pseudo-terminal so programs think they are on a terminal (colors, line editing, prompt redraws).

Use:

  • -it for interactive shells and REPLs.
  • -i only for piping stdin: docker run -i alpine cat < local.txt.
  • Nothing (default) for daemon-style services.
  • -t only (no -i) is rare — mostly when you want colored output but no input.
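
A quick demonstration of -i without -t, piping stdin through a container (assumes the alpine image is available):

```shell
# stdin flows into the container; no TTY allocated
echo "hello" | docker run -i --rm alpine tr 'a-z' 'A-Z'
# HELLO
```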

Attach vs detach at runtime

# Attach to a running container's stdout/stderr
docker attach myapp

# Detach without killing: Ctrl-P Ctrl-Q (the "detach keys")

# Execute a new command in an already-running container
docker exec -it myapp sh

docker exec spawns a new process in the container's namespaces — it does not connect to the container's main process. This is the everyday way to "ssh into a container" (though it is not SSH: no daemon, no auth, just namespace entry, conceptually what nsenter does).
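
Roughly the same effect can be achieved by hand, sketched for a running container named myapp (requires root on the host):

```shell
# Host PID of the container's main process
PID=$(docker inspect --format '{{.State.Pid}}' myapp)

# Enter its mount, UTS, network, and PID namespaces and start a shell
sudo nsenter --target "$PID" --mount --uts --net --pid sh
```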


Filesystem Flags

-v / --mount: volumes and bind mounts

The most commonly-used and most confused flag set. Full details in Lesson 3.3. Preview:

# Bind mount: host path → container path
docker run -v /host/path:/container/path nginx

# Named volume: Docker-managed storage
docker run -v mydata:/var/lib/mysql mysql

# Read-only
docker run -v /etc/hosts:/etc/hosts:ro alpine cat /etc/hosts

# The more-verbose --mount (recommended — fewer surprises)
docker run --mount type=bind,source=/host/path,target=/container/path,readonly nginx
docker run --mount type=volume,source=mydata,target=/var/lib/mysql mysql
docker run --mount type=tmpfs,target=/app/cache,tmpfs-size=64m nginx

-v has legacy behavior (auto-creates host paths, accepts bare paths as bind mounts or volumes ambiguously). --mount is explicit and harder to get wrong.

--tmpfs: scratch space in RAM

docker run --tmpfs /tmp:size=100m alpine sh
# /tmp is a 100 MB tmpfs inside the container

Useful when the container writes a lot to a temp path and you want those writes to go to RAM (fast, not counted against the container's writable layer) and evaporate on exit.
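
You can verify the mount from inside the container:

```shell
docker run --rm --tmpfs /scratch:size=64m alpine df -h /scratch
# the output shows a 64M tmpfs mounted at /scratch
```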

-w: working directory

docker run -w /app -v $(pwd):/app node:20 npm test

Equivalent to cd /app before running the command. Often used with -v for dev loops where you mount your source into the container and run commands at the source path.


Networking Flags

Full coverage in Lesson 3.2. Preview:

-p (publish)

# Map host port 8080 to container port 80
docker run -p 8080:80 nginx

# Bind to a specific host interface only
docker run -p 127.0.0.1:8080:80 nginx

# Publish all EXPOSE'd ports to random host ports
docker run -P nginx  # uppercase P

# UDP
docker run -p 53:53/udp my-dns

Syntax: [host-ip:]host-port:container-port[/protocol]. Without -p, the container's ports are reachable from other containers on the same Docker network but not from the host or outside world.

WARNING

-p modifies the host's iptables rules to NAT traffic to the container. This bypasses most host firewall frontends (UFW, firewalld) because Docker's rules are evaluated before theirs, in chains those tools do not manage. If you ufw deny 8080 and then docker run -p 8080:80, 8080 is open. Use -p 127.0.0.1:8080:80 to bind only to localhost, or add your own rules to the DOCKER-USER iptables chain, which Docker evaluates before its own forwarding rules.
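
One concrete mitigation, sketched from Docker's documented packet-filtering model (assumes iptables and a published port 8080; 10.0.0.0/8 is a placeholder trusted subnet):

```shell
# Drop traffic to published port 8080 unless it comes from the trusted subnet.
# DOCKER-USER runs before Docker's own forwarding rules; by that point DNAT has
# already rewritten the destination, so match the original port via conntrack.
iptables -I DOCKER-USER -p tcp -m conntrack \
    --ctorigdstport 8080 --ctdir ORIGINAL \
    ! -s 10.0.0.0/8 -j DROP
```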

--network

docker run --network=bridge nginx        # default; docker0 bridge
docker run --network=host nginx          # no isolation; uses host's network namespace
docker run --network=none nginx          # no network at all
docker run --network=my-custom-net nginx # user-defined bridge (see Lesson 3.2)

--hostname, --add-host

docker run --hostname myapp alpine hostname
# myapp

docker run --add-host db.local:10.0.0.5 alpine cat /etc/hosts
# 10.0.0.5  db.local  ← injected

Config Flags

-e (environment variables)

docker run -e NODE_ENV=production -e PORT=8080 myapp
docker run --env-file prod.env myapp

-e FOO=bar sets one. --env-file path reads from a file (one KEY=VALUE per line, no export, # for comments).
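
A sketch of the format --env-file expects, using a hypothetical prod.env (note: no quoting, no export):

```shell
# One KEY=VALUE per line; lines starting with # are comments
cat > prod.env <<'EOF'
# deployment settings
NODE_ENV=production
PORT=8080
EOF

# then: docker run --env-file prod.env myapp
```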

Secrets should not go in -e. Anyone with docker inspect access on the host can read them. Use Docker's secret mechanism (for Swarm) or the orchestrator's secret system (Kubernetes Secrets, with caveats). We cover this in Module 5.

--label

docker run --label app=myapp --label env=prod --label version=1.2.3 nginx

Arbitrary key-value metadata attached to the container. Useful for filtering (docker ps --filter label=env=prod) and orchestrator annotations.


Resource Limits (cgroup-backed)

All of these translate directly to cgroup writes (Linux course Module 5 Lesson 2).

# Memory
docker run --memory=512m nginx            # hard limit; OOM-kill if exceeded
docker run --memory=512m --memory-swap=512m nginx  # disable swap

# CPU
docker run --cpus=1.5 nginx               # 1.5 CPUs of time per wall-clock second
docker run --cpu-shares=512 nginx         # relative weight (default 1024)
docker run --cpuset-cpus="0,1" nginx      # pin to specific cores

# PIDs
docker run --pids-limit=256 nginx         # max 256 processes/threads in container

# I/O (less commonly used)
docker run --blkio-weight=300 nginx
docker run --device-read-bps /dev/sda:50mb nginx

Verify limits took effect:

docker run -d --name bounded --memory=512m --cpus=1 nginx
docker inspect bounded --format='{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'
# 536870912 1000000000
#     ^512MiB    ^1 CPU (1e9 nanoseconds of CPU per second)

cat /sys/fs/cgroup/system.slice/docker-*.scope/memory.max
# 536870912

docker rm -f bounded

Identity and Security Flags

# Run as a specific user (UID or name from image)
docker run -u 1000:1000 alpine id
# uid=1000 gid=1000

docker run -u node node:20 id
# uid=1000(node) gid=1000(node)

# User namespaces — map container UID 0 to an unprivileged host UID
docker run --userns=host alpine id         # opt out of daemon-wide userns remapping
# (--userns=keep-id is a Podman flag, not Docker: podman run --userns=keep-id ...)

# Read-only root filesystem
docker run --read-only --tmpfs /tmp alpine touch /file
# touch: /file: Read-only file system

# Drop and re-add Linux capabilities
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx  # only what you need (the stock nginx image also needs CHOWN/SETUID/SETGID to drop privileges)

# Disable seccomp (dangerous!)
docker run --security-opt seccomp=unconfined alpine mount ...

# The nuclear option — effectively host-root
docker run --privileged ...  # DON'T use this for app workloads

All covered in more depth in Module 5.


Overriding the Image's Default Command

# Replace the image's CMD with your own arguments
docker run nginx nginx -g 'daemon off;'
# ↑ everything after the image name becomes the new CMD,
#   which is handed to the image's ENTRYPOINT

# Override the entrypoint entirely
docker run --entrypoint sh nginx -c 'ls /etc/nginx'
# ↑ run sh instead of the image's entrypoint, with -c 'ls /etc/nginx' as its args

This is the "escape hatch" for debugging. When a container's ENTRYPOINT is crashing and you need to get a shell to investigate, --entrypoint sh (or --entrypoint bash for Debian-based images) is the canonical trick.
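
A typical debugging sequence, sketched with a placeholder image name (myorg/broken:v2):

```shell
# First: what does the image actually declare?
docker inspect myorg/broken:v2 \
    --format 'ENTRYPOINT={{json .Config.Entrypoint}} CMD={{json .Config.Cmd}}'

# Then: get a shell in place of the crashing entrypoint and investigate
docker run --rm -it --entrypoint sh myorg/broken:v2
```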


Putting It Together: Realistic Examples

Local dev: hot-reload Node.js app

docker run --rm -it \
    --name myapp \
    -v $(pwd):/app \
    -w /app \
    -p 3000:3000 \
    -e NODE_ENV=development \
    node:20-alpine \
    npm run dev

Bind-mounts your source, exposes the dev server port, sets env, runs the dev command. --rm cleans up; -it so you can see logs and Ctrl-C.

Long-running service with limits

docker run -d \
    --name api \
    --restart=unless-stopped \
    --memory=1g \
    --cpus=2 \
    --pids-limit=512 \
    --read-only \
    --tmpfs /tmp \
    --cap-drop=ALL \
    --cap-add=NET_BIND_SERVICE \
    -e DATABASE_URL=postgres://... \
    -p 127.0.0.1:8080:8080 \
    myorg/api:v1.2.3

Detached, will restart, resource-capped, read-only root, minimal caps, localhost-only port. A solid single-host production pattern.

One-shot batch job

docker run --rm \
    -v $(pwd)/data:/data:ro \
    -v $(pwd)/output:/output \
    myorg/batch-runner:v1 \
    --input /data/input.csv --output /output/report.csv

No detach, no restart, auto-remove. Inputs mounted read-only, outputs mounted writable (the default). Perfect for CI jobs.


Debugging What docker run Actually Did

docker run -d --name demo -p 8080:80 nginx

# See the full config that was applied
docker inspect demo

# Just the networking bits
docker inspect demo --format='{{json .NetworkSettings}}' | jq

# Just the cgroup / resource config
docker inspect demo --format='{{json .HostConfig}}' | jq '.Memory, .NanoCpus, .PidsLimit'

# Log output
docker logs demo
docker logs -f demo            # follow

# What's happening inside
docker top demo                # like ps inside the container
docker stats demo              # live resource usage
docker exec -it demo sh        # interactive shell

docker rm -f demo
PRO TIP

docker inspect is the authoritative source of "what is this container configured as?" Every flag you passed ends up as a field in its output. When a container misbehaves and you want to rule out a flag interpretation issue, docker inspect tells you what Docker actually applied. Run docker events in another terminal to see every lifecycle event in real time.


Key Concepts Summary

  • docker run is a convenience macro for create + start + attach. You can do each step separately.
  • -d detaches; -it is interactive. Use -d for services, -it for shells, neither for one-shot foreground commands.
  • --rm cleans up on exit. Use for one-shots and tests to avoid accumulating stopped containers.
  • --restart=unless-stopped is the right choice for services you want to survive reboots (outside of an orchestrator).
  • -v / --mount for storage; -p for ports; -e for env. The Big Three for running most things.
  • Resource flags map to cgroups. --memory, --cpus, --pids-limit translate to cgroup files.
  • -u, --cap-drop, --read-only, --security-opt are the security baseline you should apply by default.
  • --entrypoint is the debug override. When an image's entrypoint crashes, --entrypoint sh gets you a shell.
  • docker inspect is the ground truth for what the container actually has applied.

Common Mistakes

  • Running docker run image in the foreground and then closing the terminal, killing the container. Use -d for services.
  • Using -p 8080:80 and being surprised it bypasses UFW. Docker edits iptables NAT rules; host firewalls in the filter table are bypassed.
  • Running docker run -d --rm crashing-image. When it crashes, the logs are gone before you can look. For debugging, drop --rm so you can docker logs.
  • Not pairing --memory with a runtime that knows about cgroups. Old JVMs (before JDK 8u191 and JDK 10) ignored cgroup memory limits and would get OOM-killed; modern runtimes respect cgroups.
  • Wrapping commands in an extra shell (docker run image sh -c "command arg") when a straight docker run image command arg would do. The wrapper makes sh PID 1, so signals like SIGTERM do not reach the real process unless the shell forwards them.
  • Forgetting that Docker's embedded DNS only serves user-defined networks. On the default bridge, containers cannot resolve each other by name.
  • Setting -v /host/path:/container/path with a path that does not exist on the host. Docker silently creates the host directory as root. You get a permission mismatch on first write.
  • Assuming --restart works across host reboots without Docker configured to start at boot. Enable the Docker service (systemctl enable docker) or containers will not come back after a host reboot.
  • Using --network=host in production "for performance." It disables network isolation entirely — every port your container listens on is a port on the host, including ones you did not mean to expose.
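
The default-bridge DNS limitation above is easy to demonstrate (a sketch using the alpine image):

```shell
# On a user-defined network, name resolution works
docker network create mynet
docker run -d --network mynet --name db alpine sleep 300
docker run --rm --network mynet alpine ping -c 1 db    # resolves and pings

# On the default bridge, the same lookup fails
docker run -d --name db2 alpine sleep 300
docker run --rm alpine ping -c 1 db2                   # ping: bad address 'db2'

# Cleanup
docker rm -f db db2 && docker network rm mynet
```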

KNOWLEDGE CHECK

You run `docker run -p 8080:80 nginx` on an Ubuntu server with UFW configured to deny port 8080. From another machine, you can successfully curl `http://server:8080/`. Why, and how do you actually block public access?