Docker for Developers: Dockerfile Best Practices
Learn Dockerfile best practices including multi-stage builds, layer caching, security hardening, and common mistakes. Practical tips to build smaller, faster, more secure images.
Docker has become the standard way to package and deploy applications. But there is a wide gap between a Dockerfile that works and a Dockerfile that works well. Poorly written Dockerfiles produce bloated images, slow builds, and security vulnerabilities. This guide covers the practices that separate production-quality Docker images from “it works on my machine” containers.
Start with the Right Base Image
Your base image determines the starting size and attack surface of your container. Here is a comparison for a Node.js application:
| Base Image | Size |
|---|---|
| node:20 | ~950 MB |
| node:20-slim | ~200 MB |
| node:20-alpine | ~130 MB |
| gcr.io/distroless/nodejs20 | ~120 MB |
Alpine-based images are popular for their small size, but they use musl instead of glibc, which can cause compatibility issues with some native Node.js modules. The -slim variants use Debian with most extras removed — a good balance of size and compatibility.
Distroless images take it further by including only the application runtime. No shell, no package manager, no utilities. This dramatically reduces the attack surface.
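As a sketch, a distroless runtime stage for Node.js might look like the following. It assumes the app was already compiled into dist/ in an earlier build stage named builder (a hypothetical stage name), and it mirrors the image name from the table above. Note that the distroless Node.js images already use node as their entrypoint, so CMD lists only the script:

```dockerfile
# Sketch: distroless runtime stage, assuming dist/ was produced
# in a prior build stage named "builder" (hypothetical name).
FROM gcr.io/distroless/nodejs20
WORKDIR /app
COPY --from=builder /app/dist ./dist
# The image's entrypoint is already node, so CMD is just the script path.
CMD ["dist/index.js"]
```

Because there is no shell, you cannot `docker exec` into the running container for debugging; keep that trade-off in mind before adopting distroless.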
# Good: specific version, slim variant
FROM node:20.11-slim
# Avoid: latest tag, full image
FROM node:latest
Always pin your base image to a specific version. node:latest today might be a different version tomorrow, breaking your build.
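For even stricter reproducibility, you can pin by digest in addition to the tag. The sha256 value below is a placeholder, not a real digest:

```dockerfile
# Pinning by digest guarantees the exact same image bytes on every build.
# <digest> is a placeholder; resolve the real value with:
#   docker inspect --format '{{index .RepoDigests 0}}' node:20.11-slim
FROM node:20.11-slim@sha256:<digest>
```

The tag is ignored for resolution when a digest is present, but keeping it in the FROM line documents which version the digest corresponds to.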
Layer Caching: Order Matters
Docker builds images layer by layer, and each instruction creates a new layer. If a layer has not changed since the last build, Docker reuses it from cache. This means the order of your Dockerfile instructions dramatically affects build speed.
The rule: Put things that change infrequently at the top, and things that change often at the bottom.
# Bad: copies everything, then installs dependencies
# Any source code change invalidates the npm install cache
COPY . .
RUN npm install
# Good: copies package files first, installs, then copies source
# Source code changes don't invalidate the npm install layer
COPY package.json package-lock.json ./
RUN npm install
COPY . .
This pattern alone can cut build times from minutes to seconds. The npm install layer is cached as long as package.json and package-lock.json have not changed — which is most of the time.
Multi-Stage Builds
Multi-stage builds are the single most impactful Dockerfile technique. They let you use one image for building and a different image for running:
# Stage 1: Build
FROM node:20-slim AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package.json ./
USER node
EXPOSE 3000
CMD ["node", "dist/index.js"]
The final image contains only the production artifacts — no source code, no build tools, no dev dependencies. For a Go application, the difference is even more dramatic:
# Build stage
FROM golang:1.22 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o server .
# Production stage
FROM gcr.io/distroless/static
COPY --from=builder /app/server /
CMD ["/server"]
The Go binary is statically linked, so the production image needs nothing but the binary itself. A 1.5 GB build image produces a 15 MB production image.
Security Best Practices
Don’t Run as Root
By default, containers run as root. If an attacker exploits your application, they have root access inside the container — and potentially the ability to escape it.
# Create a non-root user
RUN addgroup --system app && adduser --system --ingroup app app
USER app
Many base images already include a non-root user. Node.js images ship with a node user, which you can switch to with USER node.
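A minimal sketch using the built-in node user (index.js is a hypothetical entry point). The --chown flag on COPY keeps file ownership consistent so the app can read its own files:

```dockerfile
FROM node:20-slim
WORKDIR /app
# Copy files already owned by the non-root user that ships with the image
COPY --chown=node:node . .
# Everything after this line (RUN, CMD, the running process) executes as node
USER node
CMD ["node", "index.js"]
```

Without --chown, files copied as root may be unreadable or unwritable once you drop privileges.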
Use .dockerignore
A .dockerignore file prevents sensitive and unnecessary files from being copied into the image:
node_modules
.git
.env
.env.*
*.md
docker-compose*.yml
.github
tests
coverage
Without this file, COPY . . copies everything — including your .env file with production secrets, your .git directory, and your node_modules (which should be installed inside the container anyway).
Scan for Vulnerabilities
Build scanning into your CI pipeline:
docker scout cves myimage:latest
# or
trivy image myimage:latest
Fix vulnerabilities by updating your base image and dependencies. Keep base images current — security patches are released regularly.
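As a sketch of CI gating with Trivy (adjust the severity threshold to your own policy), you can make the pipeline fail when serious vulnerabilities are found:

```shell
# Fail the build (non-zero exit) if HIGH or CRITICAL CVEs are present
trivy image --severity HIGH,CRITICAL --exit-code 1 myimage:latest
```

Gating on severity keeps low-noise findings from blocking every deploy while still stopping genuinely dangerous images.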
Don’t Store Secrets in Images
Never put credentials, API keys, or certificates in your Dockerfile:
# WRONG: secret is permanently in a layer
ENV DATABASE_PASSWORD=hunter2
# WRONG: secret is in a layer even if deleted later
COPY credentials.json /app/
RUN setup-db.sh && rm credentials.json
Even if you delete a file in a later layer, it exists in the previous layer and can be extracted. Use Docker secrets, environment variables at runtime, or mount secrets during build with --mount=type=secret.
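A sketch of BuildKit's secret mount (the id npm_token and the npm run setup step are hypothetical). The secret is available only while that single RUN instruction executes and never lands in a layer:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-slim
WORKDIR /app
COPY . .
# The file appears at /run/secrets/npm_token only for the duration of this RUN
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm run setup
```

At build time you supply the secret from outside the build context, for example: docker build --secret id=npm_token,src=./npm_token.txt .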
Optimizing Image Size
Combine RUN Commands
Each RUN instruction creates a layer. Combining related commands reduces layers and lets you clean up in the same layer:
# Bad: 3 layers, apt cache persists
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*
# Good: 1 layer, clean in same step
RUN apt-get update && \
apt-get install -y --no-install-recommends curl && \
rm -rf /var/lib/apt/lists/*
The --no-install-recommends flag prevents apt from pulling in suggested packages you do not need.
Use npm ci Instead of npm install
npm ci is designed for automated environments:
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
It installs exactly what is in the lock file, removes any existing node_modules first, and runs faster than npm install in clean builds. The --omit=dev flag (the modern replacement for the deprecated --only=production) skips dev dependencies.
Health Checks
Add a HEALTHCHECK instruction so Docker can monitor your container’s health:
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
CMD curl -f http://localhost:3000/health || exit 1
Docker marks the container healthy or unhealthy based on the check. Docker Swarm restarts unhealthy containers automatically, and Docker Compose can gate dependent services on health status via depends_on. Note that Kubernetes ignores the Dockerfile HEALTHCHECK and uses its own liveness and readiness probes instead.
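For slim or distroless images that do not include curl, here is a sketch of an equivalent probe using Node's built-in fetch (available in Node 18 and later, so this assumes a modern runtime):

```dockerfile
# Probe the health endpoint with Node itself instead of curl
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD ["node", "-e", "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"]
```

Using the exec form here avoids needing a shell in the image at all, which matters for distroless runtimes.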
A Complete Example
Here is a production-ready Dockerfile for a Node.js application:
FROM node:20-slim AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --omit=dev
FROM node:20-slim
RUN apt-get update && \
apt-get install -y --no-install-recommends curl && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
USER node
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
CMD curl -f http://localhost:3000/health || exit 1
CMD ["node", "dist/index.js"]
Generating Dockerfiles
Writing Dockerfiles from scratch for every new project involves remembering the right base images, the correct instruction order, and all the security and optimization details covered above. Our Dockerfile Generator scaffolds production-ready Dockerfiles for common stacks — Node.js, Python, Go, Rust, and more. It applies multi-stage builds, proper caching, non-root users, and health checks automatically, giving you a solid starting point to customize for your specific needs.
Key Takeaways
- Use specific, slim base images; never `latest`
- Order instructions for cache efficiency: dependencies before source code
- Use multi-stage builds to separate build and runtime environments
- Run as a non-root user
- Use `.dockerignore` to exclude sensitive and unnecessary files
- Never embed secrets in images
- Combine `RUN` commands and clean up in the same layer
- Add health checks for production containers
Good Dockerfiles are not just about making things work — they are about making things work safely, efficiently, and reproducibly across every environment.