
Docker for Web Development: A Practical Guide to Containers That Actually Helps

Why Docker Matters for Web Development

If you have ever spent half a day setting up a development environment, debugging why a project works on a colleague's laptop but not yours, or heard the words "but it works on my machine," Docker is the answer to a problem you already know well.

Docker packages an application and all its dependencies (runtime, libraries, system tools, configuration files) into a standardized unit called a container. That container runs identically on any machine that has Docker installed, whether it is a developer's MacBook, a Linux server in a data center, or a CI/CD pipeline runner. The environment is defined in code, versioned alongside your application, and reproducible from scratch in seconds.

This is not just about convenience. Environment inconsistency is a genuine source of bugs, deployment failures, and wasted time. When your development environment does not match production, bugs slip through testing and appear only after deployment. Docker eliminates this entire category of problems.

Containers vs Virtual Machines

If you have used virtual machines (VirtualBox, VMware, Parallels), containers might seem similar. Both provide isolated environments. But the architecture is fundamentally different, and the differences matter for day-to-day development.

Virtual Machines

A VM runs a complete operating system with its own kernel, on top of a hypervisor. Each VM includes a full OS installation (often several gigabytes), boots in minutes, and consumes significant CPU and RAM just to maintain the OS overhead. Running three VMs on a developer laptop with 16 GB of RAM is already pushing limits.

Containers

A container shares the host machine's operating system kernel. It only packages the application and its specific dependencies, not an entire OS. A typical container image for a Node.js application is 100-300 MB (compared to 2-4 GB for a VM). Containers start in seconds (not minutes). You can run dozens of containers on the same laptop that struggles with three VMs.

| Aspect | Virtual Machine | Container |
| --- | --- | --- |
| Isolation level | Full OS isolation | Process-level isolation |
| Startup time | Minutes | Seconds |
| Image size | Gigabytes | Megabytes |
| Resource overhead | High (full OS per VM) | Low (shared kernel) |
| Density | Few per host | Dozens to hundreds per host |
| Use case | Running different OS types | Application packaging and isolation |

For web development, containers are the right tool. You do not need the full isolation of a VM. You need reproducible environments that start fast and do not eat your laptop's resources.

Core Docker Concepts

Images

A Docker image is a read-only template that defines what the container will contain. Think of it as a snapshot of a configured environment. Images are built in layers: you start from a base image (like node:18-alpine or python:3.11-slim), add your application code, install dependencies, and configure settings. Each step creates a layer, and layers are cached for fast rebuilds.

Containers

A container is a running instance of an image. You can run multiple containers from the same image, each with its own writable layer on top. When a container is stopped and removed, its writable layer is lost (unless you use volumes to persist data). This ephemeral nature is a feature, not a bug: it ensures that containers always start from a known, clean state.
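The relationship between an image and its containers, and the ephemeral writable layer, can be seen in a few commands (this sketch assumes the public nginx:alpine image):

```shell
# Two independent containers from the same image
docker run -d --name web1 nginx:alpine
docker run -d --name web2 nginx:alpine

# Write a file into web1's writable layer; web2 is unaffected
docker exec web1 sh -c 'echo hello > /tmp/state.txt'

# Removing web1 discards its writable layer; a new web1 starts clean,
# and /tmp/state.txt no longer exists
docker rm -f web1
docker run -d --name web1 nginx:alpine
```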

Volumes

Volumes are Docker's mechanism for persistent storage. They live outside the container filesystem, so data survives container restarts and rebuilds. In development, you use volumes to mount your local source code into the container, so changes you make in your editor appear immediately in the running container without rebuilding.
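Both patterns look like this in practice. A minimal sketch, assuming a Node.js project in the current directory:

```shell
# Named volume: managed by Docker, survives container removal
docker volume create appdata
docker run --rm -v appdata:/data alpine sh -c 'echo persisted > /data/file.txt'
docker run --rm -v appdata:/data alpine cat /data/file.txt

# Bind mount: map local source code into the container so edits
# in your editor are visible immediately inside it
docker run --rm -v "$(pwd)":/app -w /app node:18-alpine node server.js
```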

Networks

Docker containers can communicate with each other through virtual networks. When you use docker-compose (covered below), all services defined in the same compose file are automatically placed on a shared network and can reach each other by service name. Your Node.js application can connect to postgres://db:5432 where db is the name of the PostgreSQL service in your compose file.

The Dockerfile: Defining Your Environment

A Dockerfile is a text file that contains instructions for building a Docker image. Here is a practical example for a Node.js application:

FROM node:18-alpine AS base
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

Let us break this down line by line:

  • FROM node:18-alpine: Start from the official Node.js 18 image based on Alpine Linux (a minimal Linux distribution, keeping the image small).
  • WORKDIR /app: Set the working directory inside the container.
  • COPY package*.json ./: Copy only the package files first. This is a deliberate optimization: Docker caches layers, so if your package.json has not changed, the npm ci step is skipped on subsequent builds.
  • RUN npm ci --only=production: Install dependencies. npm ci is preferred over npm install for reproducible builds because it installs exactly what the lockfile specifies. (On npm 8 and later, --omit=dev is the current spelling of this flag; --only=production still works but is deprecated.)
  • COPY . .: Copy the rest of your application code.
  • EXPOSE 3000: Document which port the application listens on.
  • CMD ["node", "server.js"]: Define the command to run when the container starts.

Multi-Stage Builds

For production images, multi-stage builds let you use one image for building (with dev dependencies, compilers, etc.) and a separate, minimal image for running:

FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:18-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]

The final image only contains the compiled output and production dependencies, not the source code, TypeScript compiler, or development tools. This keeps production images small and reduces the attack surface.

Docker Compose: Multi-Service Development Environments

Most web applications need more than just a runtime. You need a database, maybe a cache, maybe a message queue. Docker Compose lets you define and run multi-container environments with a single YAML file.

Node.js + PostgreSQL + Redis

version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      DATABASE_URL: postgres://user:password@db:5432/myapp
      REDIS_URL: redis://cache:6379
    depends_on:
      - db
      - cache
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"
volumes:
  pgdata:

With this file saved as docker-compose.yml, running docker compose up starts your application, a PostgreSQL database, and a Redis cache. All three can communicate with each other by service name. The PostgreSQL data is persisted in a named volume (pgdata), so it survives container restarts.

The volume mount .:/app maps your local directory into the container, so your code changes are reflected immediately (with hot reloading if your app supports it). The /app/node_modules anonymous volume prevents the container's node_modules from being overwritten by the (potentially empty) local node_modules directory.
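One caveat worth knowing: depends_on in the short form above only waits for the database container to start, not for PostgreSQL to actually accept connections, so the app can race ahead of the database on first boot. A healthcheck combined with the long-form depends_on closes that gap. A sketch, using the same service names as the compose file above:

```yaml
services:
  db:
    image: postgres:15-alpine
    healthcheck:
      # pg_isready exits 0 once PostgreSQL accepts connections
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy  # wait for the healthcheck, not just container start
```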

PHP + MySQL + Nginx

version: '3.8'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
  php:
    build:
      context: .
      dockerfile: Dockerfile.php
    volumes:
      - ./src:/var/www/html
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: myapp
      MYSQL_USER: user
      MYSQL_PASSWORD: password
    volumes:
      - mysqldata:/var/lib/mysql
    ports:
      - "3306:3306"
volumes:
  mysqldata:

This setup runs Nginx as a reverse proxy in front of PHP-FPM, with MySQL as the database. It is a common configuration for WordPress, Laravel, or custom PHP applications.

Python (Django/Flask) + PostgreSQL

version: '3.8'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgres://user:password@db:5432/myapp
    depends_on:
      - db
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:

The "Works on My Machine" Problem Solved

Before Docker, onboarding a new developer on a project went something like this:

  1. Clone the repository
  2. Install the correct version of Node.js (or PHP, or Python). Wait, your system has a different version. Use nvm or pyenv to manage multiple versions.
  3. Install the database. Configure it. Create the database and user. Run migrations.
  4. Install Redis. Or Elasticsearch. Or whatever else the project needs.
  5. Configure environment variables. Half of them are documented, the other half are tribal knowledge.
  6. Run the application and hope it works. If not, debug for several hours.

With Docker, the process becomes:

  1. Clone the repository
  2. Run docker compose up
  3. The application is running with all dependencies configured

That is not an exaggeration. When the development environment is fully defined in a Dockerfile and docker-compose.yml, every developer works with the same stack, same versions, same configuration. No more "I have PostgreSQL 14 and you have 15, and the behavior is different." No more "my system Python is 3.9 but the project needs 3.11."

For teams in Lugano and across Switzerland, where development teams are often small and every developer's time is valuable, the hours saved on environment setup and debugging environment-specific issues add up fast.

Docker for CI/CD Pipelines

Docker does not just improve local development. It transforms CI/CD (Continuous Integration / Continuous Deployment) by providing consistent environments across the entire pipeline.

Consistent Testing

Your CI server (GitHub Actions, GitLab CI, Jenkins) runs tests inside the same Docker containers used in development. If tests pass locally, they pass in CI, because the environment is identical. No more "tests pass on my machine but fail in CI" mysteries.
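As a sketch, a GitHub Actions workflow that builds the image and runs the test suite inside it might look like this (the image tag and test command are placeholders for your project's own):

```yaml
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:ci .
      - name: Run tests inside the container
        run: docker run --rm myapp:ci npm test
```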

Reproducible Builds

Building a Docker image from a Dockerfile produces the same result regardless of where it is built. Your CI pipeline builds the production image, runs tests against it, and if everything passes, pushes it to a container registry. The exact same image that was tested is deployed to production.
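That build-test-push flow reduces to a few commands. The registry, image name, and GIT_SHA variable below are placeholders:

```shell
# Build and tag with the commit SHA so every image is traceable to a commit
docker build -t registry.example.com/myapp:${GIT_SHA} .

# Run the test suite against the exact image that will ship
docker run --rm registry.example.com/myapp:${GIT_SHA} npm test

# Push only after tests pass; production pulls this same tag
docker push registry.example.com/myapp:${GIT_SHA}
```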

Simplified Deployment

Deploying a Docker container to production is straightforward: pull the image from the registry and start the container. No more installing dependencies on production servers, no more "the production server has a different version of libssl" problems. Platforms like Railway, Render, Fly.io, and AWS ECS make container deployment even simpler.

Docker Security Basics

Containers provide process-level isolation, but they are not a security silver bullet. Here are the security fundamentals every developer should follow:

Do Not Run as Root

By default, processes inside Docker containers run as root. If an attacker breaks out of the application, they have root access inside the container. Add a non-root user to your Dockerfile:

RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

Use Minimal Base Images

Use Alpine-based images (node:18-alpine, python:3.11-alpine) instead of full Debian/Ubuntu images. Fewer packages mean fewer potential vulnerabilities. A node:18 image contains hundreds of system packages that your application does not need and that could contain security flaws.

Do Not Store Secrets in Images

Never put passwords, API keys, or certificates in a Dockerfile or bake them into an image. Use environment variables, Docker secrets (in Swarm mode), or external secret management tools (HashiCorp Vault, AWS Secrets Manager).
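Two common runtime-injection patterns, sketched with illustrative names (API_KEY and .env.local are examples, not fixed conventions):

```shell
# 1. Pass the secret as an environment variable when starting the container
docker run --rm -e API_KEY="$API_KEY" myapp:latest

# 2. Keep secrets in a git-ignored file and reference it from compose
#    (in docker-compose.yml: services -> app -> env_file: .env.local)
echo "API_KEY=supersecret" > .env.local
docker compose up -d
```

Either way, the secret lives only in the running container's environment, never in an image layer that could be pushed to a registry.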

Scan Images for Vulnerabilities

Tools like Trivy, Docker Scout, and Snyk can scan your Docker images for known vulnerabilities in base images and installed packages. Integrate these scans into your CI pipeline so you catch issues before deployment.
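As a sketch, scanning a locally built image from the command line (the image tag is a placeholder):

```shell
# Trivy: fail the pipeline (non-zero exit) on serious findings
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:latest

# Docker Scout, bundled with recent Docker Desktop versions
docker scout cves myapp:latest
```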

Keep Images Updated

Base images receive security updates regularly. Rebuild your images periodically to pick up these updates. Use specific version tags (not :latest) so you control when updates are applied, but do not let your images go months without rebuilding.
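Pinning can go one step beyond a version tag: an image digest identifies one exact build. A sketch of the spectrum (the digest is a placeholder, not a real value):

```dockerfile
# Vague: moves without warning whenever the image is republished
FROM node:latest

# Better: you choose when to move to a new minor/patch release
FROM node:18.19-alpine

# Strictest: immutable reference; update the digest deliberately on rebuild
FROM node:18-alpine@sha256:<digest>
```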

For more on web application security, see our OWASP Top 10 guide and our website security checklist for SMEs.

When Docker Is Overkill

Docker is not always the right answer. Here are situations where the overhead may not be justified:

  • Simple static sites: If you are building a static HTML/CSS/JS site or using a static site generator with no backend dependencies, Docker adds complexity without proportional benefit. Just run the dev server locally.
  • Solo projects with simple stacks: If you are the only developer, using a single runtime (Node.js or Python) with no database or external services, the environment consistency benefits are minimal.
  • Teams unfamiliar with containers: If your entire team has never used Docker, introducing it during a tight deadline will slow you down. Learn Docker on a side project or during a lower-pressure period.
  • Resource-constrained machines: Docker Desktop on macOS and Windows runs a Linux VM under the hood, which consumes RAM and CPU. On a machine with 8 GB of RAM, running Docker alongside an IDE and a browser can be tight.

Performance Considerations

Docker introduces some performance overhead that developers should be aware of:

File System Performance on macOS

Docker on macOS uses a Linux VM, and file system operations between the macOS host and the Linux container are slower than native file access. This is noticeable with large node_modules directories or frameworks that watch thousands of files for changes. Docker Desktop has improved this significantly with VirtioFS, but it is still measurably slower than native development.

Mitigation strategies:

  • Use the :cached or :delegated volume mount flags (pre-VirtioFS)
  • Keep node_modules inside the container (using an anonymous volume)
  • Use VirtioFS in Docker Desktop settings (default in newer versions)

Build Times

Docker builds can be slow if not optimized. Key optimizations:

  • Order Dockerfile instructions from least to most frequently changing. Put package installation before source code copy, so dependency installation is cached when only code changes.
  • Use .dockerignore to exclude unnecessary files (node_modules, .git, test data) from the build context.
  • Use multi-stage builds to keep final images small.
  • Leverage BuildKit (the modern build engine) for parallel stage execution and better caching.
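A typical .dockerignore for a Node.js project, matching the optimizations above (entries are illustrative; tailor the list to your project):

```
# .dockerignore
node_modules
.git
dist
coverage
*.log
.env*
```

Excluding node_modules and .git alone often shrinks the build context from hundreds of megabytes to a few, which speeds up every build.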

Essential Docker Commands

A quick reference for the commands you will use daily:

| Command | What It Does |
| --- | --- |
| docker compose up | Start all services defined in docker-compose.yml |
| docker compose up -d | Start in detached mode (background) |
| docker compose down | Stop and remove all containers |
| docker compose logs -f | Follow log output from all services |
| docker compose exec app sh | Open a shell inside the running app container |
| docker build -t myapp . | Build an image from the current directory |
| docker ps | List running containers |
| docker system prune | Remove unused images, containers, and networks |

Getting Started Today

If you have never used Docker, here is a concrete path to get started:

  1. Install Docker Desktop (macOS, Windows) or Docker Engine (Linux).
  2. Pick a project you are actively working on.
  3. Write a Dockerfile for your application, starting from the official base image for your runtime.
  4. Create a docker-compose.yml if your project uses a database or other services.
  5. Run docker compose up and verify that the application works.
  6. Commit the Dockerfile and docker-compose.yml to your repository so the entire team benefits.

The initial setup takes an hour or two. The time savings start from the second developer who joins the project and do not stop. If you need help setting up Docker for your web development workflow, or if you want to containerize an existing application for consistent deployments, our team in Lugano can help.

Want to know if your site is secure?

Request a free security audit. In 48 hours you get a complete report.
