What Is a Container and How Does Docker Work?


Difficulty: Medium · Hot · Major: Software Engineering · Tags: docker, google

Concept

A container is a lightweight, portable unit that packages an application together with its dependencies, libraries, and configuration files, ensuring it runs consistently across different environments.

Docker is the most popular containerization platform, providing tools to build, ship, and run containers using standardized images and an efficient runtime engine.

In essence, Docker containers provide OS-level virtualization without full virtual machines, making them faster, smaller, and more efficient for modern DevOps and cloud-native workflows.


1. Containers vs Virtual Machines

| Aspect | Containers | Virtual Machines |
| --- | --- | --- |
| Abstraction Level | OS-level | Hardware-level |
| Startup Time | Seconds | Minutes |
| Resource Overhead | Low (shared kernel) | High (separate OS per VM) |
| Isolation | Process-based | Full OS-based |
| Image Size | Lightweight (MBs) | Heavy (GBs) |
| Portability | High | Moderate |

Example:

Host OS
 ├── Docker Engine
 │   ├── Container A (Node.js)
 │   └── Container B (Python)

Each container shares the same OS kernel but runs independently with its own filesystem and dependencies.
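
The startup-time difference in the table above is easy to observe from the command line. A minimal sketch, assuming Docker is installed and the public `alpine` image is used:

```shell
# Requires a running Docker daemon; image choice is illustrative.
docker pull alpine                 # fetch a tiny base image once
time docker run --rm alpine echo "hello from a container"
# With the image cached, the container typically starts in well under a second,
# because no guest OS has to boot -- only a new process on the shared kernel.
```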


2. How Docker Works

Docker uses Linux kernel features to isolate and manage containers efficiently.

Key Components:

  1. Docker Images

    • Immutable templates that define what’s inside a container (e.g., OS, libraries, app code).
    • Built using a Dockerfile.

    Example:

    FROM node:18
    WORKDIR /app
    # Copy manifests first so the install layer is cached across rebuilds
    COPY package*.json ./
    RUN npm install
    COPY . .
    CMD ["npm", "start"]
    
  2. Docker Containers

    • Running instances of images.
    • Lightweight, ephemeral environments that can be started or destroyed quickly.
  3. Docker Daemon (dockerd)

    • Background service managing images, containers, networks, and volumes.
  4. Docker CLI / API

    • Command-line interface to interact with Docker.
    • Example: docker run, docker build, docker ps.
  5. Docker Hub / Registry

    • Central repository for storing and sharing Docker images.

Core Mechanisms:

  • Namespaces: Provide process isolation (PID, network, mount, user).
  • Control Groups (cgroups): Limit resource usage (CPU, memory, I/O).
  • Union File Systems (OverlayFS): Enable layered image construction and sharing.
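
Namespaces and cgroups surface directly as `docker run` flags. A hedged sketch (requires Docker; the limit values are arbitrary examples):

```shell
# cgroups in action: cap this container at 256 MB of RAM and half a CPU core
docker run --rm --memory=256m --cpus=0.5 alpine echo "resource-limited"

# namespaces in action: inside its own PID namespace, the container
# sees only its own processes, with the entrypoint running as PID 1
docker run --rm alpine ps
```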

Workflow:

1. docker build → creates image
2. docker run → starts container
3. docker push/pull → share via registry
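
As a concrete end-to-end transcript of the three steps above (image name `myuser/myapp` is hypothetical; pushing assumes you are logged in to a registry):

```shell
docker build -t myuser/myapp:1.0 .            # 1. build an image from the local Dockerfile
docker run -d -p 8080:8080 myuser/myapp:1.0   # 2. start a container from that image
docker push myuser/myapp:1.0                  # 3. publish the image to a registry...
docker pull myuser/myapp:1.0                  #    ...so any other host can pull and run it
```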

3. Advantages of Using Docker

1. Consistency Across Environments

  • Containers encapsulate dependencies, avoiding “works on my machine” issues.
  • Identical behavior in dev, test, and production.

2. Portability

  • Runs anywhere — local machine, cloud VM, Kubernetes cluster, or CI/CD runner.

3. Efficiency

  • Multiple containers share the same OS kernel, using fewer resources than VMs.
  • High density of deployment on the same host.

4. Scalability and Speed

  • Containers start in seconds, enabling auto-scaling and rolling updates.
  • Perfect for microservices architectures.

5. Integration with DevOps

  • Works seamlessly with CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI).
  • Simplifies building, testing, and deployment automation.
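
To make the CI/CD point concrete, a minimal sketch of a GitHub Actions job that builds and pushes an image on every push. The repository tag and secret names are placeholders, not values from this article:

```yaml
# Assumes DOCKERHUB_USERNAME / DOCKERHUB_TOKEN secrets are configured
name: docker-build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: myuser/myapp:latest   # hypothetical image name
```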

6. Reproducibility

  • Dockerfiles ensure deterministic builds and versioned environments.

4. Real-World Example

Scenario: A developer deploys a Node.js API to production.

Without Docker:

  • Different Node versions on local and server environments.
  • Dependency mismatches cause runtime errors.

With Docker:

  1. Developer defines a Dockerfile with exact dependencies.
  2. Builds image and pushes it to Docker Hub.
  3. Production pulls and runs the same container image.

Result: the same image runs identically across every stage, from the developer's laptop to production.


5. Docker in the Cloud-Native Ecosystem

Docker serves as the foundation of cloud-native infrastructure and integrates tightly with orchestration systems.

  • Kubernetes: Automatically schedules and scales containers built from Docker images.
  • AWS ECS / Fargate: Runs containers serverlessly in the cloud.
  • Docker Compose: Manages multi-container applications (e.g., app + DB + cache).

Example:

version: '3'
services:
  app:
    build: .
    ports:
      - "8080:8080"
  redis:
    image: redis:7

This setup launches both the app and Redis service with one command.


6. Container Lifecycle

| Stage | Command | Description |
| --- | --- | --- |
| Build | `docker build -t app .` | Create image from Dockerfile |
| Run | `docker run -p 8080:8080 app` | Start container from image |
| Stop | `docker stop <container_id>` | Gracefully stop container |
| Remove | `docker rm <container_id>` | Delete container |
| Push/Pull | `docker push` / `docker pull` | Share via registry |

Containers are stateless by default, but persistent data can be managed using Docker Volumes or bind mounts.
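
A short sketch of a named volume outliving its container (requires Docker; the volume and container names are illustrative):

```shell
docker volume create appdata                 # create a named volume
docker run -d --name worker \
  -v appdata:/var/lib/data alpine sleep 3600 # mount it into a container
docker rm -f worker                          # the container is gone...
docker volume ls                             # ...but "appdata" and its files remain,
                                             # ready to be mounted by the next container
```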


7. Security and Isolation

  • Isolation: Namespaces ensure containers don’t access each other’s processes.
  • Resource Control: cgroups limit CPU/memory usage per container.
  • Image Verification: Docker Hub supports signed images (Docker Content Trust).
  • Vulnerability and Runtime Security: Tools like Trivy and Aqua scan images for known vulnerabilities, while Falco monitors container behavior at runtime.

However, containers share the host kernel, meaning they’re not as isolated as full VMs. Best practice: use minimal base images and regular vulnerability scanning.
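
Those best practices translate into a few Dockerfile lines. A hardened sketch, assuming a Node.js app (the official `node` images ship with a built-in non-root `node` user):

```dockerfile
# Minimal base image shrinks the attack surface and the scan report
FROM node:18-alpine
WORKDIR /app
COPY --chown=node:node . .
RUN npm install --omit=dev
# Drop root: run the app as the unprivileged "node" user
USER node
CMD ["npm", "start"]
```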


8. Common Interview Discussion Points

  • Docker vs Virtual Machine differences.
  • Role of Dockerfile and image layering.
  • How Docker achieves isolation (namespaces, cgroups).
  • Difference between Docker Compose and Kubernetes.
  • How to persist data (volumes).
  • Docker in CI/CD and microservices ecosystems.

Interview Tip

  • Start by defining containerization and Docker’s role.

  • Use clear analogies — e.g.,

    “A container is like a shipping container for code — self-contained, standardized, and portable.”

  • Mention that Docker underpins modern DevOps, CI/CD, and cloud-native infrastructure.

  • If asked about orchestration, reference Kubernetes or ECS.


Summary Insight

Containers encapsulate everything an application needs — ensuring consistency, speed, and portability. Docker industrialized containerization, enabling modern DevOps pipelines and cloud-native scalability with lightweight isolation.