Docker Demystified

The Ultimate Guide to Containerized Applications

Think of Docker as a way to package and ship applications, along with their dependencies.

Docker uses a technology called containerization, which is similar to virtualization but more lightweight. Instead of creating a full virtual machine, Docker containers share the host operating system's kernel, making them faster and more efficient.

Virtual machines (VMs) offer the ability to run different operating systems, such as Windows and Linux, simultaneously on the same physical machine. Containers, by contrast, share the underlying host operating system, meaning that containers are limited to running the same operating system as the host.

In other words, if the host operating system is Linux, you can run Linux containers on it. This is because containers utilize the host operating system's kernel for their execution. Consequently, running Windows containers on a Linux host would not be possible, as they require a Windows kernel.

Upon installing Docker Desktop, the left panel typically displays three primary sections: Containers, Images, and Volumes. These sections represent key components and functionalities within the Docker environment.

Please refer to https://docs.docker.com/engine/reference/commandline/docker/ for details regarding the docker commands used in this article.

Images

In Docker, an image is a lightweight, standalone, and executable package that contains everything needed to run a piece of software, including the code, runtime, libraries, and system tools. It serves as a blueprint or template from which Docker containers are created.

# List all docker images on local
docker image ls
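A couple of related housekeeping commands are worth knowing as well (the nginx image here is just an example; any local image works):

```shell
# Show detailed metadata for an image (layers, environment, exposed ports, ...)
docker image inspect nginx

# Remove a local image you no longer need
docker image rm nginx
```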
  • Docker Hub - A Repository
    A Docker repository is a central storage location for Docker images: a collection of tagged, versioned images that can be shared among users or systems. Repositories serve as a distribution mechanism, allowing users to easily pull and push images to and from a central location where they can be stored, managed, and accessed by others.
# Pull postgres image from docker hub
docker pull postgres
  • Layers in Docker Image
    In Docker, an image is composed of multiple layers. Each layer represents a specific change or addition to the filesystem, allowing for efficient storage and reuse of common components across different images.

    When a Docker image is built, each step in the Dockerfile results in a new layer being added to the image. For example, if the Dockerfile contains instructions to install packages, each package installation is recorded as a separate layer. Layers are lightweight and only store the differences from the previous layer, resulting in efficient disk usage.

  • Layer Caching
    Layer caching is a feature in Docker that takes advantage of the layer-based architecture. When building an image, Docker checks if a layer with the same instructions already exists in its cache. If it does, Docker reuses the cached layer instead of rebuilding it, saving time and resources.
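Both ideas are easy to see for yourself. The sketch below builds a tiny throwaway image twice (the Dockerfile and the `layer-demo` image name are just examples, not from this article): the first build creates each layer, the second reuses the cache, and `docker history` lists the resulting layers.

```shell
# Create a throwaway build context with a two-step Dockerfile
mkdir -p layer-demo && cd layer-demo
cat > Dockerfile <<'EOF'
FROM alpine
RUN echo "layer one" > /one.txt
RUN echo "layer two" > /two.txt
EOF

# First build: every step creates a new layer
docker build -t layer-demo .

# Second build: unchanged steps are served from the layer cache
docker build -t layer-demo .

# Inspect the layers of the finished image, one row per step
docker history layer-demo
```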

Containers

A container is a sandboxed process on your machine, isolated from all other processes on the host. Each container has its own isolated file system, network interfaces, and process space, ensuring that applications running in different containers do not interfere with one another.

Now that we have an idea of images and containers, let's try spinning up a container.

# Run nginx server
docker run --name my-web-server -d nginx

This command starts an Nginx server on your machine. But wait, you can't interact with it yet, right? That's because of something called port mapping, which we'll look at next.

# Stop the previous container
docker stop my-web-server

# Run nginx server with port mapping
docker run --name nginx-server -p 8080:80 -d nginx
💡
Using the same container name again might throw the error "The container name "/my-web-server" is already in use". To fix this, either use a different container name or delete the old container from the Docker Desktop GUI.

What does "-p 8080:80" do?
This maps port 80 of the container (nginx listens on port 80 by default, as shown in the image details on Docker Hub) to port 8080 of your host machine. Still confused? Open your browser and visit http://localhost:8080/.
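You can confirm the mapping from the terminal as well:

```shell
# Show the port mappings of the running container
docker port nginx-server

# Fetch the default nginx welcome page through the mapped host port
curl -s http://localhost:8080/ | grep -i "welcome to nginx"
```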

# List all docker containers
docker container ls
  • Run Multiple Containers
    Yes, you can run multiple containers as well. Two nginx instances on two different ports make an ideal example:

      docker run --name server-one -p 8080:80 -d nginx
      docker run --name server-two -p 8081:80 -d nginx
    

Volumes

In Docker, a volume is a way to persist and share data between containers and the host machine. It provides a means for storing and accessing data separately from the container's file system.

A Docker volume is a directory or a file in the host machine's file system that is mounted into a container at a specific path. This enables containers to read from and write to the volume, allowing data to be preserved even if the container is stopped or removed.
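For example, a named volume can keep MongoDB's data around even after the container is removed (the volume name `mongo-data` here is just an example):

```shell
# Create a named volume
docker volume create mongo-data

# Mount it at /data/db, the path where MongoDB stores its data files
docker run --name mongo-with-volume -v mongo-data:/data/db -d mongo

# List volumes and inspect where the volume lives on the host
docker volume ls
docker volume inspect mongo-data
```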

Hands-On

Let's try running an express application on one container that communicates with Mongodb on another container.

Now for these two containers to communicate with each other, they need to be in the same network.

# Create a network named "mongo-net"
docker network create -d bridge mongo-net
# Run MongoDB
docker run -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=password --name mongo-latest --network mongo-net -p 27017:27017 -d mongo
  • Access the mongo shell inside the container

      # Create a TTY to the container "mongo-latest"
      # This would land you in root@<container_id>:/#
      docker exec -it mongo-latest bash
    
      # Access Mongo Shell inside the container
      mongosh -u root -p password
    
      # To exit the mongo shell
      exit
    
      # To exit the container bash shell
      exit
    
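Instead of an interactive shell, you can also run a single command in the container and return straight to your terminal:

```shell
# Ping the database without opening an interactive shell
docker exec mongo-latest mongosh -u root -p password --eval "db.adminCommand('ping')"
```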
# Run express application on port 8081
docker run -e ME_CONFIG_MONGODB_ADMINUSERNAME=root -e ME_CONFIG_MONGODB_ADMINPASSWORD=password -e ME_CONFIG_MONGODB_URL=mongodb://root:password@mongo-latest:27017/ --network mongo-net --name mongo-exp -p 8081:8081 -d mongo-express

Now we have two containers (MongoDB and Express application) communicating with each other.

Try opening http://localhost:8081/

Aren't all these commands too long, making your terminal messy? 🤯

Docker Compose comes to the rescue!

Docker Compose simplifies the process of running and managing these containers as a group. Using Docker Compose, you can specify things like which images to use, the ports to expose, environment variables, and how the containers should connect. It's like providing instructions to Docker Compose on how to set up and run your application's various components.

Once you have defined the configuration in the docker-compose.yaml file, you can use a single command to start all the containers together. Docker Compose takes care of creating the necessary networks and volumes and linking the containers according to your specifications.

Docker Compose also provides commands to stop, restart, or scale your containers. It simplifies the process of managing and orchestrating your application's infrastructure, especially when you have complex setups with multiple interconnected containers.

Let's try our mongo-express application with Docker Compose.

Create a file docker-compose.yaml

version: '3.1'

services:

  mongo-latest:
    image: mongo
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: password
      ME_CONFIG_MONGODB_URL: mongodb://root:password@mongo-latest:27017/
# Start the containers
docker-compose -f docker-compose.yaml up
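A few other Compose commands you will reach for often (run them from the directory containing docker-compose.yaml):

```shell
# Start the services in the background
docker-compose -f docker-compose.yaml up -d

# Show the status of the services
docker-compose -f docker-compose.yaml ps

# Follow the combined logs of all services
docker-compose -f docker-compose.yaml logs -f

# Stop and remove the containers and the default network
docker-compose -f docker-compose.yaml down
```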

In this blog post, we've peeled back the layers of Docker, unveiling its simplicity and showcasing its potential. Whether you're a developer, system administrator, or technology enthusiast, Docker's magic can empower you to unlock new possibilities in application deployment. Embrace Docker, and embark on a journey of simplified and efficient software delivery.

Happy Hacking! 🚀
