Containerization with Docker: A Beginner's Guide for Web Developers
Learn containerization basics with Docker: build, run, and deploy applications seamlessly across different environments. Beginner's guide for web developers.
As a web developer, you've probably heard of containerization and Docker, but you may be unsure of what they are and how they work. Containerization is a method of virtualization that allows you to run applications in isolated environments, while Docker is a popular containerization platform that simplifies the process of building, deploying, and running applications in containers. In this beginner's guide, I'll walk you through the basics of containerization with Docker and show you how to get started with containerizing your web applications.
What Exactly is Docker
At its core, Docker is a tool that helps developers build and run software applications in a way that is efficient, reliable, and scalable. It accomplishes this by "containerizing" applications, which means isolating them from the underlying system and packaging them with all the necessary components they need to run.
So why is this important?
Well, when you're developing software, you want to be able to easily test and deploy your applications without worrying about compatibility issues or dependencies on other software components. Docker makes this process much easier by providing a consistent and reliable environment for your applications to run in.
Some of the benefits of using Docker include:
- Portability: Because Docker containers are self-contained and isolated from the underlying system, they can be easily moved from one environment to another without any compatibility issues.
- Scalability: Docker allows you to easily scale your applications up or down depending on your needs, by spinning up or shutting down containers as necessary.
- Consistency: With Docker, you can ensure that your applications are running in a consistent and reliable environment, which makes it easier to troubleshoot issues and ensure that your software is running as expected.
Some common use cases for Docker include:
- Microservices architecture: Docker is often used in microservices architecture to containerize each individual service, making it easier to manage and scale the overall application.
- Continuous integration and deployment (CI/CD): Docker can be used in CI/CD pipelines to build and test applications in a consistent environment, and then deploy them to production using container orchestration tools like Kubernetes.
- DevOps: Docker is a popular tool in the DevOps world, as it allows developers and operations teams to work together more seamlessly by providing a consistent environment for testing and deployment.
Overall, Docker is a powerful tool that can help developers build and run applications more efficiently and reliably. By containerizing your applications, you can ensure that they run consistently across different environments, and easily scale them up or down as necessary.
Installing Docker
Docker is a powerful containerization platform that can help you streamline your software development process. But before you can start using Docker, you'll need to install it on your machine. Fortunately, Docker provides easy-to-use installation packages for a variety of operating systems. Here, we'll walk through the steps of installing Docker on Ubuntu.
First, update the package index and install the necessary dependencies:
$ sudo apt update
$ sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release
Next, add the Docker GPG key to your system:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
Add the Docker repository to your system's sources list:
$ echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Update the package index again:
$ sudo apt update
Finally, install Docker using the following command:
$ sudo apt install docker-ce docker-ce-cli containerd.io
After the installation is complete, verify that Docker is running with the following command:
$ sudo systemctl status docker
If Docker is running, you should see output like the following, indicating that the service is active and running:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2023-03-30 09:00:00 PDT; 5 seconds ago
Docs: https://docs.docker.com
Main PID: 1234 (dockerd)
Tasks: 15
Memory: 46.6M
CGroup: /system.slice/docker.service
├─1234 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
├─2345 docker-containerd --config /var/run/docker/containerd/containerd.toml
├─3456 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1234...
└─4567 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/5678...
Mar 30 09:00:00 server systemd[1]: Starting Docker Application Container Engine...
Mar 30 09:00:01 server dockerd[1234]: time="2023-03-30T09:00:01.060788357-07:00" level=info msg="Starting up"
Mar 30 09:00:01 server dockerd[1234]: time="2023-03-30T09:00:01.066069561-07:00" level=info msg="parsed scheme: \"unix\""
Mar 30 09:00:01 server dockerd[1234]: time="2023-03-30T09:00:01.066194880-07:00" level=info msg="scheme \"unix\" not registered, fallback to...y"
Mar 30 09:00:01 server dockerd[1234]: time="2023-03-30T09:00:01.066313699-07:00" level=info msg="ccResolverWrapper: sending update to cc: {[u...
Mar 30 09:00:01 server dockerd[1234]: time="2023-03-30T09:00:01.066404257-07:00" level=info msg="ClientConn switching balancer to \"pick_firs...)"
Mar 30 09:00:08 server dockerd[1234]: time="2023-03-30T09:00:08.394551267-07:00" level=info msg="API listen on /var/run/docker.sock"
Mar 30 09:00:08 server systemd[1]: Started Docker Application Container Engine.
That's it! Once you've completed these steps, you should have Docker up and running on your Ubuntu machine.
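If you later want to sanity-check the installed client from a script, the version string printed by `docker --version` is easy to parse. The following is a minimal Python sketch; the `parse_docker_version` helper is our own illustration, not part of Docker:

```python
import re

def parse_docker_version(output: str) -> str:
    """Extract the version number from `docker --version` output,
    e.g. 'Docker version 24.0.7, build afdd53b' -> '24.0.7'."""
    match = re.search(r"Docker version (\S+?),", output)
    if match is None:
        raise ValueError("unrecognized `docker --version` output")
    return match.group(1)

# Example usage (requires the docker CLI on PATH):
#   import subprocess
#   raw = subprocess.run(["docker", "--version"],
#                        capture_output=True, text=True, check=True).stdout
#   print(parse_docker_version(raw))
```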
Building Your First Docker Container
Before building your first Docker container, it's important to have a directory with all the necessary files. In this example, we'll assume that you have a directory named "myapp" with the following files:
- app.py
- requirements.txt
Here's a step-by-step guide, along with sample code, to help you build your first Docker container:
Write The Code for Our Application
Here's a simple example of what the app.py file might contain. Simply copy and paste this code into the file on your system:
# Sample app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
Now we should add the required packages for our application to the requirements.txt file:
Flask==2.1.2
Create a Dockerfile
Now, create a new file called "Dockerfile" with the following contents inside your "myapp" directory where app.py and requirements.txt exist:
# Choose a base image
FROM python:3.9-slim-buster
# Set the working directory
WORKDIR /app
# Copy the requirements file and install dependencies
COPY requirements.txt .
RUN pip3 install -r requirements.txt
# Copy the rest of the application files
COPY . .
# Specify the default command
CMD ["python3", "app.py"]
Now that you have done all the previous steps, your directory should look like this:
- Dockerfile
- app.py
- requirements.txt
Let's take a look at each command we wrote in our Dockerfile and see what it does:
- Choose a base image: In the Dockerfile, we're using the "python:3.9-slim-buster" base image. This is a lightweight version of Python 3.9, based on the Debian Buster distribution.
- Set the working directory: We set the working directory to "/app" using the WORKDIR command. This is where our application code will be copied to inside the container.
- Copy the requirements file and install dependencies: Next, we copy the "requirements.txt" file from the host machine to the container using the COPY command. Then we use pip3 to install all the required dependencies listed in the requirements file.
- Copy the rest of the application files: We then copy the rest of the files in the "myapp" directory to the container using the COPY command. The first dot (".") refers to the source directory on the host machine, which is the current working directory (i.e., "myapp"). The second dot (".") refers to the destination directory inside the container, which is "/app".
- Specify the default command: Finally, we specify the default command that should be run when the container starts up. In this case, we're telling the container to run the "app.py" file using Python 3.
Now, we're all set to build our Docker container!
To build the Docker container, navigate to the "myapp" directory in your terminal and run the following command:
$ sudo docker build -t myapp .
This will create a new Docker image named "myapp" based on the instructions in the Dockerfile.
Once the image has been built, you can run the container using the following command:
$ sudo docker run -p 5000:5000 myapp
This will start the container and map port 5000 on the host machine to port 5000 inside the container. You should now be able to access your application by navigating to http://localhost:5000 in your web browser.
And that's it! With these steps, you've successfully built your first Docker container for your Python application.
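One practical note: `docker run` returns as soon as the container starts, which can be slightly before the app inside is ready to serve requests. In scripts and CI pipelines it's common to wait until the published port actually accepts connections. Here's a small stdlib-only sketch; the `wait_for_port` helper is our own, not a Docker API:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP port accepts connections, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection completes the TCP handshake, so success
            # means something is listening on the port.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

After `sudo docker run -p 5000:5000 myapp`, a call like `wait_for_port("localhost", 5000)` returns once the Flask server starts accepting connections.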
Managing The Containers
Now that you've successfully built and run a Docker container, it's time to learn how to manage these containers as well.
You can use the docker ps command to view a list of running containers:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
123abc456def myapp "python app.py" 5 minutes ago Up 5 minutes 0.0.0.0:5000->5000/tcp eager_leakey
We can see that there is one running container with the ID 123abc456def, using the myapp image, and listening on port 5000.
To stop the container, you can use the docker stop command followed by the container ID or name:
$ sudo docker stop 123abc456def
123abc456def
This command will send a signal to the container to stop gracefully. The container should stop within a few seconds. You can verify that the container has stopped by running the docker ps command again:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
In this case, the docker ps command doesn't show any running containers, since we have stopped the only container that was running.
If for some reason the container doesn't stop gracefully, you can use the docker kill command to force it to stop immediately:
$ sudo docker kill 123abc456def
Understanding Docker Images and Registries
Docker images and registries are important concepts to understand when working with Docker.
In simple terms, a Docker image is a lightweight, standalone, and executable package that contains everything needed to run an application, including the code, dependencies, and system libraries. It is essentially a snapshot of a specific version of an application and its environment.
Docker images are created using a Dockerfile, which is a text file that contains instructions for building the image. These instructions can include things like installing packages, copying files, and setting environment variables.
Once an image is created, it can be stored in a Docker registry, which is a centralized location for storing and managing Docker images. There are many public and private Docker registries available, including Docker Hub, which is the default public registry for Docker images.
Here are some example commands for working with Docker images:
Pulling an image from a registry:
$ sudo docker pull nginx
Output:
Using default tag: latest
latest: Pulling from library/nginx
1b0b43d710d1: Pull complete
33e0ff2b798d: Pull complete
6b8cb6a3d6ea: Pull complete
Digest: sha256:6c825116ec978171c6a5049f7f401cc07d75f42af5b13c5e77fb20f90c1e7d81
Status: Downloaded newer image for nginx:latest
This command pulls the latest version of the Nginx image from Docker Hub and stores it locally on your machine.
Listing images stored on your machine:
$ sudo docker images
Output:
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 9fa3745a47de 2 minutes ago 133MB
This command lists all the images that are currently stored on your machine.
Removing an image from your machine:
$ sudo docker rmi nginx
Output:
Untagged: nginx:latest
Untagged: nginx@sha256:6c825116ec978171c6a5049f7f401cc07d75f42af5b13c5e77fb20f90c1e7d81
Deleted: sha256:9fa3745a47dee3a280a4d4c26cb2b92a4c76765f6cf28c6ed8ec59f24c80434a
This command removes the Nginx image from your machine.
Updating an image:
$ sudo docker pull nginx:latest
Output:
latest: Pulling from library/nginx
1b0b43d710d1: Already exists
33e0ff2b798d: Already exists
6b8cb6a3d6ea: Already exists
Digest: sha256:6c825116ec978171c6a5049f7f401cc07d75f42af5b13c5e77fb20f90c1e7d81
Status: Image is up to date for nginx:latest
This command updates the Nginx image to the latest version available on Docker Hub.
All You Should Know About Docker Image Tags
When using Docker, image tags are used to label and version the images. When you want to pull an image from a registry, you can specify the tag to pull a specific version of the image. If no tag is specified, Docker will pull the "latest" tag by default.
This can be a problem when relying on an image with the "latest" tag for your application, because the image creator might update the "latest" tag with a new version that has security vulnerabilities. If this happens, any applications using the "latest" tag will also be vulnerable.
To avoid this issue, it's best to use specific version tags when pulling Docker images, instead of relying on the "latest" tag. This ensures that the same version of the image is used consistently across different environments and prevents unexpected updates.
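To make the tagging rule concrete, here is a tiny sketch of how an image reference splits into repository and tag, with Docker's "latest" default. The helper is our own simplification: it ignores registry hosts with ports and digest references:

```python
def split_image_ref(ref: str) -> tuple[str, str]:
    """Split 'name:tag' into (name, tag); the default tag is 'latest'.
    Simplified: does not handle registry hosts with ports or digests."""
    if ":" in ref:
        name, tag = ref.rsplit(":", 1)
        return name, tag
    return ref, "latest"
```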
Working with Docker Compose
Docker Compose is a tool used for defining and running multi-container Docker applications. It allows you to define all of your application's services, networks, and volumes in a single docker-compose.yml file, making it easier to manage your containers as a group.
To start using Docker Compose, you'll need to create a docker-compose.yml file in your project directory. Let's use the example code we wrote earlier to create a Docker Compose file:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
This Docker Compose file defines a single service called "web" that will be built using the current directory (.) as the build context. It also maps port 5000 on the host to port 5000 in the container.
To start the services defined in your Docker Compose file, use the docker compose up command:
$ sudo docker compose up
Building web
Step 1/6 : FROM python:3.9-slim-buster
---> 46249f1c5784
Step 2/6 : WORKDIR /app
---> Using cache
---> 6e2b6e8f6dcf
Step 3/6 : COPY requirements.txt .
---> Using cache
---> 1f3a9b7c5d2e
Step 4/6 : RUN pip3 install -r requirements.txt
---> Using cache
---> 9c4e2d8a6b1f
Step 5/6 : COPY . .
---> Using cache
---> 798c3d3b3fc3
Step 6/6 : CMD ["python3", "app.py"]
---> Using cache
---> 2722ffed9d99
Successfully built 2722ffed9d99
Successfully tagged myapp_web:latest
Creating myapp_web_1 ... done
Attaching to myapp_web_1
web_1 | * Serving Flask app 'app' (lazy loading)
web_1 | * Environment: production
web_1 | WARNING: This is a development server. Do not use it in a production deployment.
web_1 | Use a production WSGI server instead.
web_1 | * Debug mode: on
web_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
web_1 | * Restarting with stat
web_1 | * Debugger is active!
web_1 | * Debugger PIN: 263-494-858
The docker compose up command will build the Docker image (if it hasn't already been built) and start the container. You can access your application by visiting http://localhost:5000 in your web browser. To stop the container, simply press Ctrl+C.
You can use the -d flag to start the containers in detached mode (in the background):
$ sudo docker compose up -d
To stop the running containers, use the docker compose down command:
$ sudo docker compose down
Stopping myapp_web_1 ... done
Removing myapp_web_1 ... done
Removing network myapp_default
If you've made changes to your Dockerfile or any of your application's code, you can use the --build flag to force Docker Compose to rebuild the Docker image before starting the container:
$ sudo docker compose up --build
The Compose file we just wrote is quite basic and only includes one service, so you may be wondering how useful it really is. However, if your application requires multiple services to run, managing and configuring each container individually can be a challenging task.
Take this code for example, where our web application requires MySQL and Redis:
version: '3'
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - db_data:/var/lib/mysql
  redis:
    image: redis:alpine
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
      - redis
volumes:
  db_data:
Although this example includes only three services, in more complex scenarios, you may need tens of services to work together.
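Conceptually, depends_on turns the services into a dependency graph, and the start order is a topological sort of that graph. A rough stdlib illustration (this is not how Compose is implemented internally):

```python
from graphlib import TopologicalSorter

# Service -> services it depends on, mirroring the depends_on
# entries in the compose file above.
deps = {
    "db": set(),
    "redis": set(),
    "web": {"db", "redis"},
}

# db and redis come before web; their order relative to
# each other is unspecified.
start_order = list(TopologicalSorter(deps).static_order())
```

Note that depends_on only orders container startup; it does not wait for a service to be ready to accept connections, which is why health checks or wait loops are still common in practice.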
Deploying Your Dockerized Application to a Server
Deploying a Dockerized application to a server involves a few steps. First, you need to push the built image to a Docker registry, such as Docker Hub or your own private registry, so that it can be pulled on the server. Then, you can either copy your entire project, including the Docker Compose file, to the server or create the necessary files directly on the server.
Here are the steps to push the built image to a Docker registry and pull it on the server.
First, log in to your Docker registry account using the docker login command:
$ sudo docker login <registry-url>
Tag the Docker image with the registry URL and version:
$ sudo docker tag <image-name> <registry-url>/<image-name>:<version>
Push the Docker image to the registry:
$ sudo docker push <registry-url>/<image-name>:<version>
On the server, pull the Docker image from the registry:
# docker pull <registry-url>/<image-name>:<version>
Once you have the Docker image on the server, you can simply copy the Docker Compose file to the server or create the necessary files directly on the server. Then, run the following command to start your application:
# docker compose up -d
This will start all the services defined in the Docker Compose file in detached mode. You can check the status of the containers using the docker ps command.
That's it! Your Dockerized application is now deployed and running on the server.
Best Practices for Using Docker in Web Development
Here are some best practices for using Docker in web development:
Use a .dockerignore file to optimize build times
When building a Docker image, it's important to avoid copying unnecessary files into the image. This can slow down build times and result in larger image sizes. To avoid this, use a .dockerignore file to specify files and directories that should be excluded from the build context. Here's an example .dockerignore file:
node_modules
*.log
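Conceptually, every path in the build context is checked against these patterns before being sent to the Docker daemon. The real matcher follows Go's filepath.Match semantics (plus ** and ! exceptions); the sketch below is only a loose stdlib approximation of the idea, with a helper of our own:

```python
from fnmatch import fnmatch

def is_ignored(path: str, patterns: list[str]) -> bool:
    """Loose approximation of .dockerignore matching: a path is
    excluded if it, or any leading directory of it, matches a pattern.
    Real .dockerignore semantics differ in several details."""
    parts = path.split("/")
    prefixes = ["/".join(parts[:i + 1]) for i in range(len(parts))]
    return any(fnmatch(p, pat) for p in prefixes for pat in patterns)
```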
Keep your images small and simple
Large and complex images can slow down deployments and increase the risk of errors. Keep your images small and simple by using the official Docker images when possible, minimizing the number of layers in your Dockerfile, and removing unnecessary dependencies.
# Use the official Node.js image as the base
FROM node:18-alpine
# Set the working directory
WORKDIR /app
# Install dependencies
COPY package.json yarn.lock ./
RUN yarn install --production
# Copy the application code
COPY . .
# Expose the application port
EXPOSE 3000
# Start the application
CMD ["yarn", "start"]
Use environment variables for configuration
Avoid hardcoding configuration values in your Dockerfile or application code. Instead, use environment variables to pass configuration values into your container at runtime. This makes it easier to manage and update configuration values, and makes your application more flexible.
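For example, an app can read its settings from os.environ with sensible defaults. The variable names below (APP_DEBUG, APP_PORT, DATABASE_URL) are illustrative choices of ours, not a convention Docker imposes:

```python
import os

def load_config(env=None):
    """Read app settings from environment variables, with defaults.
    Accepting a mapping explicitly makes the function easy to test."""
    env = os.environ if env is None else env
    return {
        "debug": env.get("APP_DEBUG", "false").lower() == "true",
        "port": int(env.get("APP_PORT", "5000")),
        "db_url": env.get("DATABASE_URL", "sqlite:///app.db"),
    }
```

At runtime, you pass the values into the container with docker run -e APP_PORT=8000 ... or an environment: block in docker-compose.yml.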
Use volumes for persistent data
When running containers, any data written to the container's writable filesystem layer is lost when the container is removed. To persist data across container lifecycles, use volumes to mount a directory from the host machine into the container. This stores the data outside the container, so it persists even when the container is stopped or removed.
# Start a PostgreSQL container with a named volume
$ sudo docker run --name mydb -v mydata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=password -d postgres
# Start the application container with the named volume mounted
$ sudo docker run --name myapp -v mydata:/app/data -p 3000:3000 myimage
Avoid running containers as root
Running containers as the root user can pose a security risk. Instead, create a new user inside the container and run the container as that user. This limits the potential damage that could be caused by a container compromise.
# Create a new user inside the container
RUN adduser --system appuser
# Run the container as the appuser
USER appuser
# Start the application
CMD ["node", "index.js"]
Conclusion
Docker is a powerful tool for containerizing and deploying applications. In this tutorial, we covered the basics of Docker, including installation, building containers, managing containers, working with images and registries, using Docker Compose, and best practices for using Docker in web development. By following these steps, you can easily containerize your application and deploy it to any server with Docker installed. With Docker, you can streamline your development and deployment process, making it faster and more efficient.