Docker 101

Glenda Emanuella Sutanto
6 min read · May 2, 2021


Source: twitter.com/Docker

This is written to fulfill individual review criteria for PPL Fasilkom UI 2021.

Have you ever seen this picture? Which one do you think is the biggest? Despite the abundance of containers on the real ship, it is still not the biggest. According to the Twitter account @Docker, the biggest one is the whale carrying containers on its back. What is that whale, why is it the biggest, and what do those containers mean? Read on to find out.

Docker

Docker is a container-based platform that makes it easier to create, deploy, and run applications. Containers allow a developer to bundle an application’s components, including libraries and other dependencies, and ship them as a single package. This guarantees that the application will run on any other Linux machine, regardless of any customized settings that machine might have.

In some respects, Docker is similar to a virtual machine. Unlike a virtual machine, however, Docker lets applications share the Linux kernel of the host they run on, and only requires them to be shipped with things that aren’t already present on the host. This improves performance and reduces the size of the application.

Docker is both open source and free. This means that everyone can contribute to Docker and customize it to meet their own requirements if they need features that aren’t currently available.

Container

A container is a standard unit of software that packages code and all of its dependencies so that an application can be moved quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone software package that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.

Docker containers are available for both Linux- and Windows-based applications. Containerized software always runs the same way, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works correctly despite any differences between environments.

Why Docker

Consistent and Isolated Environment
Developers can build predictable environments that are isolated from other applications by using containers. Everything stays consistent regardless of where the software is deployed, which results in a massive productivity gain.

Cost-effectiveness with Fast Deployment
Containers operated by Docker are known for cutting deployment time to seconds. By every measure, that’s a remarkable achievement: provisioning and getting the hardware up and running used to take days or even weeks.

Mobility — Ability to Run Anywhere
Docker images are free of environmental constraints, allowing for consistent, portable, and scalable deployments. Containers also have the advantage of being able to run on any OS (Windows, macOS, Linux, VMs, on-premises, or in the public cloud), which is a major benefit for both development and deployment.

Docker Implementation Example

To build your own container for your application with Docker, I will walk through the steps I used for my own project for PPL 2021, Dietela.

The first step is to create a Dockerfile. A Dockerfile is a file that specifies how a Docker image should be built. We could simply use one of the official images that Docker provides on Docker Hub, but with a Dockerfile we can customize the image ourselves. Here is an example of my project’s Dockerfile.
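The original screenshot of the Dockerfile is not reproduced here; the sketch below is a hypothetical reconstruction based on the instructions described underneath (FROM, ENV, RUN, WORKDIR, ADD). The exact paths, versions, and environment variables in the real project may differ.

```dockerfile
# Hypothetical reconstruction of the project's Dockerfile
# (paths and versions are assumptions)
FROM python:3

# Prevent Python from buffering stdout/stderr inside the container
ENV PYTHONUNBUFFERED=1

# Directory where CMD and subsequent commands are executed
WORKDIR /code

# Copy the dependency list into the image and install it
ADD requirements.txt /code/
RUN pip install -r requirements.txt

# Copy the rest of the project code into the image
ADD . /code/
```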

The details of each command are described as follows:

  • FROM — defines the base image used to start the build process
  • ENV — sets environment variables
  • RUN — central executing directive for Dockerfiles
  • WORKDIR — sets the path where the command, defined with CMD, is to be executed
  • ADD — copies the files from a source on the host into the container’s own filesystem at the set destination

From this, we know that a Python 3 parent image is the starting point for this Dockerfile. The parent image is then modified by adding a new code directory containing the project, and modified further by installing the Python requirements defined in the requirements.txt file.

After that, create your requirements.txt file if you don’t have one yet. In a Django project, requirements.txt lists all the libraries and dependencies needed to build the project.
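The actual file is not shown here; a minimal example for a Django project backed by PostgreSQL (the version pins are purely illustrative) might look like:

```
Django>=3.1
djangorestframework>=3.12
psycopg2-binary>=2.8
```

Running pip freeze > requirements.txt in an existing environment is a common way to generate this file.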

Next, create a docker-compose.yml file.

The services that make up your app are defined in the docker-compose.yml file. For this project, we need a web server service and a database service. The compose file also specifies which Docker images these services use, how they interact with one another, and any volumes that may need to be mounted inside the containers. Finally, the docker-compose.yml file specifies which services expose which ports. For more information about how this file works, see the docker-compose.yml reference.
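The original compose file is not reproduced here; the sketch below is a hypothetical reconstruction of a two-service setup (a PostgreSQL database and the Django web server) consistent with the description above. Service names, ports, and variable names are assumptions.

```yaml
# Hypothetical docker-compose.yml: one database service and one
# web service, with a named volume to persist PostgreSQL data
version: "3"

services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=${DATABASE_NAME}
      - POSTGRES_USER=${DATABASE_USER}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data

  web:
    build: .                      # built from the Dockerfile above
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code                   # mount the source for live reloads
    ports:
      - "8000:8000"               # expose Django on localhost:8000
    depends_on:
      - db

volumes:
  postgres_data:
```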

The variables DATABASE_NAME, DATABASE_USER, and DATABASE_PASSWORD can be set in your .env file to increase security.
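A matching .env file, placed next to docker-compose.yml, could look like the following (the database name and user follow the psql command used later in this article; the password is a placeholder):

```
DATABASE_NAME=dietela_backend
DATABASE_USER=postgres
DATABASE_PASSWORD=change-me
```

Remember to add .env to .gitignore so credentials never reach version control.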

createsu is a custom management command that creates the superuser for our Django project. To do this, we can put the code in management/commands/createsu.py.
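The command’s code is not shown in this version of the article; a common pattern for such a command is sketched below. The username, email, and password are illustrative placeholders, not the project’s real values.

```python
# management/commands/createsu.py -- sketch of a custom Django
# management command that creates a superuser if none exists.
from django.contrib.auth import get_user_model
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Create a default superuser for the project"

    def handle(self, *args, **options):
        User = get_user_model()
        # Only create the account on the first run
        if not User.objects.filter(username="admin").exists():
            User.objects.create_superuser(
                username="admin",
                email="admin@example.com",
                password="change-me",
            )
            self.stdout.write("Superuser created.")
        else:
            self.stdout.write("Superuser already exists.")
```

After this, python manage.py createsu can be run like any built-in management command, for example as part of the container’s startup.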

We have finally reached the last step. Now, go to the settings.py of your project and fill in DATABASES = … with the following:

These settings are determined by the postgres Docker image specified in docker-compose.yml.
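The original settings screenshot is not reproduced here; a sketch consistent with the compose setup described above would read the credentials from the environment and point at the db service (inside the compose network, the service name acts as the hostname):

```python
# settings.py -- database settings matching a postgres service
# named "db" in docker-compose.yml (names are assumptions)
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DATABASE_NAME", "dietela_backend"),
        "USER": os.environ.get("DATABASE_USER", "postgres"),
        "PASSWORD": os.environ.get("DATABASE_PASSWORD", ""),
        "HOST": "db",   # the compose service name, not localhost
        "PORT": 5432,
    }
}
```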

Finally, to run your container, run the docker-compose up command from the top-level directory of your project.

After your container is running, you may want to know how to access your database or run commands for your Django project as usual. To do this, you first have to “get in” to your container.

First of all, find the names of your actively running containers by typing docker ps in your terminal.

Here, you can see the details of your containers. The names of the container can be seen in the NAMES column.

For example, we want to “get in” to our database system. In this example, the name of the container is dietela-backend_db_1.

Thus, to get in, we have to type this command:
docker exec -it <container_name> bash
docker exec -it dietela-backend_db_1 bash

After we get into the container, we can access the PostgreSQL server by typing psql -U postgres -d dietela_backend, as usual.

And voila! We can see our PostgreSQL server for our project database there!

Similarly, we can get into the web container to access our Django project by typing docker exec -it dietela-backend_web_1 bash.

Because I use Linux-based containers, I can run common Linux commands there, such as cat, ls, etc.

We can also run Django-related commands, such as creating a Django app, creating a superuser, etc.
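Inside the web container, the usual manage.py commands work exactly as they do locally. For example (the app name here is just an illustration):

```shell
python manage.py startapp example_app   # create a new Django app
python manage.py createsu               # the project's custom superuser command
python manage.py migrate                # apply database migrations
```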

And of course, we can go into localhost:8000/admin and log in with our newly-made superuser!

Very interesting, right? :D

That’s all for today’s Docker article. I hope you gain something new here :) Thank you for reading!

