What The Docker
Docker has changed the way we see services and their management. Container orchestration has not just become popular; slowly but surely, over the years, it has become a standard across the industry. We have come a long way from application servers running services as managed services or web services.
Containers take this to the next level of abstraction: by separating out all dependencies, they make services more independent and portable, allowing you to recreate the same service anywhere, without any issues, within seconds.
Docker has become the de facto containerization standard on platforms like ECS and Kubernetes (open source, AKS, EKS, GKE, etc.), so it has become ever so important to know it. Though we will not go into all the details of Docker, we will cover enough to understand what it is.
Note: Plenty of people have already created good content on this topic for the community; this is just a small attempt to contribute to it.
What is Docker?
Docker is a containerization platform that packages your application and all its dependencies together in the form of a container (literally, imagine a shipping container with your application packed inside it). It abstracts the application in such a way that it works seamlessly in any environment, be it development, test, production, or any other system.
A Docker container wraps a piece of software in a separate filesystem that contains everything needed to run it: code, runtime environment, system tools, libraries, and anything else the app would normally need installed on a server. This guarantees that the software will always run the same, regardless of its environment.
Container: A container is a running instance of an image, and we can create multiple containers from the same Docker image. The Docker image is the source of the Docker container.
To better understand Docker and containers, let's compare them with VMs.
To further clarify containers (because I love visualization...)
The Docker component in the diagram above is the Docker Engine, which you need to install on your system if you want to build, run, or access Docker. It comes in flavors for Mac, Linux, and Windows; however, the underlying system it natively runs on is Linux. You can also run the Docker Engine natively on Windows, but it will only run Windows containers.
A container runs on your system and can be an independent service. For example, take a retail/e-commerce website that processes a payment on each purchase: for this we need an application that does the processing job for us, in this case a payment service, which will be called many times during peak business hours and days. We need something very stable and repeatable for scalability purposes, and that is exactly where things like Docker containers come in.
Now, when this container is built, all the dependencies of the payment service come preconfigured in something called a Docker image.
Docker Build
Images are created with the build command, and they produce a container when started with the run command. We will talk about the run command later; first, the build command.
Now, what is this build command, and against what component is it run to create the Docker image?
- docker build runs against a Dockerfile (and the other files related to the service), which lives in the same place as your code (app/service), i.e., in the GitHub repository.
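For example, building the payment service image from the root of its repository might look like this (the image name here is just illustrative):
# Build an image from the Dockerfile in the current directory (.)
docker build -t payment-service .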
Interesting point to note: when you build the Docker image, Docker creates and removes intermediate containers as it executes each instruction in the Dockerfile. Watch for this while your image builds.
So when we said it packages all dependencies together, that literally translates to: declare every component your service needs in the Dockerfile, place it inside the code repository, package (build) the app as your service, and push it to a registry (the home where Docker images live). The registry can be Docker Hub, JFrog Artifactory, ECR, ACR, etc.
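As a rough sketch of the tag-and-push step (the registry hostname here is a placeholder, not a real endpoint):
# Give the local image a name that points at your registry
docker tag payment-service myregistry.example.com/payment-service:1.0
# Upload it so any machine with access to the registry can pull it
docker push myregistry.example.com/payment-service:1.0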
Lifecycle of Docker: Before I go into more detail and fire a barrage of text, here is a diagram I sourced from the internet to calm your mind and make all this look simpler :).
The screenshot below gives an example (a server app) of a Dockerfile placed alongside a Node.js app; it could just as well be a Java, Scala, .NET, or Python application.
The next question would be: OK, I see my code and the Dockerfile, but… what exactly is a Dockerfile?
Dockerfile: Writing a Dockerfile is like being handed a computer with no OS and having to install, step by step, all the software it needs.
Breakdown of the Dockerfile
#FROM ==> Specifies the base image you want to download, according to the default software requirements of your image.
This could just as easily have been a Windows or Linux image. We are using alpine; I purposely chose that image to highlight that it suited the requirements of what I was building.
I like alpine because it is a very small image (around 5 MB) that takes very little memory, but it also comes with limited functionality in terms of which commands and packages we can set up within it.
There are ways of getting around alpine's limitations while still taking advantage of its leanness, and that is what we do in our example by downloading an alpine image that ships with Node.
FROM node:alpine
#The reason we specify node:alpine after FROM is that 'npm' is not available in the plain alpine base image, and we need a flavor of the alpine image that comes with a basic version of Node.
Refer to the link below for the public/open-source node-based alpine images that are available.
#Location of the app we are setting up, and where our web-app files will live in the container. This isolates and separates our files from the base image.
WORKDIR /app
#Copy over the package.json file, which lists everything required to install the dependencies for our Node app.
COPY package.json ./
#Install the dependencies the app needs to run.
RUN npm install
#Copy over your project files to the container.
COPY . .
The first '.' is the path to the folder on your machine, relative to the build context, i.e., the current working directory.
The second '.' is the path inside the container where the files are copied (relative to WORKDIR).
#Default command to be executed once the container is created.
CMD ["npm", "run", "start"]
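Putting the pieces from the breakdown together, the complete Dockerfile reads:
# Base image: alpine with Node preinstalled
FROM node:alpine
# Directory inside the container where the app files will live
WORKDIR /app
# Copy the dependency manifest and install the dependencies
COPY package.json ./
RUN npm install
# Copy the rest of the project files into the container
COPY . .
# Default command executed when the container starts
CMD ["npm", "run", "start"]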
To further simplify the lifecycle image shown above, refer to the workflow image below for a single container.
What is a Docker Registry/Repository:
Docker images are stored in a Docker registry. A registry is the place where images are pushed after the build; later, we can pull these images whenever we need them.
The default public registry is Docker Hub (registry.hub.docker.com or hub.docker.com). However, images can be stored in any other registry that follows the required protocols, e.g., JFrog Artifactory, ECR (AWS), or ACR (Azure).
The configuration you specify for the service is exclusive to it: CPU, memory, any dependent software or libraries, or, for that matter, a certificate the service might need to function properly and securely. I personally call it the plug-and-play model of Docker.
A Docker container has its own filesystem based on the type of image you selected for the build; however, it will look more or less like your OS.
From the inside, a Docker container may look something like the screenshot below. It is like a de facto VM provided to your application on an exclusive basis; however, it is not a VM. It has just the components your service needs to run, yet it gives you the feel of running your application in its own OS and lets you navigate the directories.
More about the filesystem…
Docker images (Linux native) come with a default filesystem snapshot and a directory structure like /bin /dev /etc /home /proc /root.
This filesystem structure gives you defaults like the ability to execute 'ls', 'cd', 'touch', 'head', 'tail', and other basic Linux commands to navigate and work inside your container. When you create a container, you can also create your own directories and place your application and Docker-level configs in them. On top of that, the container runs your application within this ecosystem.
But the question is how are containers created…
Docker Run: docker run <image name>
We can also pass a command to run when a container is created, for example:
docker run <image name><command>
e.g.: docker run httpd ls
Here "httpd" is the image name, and ls lists the contents of the container we just created, i.e., /bin /build /cgi-bin /conf, etc. Commands like 'ls' or 'echo' only work here because they exist in, and are supported by, the 'httpd' image.
Docker first checks the local cache for the specified image; if it does not find it, it contacts Docker Hub (or your registry) to pull the image.
Once the image is found, Docker downloads it and runs the process, i.e., creates a container based on the image.
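You can watch this check-cache-then-pull behavior with any image you have not downloaded yet, for example Docker's small hello-world image:
# First run: not in the local cache, so Docker pulls the image, then creates and runs a container
docker run hello-world
# Second run: the image is already cached, so the container starts right away
docker run hello-world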
docker ps: This command lists all running containers on your local or remote machine, with their details, as a quick lookup.
docker ps also gives you the ID of each container, i.e., the Container ID.
You can log in to your container's shell using the following command:
docker exec -it <container id> bash
Docker images: To list all the Docker images in the local cache, execute the command below.
docker images
Docker command to list all containers, whether running, stopped, or exited:
docker ps --all
Pulling an existing image from the registry:
docker pull <image name>
docker pull redis
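docker pull fetches the latest tag by default; you can also pin a specific tag (assuming that tag exists in the registry), for example:
docker pull redis:7-alpine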
Here I wanted to avoid docker run and instead use docker create and docker start, to showcase what run is made of.
docker run = docker create + docker start
docker create: docker create <image name>: This command creates a container and outputs a container ID.
docker start: docker start <container id>: This command starts a container based on its container ID. It is also used to start stopped containers.
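A minimal sketch of the two-step flow, reusing the redis image pulled above:
# Create the container; this prints the new container ID but does not start it
docker create redis
# Start it with the ID that docker create printed; -a attaches its output to your terminal
docker start -a <container id>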
docker stop <containername/container id>: Stops the running container. A SIGTERM signal is sent to the main process inside the container, asking it to stop and shut itself down gracefully. (If it does not exit within the grace period, Docker follows up with a SIGKILL.)
docker kill <containername/container id>: Sends a SIGKILL signal to the process inside the container; it gets no chance to shut down any running process cleanly, and the container stops immediately.
docker rm <containername/container id>: Removes the (stopped) container.
docker system prune: Removes all stopped containers plus things like the build cache, which may force you to re-download images you previously downloaded. It does not affect running containers.
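A typical cleanup sequence tying these commands together (the container name is a placeholder for this sketch):
# Ask the container to shut down gracefully (SIGTERM)
docker stop payment-service
# Remove the stopped container
docker rm payment-service
# Sweep up any remaining stopped containers and the build cache
docker system prune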
docker logs <container name/container id>: This command inspects a container's output and shows what is going on inside it. It is very useful when debugging issues with a container.
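For live debugging, the -f (follow) and --tail options are handy:
# Stream new log lines as they arrive, starting from the last 100
docker logs -f --tail 100 <container id>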
docker exec -it <containername/container id> <command to execute on start/shell>
In the screenshot below you can see that I have a Redis container, which I list using the 'docker ps' command. In the next step I execute the command:
docker exec -it aa2b12881dc6 redis-cli
aa2b12881dc6 ==> redis container id
redis-cli ==> command to enter redis cli
This command lets me enter the Docker container in interactive mode; in our case we start redis-cli and can execute Redis commands to set and get values.
What is the '-it' option? The -i (interactive) flag attaches our terminal to the STDIN of redis-cli, so anything we type is sent straight to redis-cli. The -t (tty) flag allocates a pseudo-terminal, which is what makes the output look pretty and gives us terminal features like formatting and an interactive prompt.
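You can see the difference by dropping -t: with -i alone, STDIN is still attached, so you can pipe input into the container, but you get no terminal formatting. A small sketch using the same container:
# Pipe a single command into redis-cli without allocating a pseudo-terminal
echo "PING" | docker exec -i aa2b12881dc6 redis-cli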
Shell or terminal access to a running container:
docker exec -it aa2b12881dc6 bash
aa2b12881dc6 ==> container id
We can use the 'bash', 'sh', 'powershell', or 'zsh' shells, whichever are available inside the image.
Docker ensures container isolation: even if I create two containers from the same Redis image, Docker keeps each container, its execution, and its workings separate, so they operate independently of each other, which is the whole objective of Docker.
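A quick way to see this isolation for yourself (the container names and the second host port are arbitrary choices for this sketch):
# Two independent containers from the same redis image, mapped to different host ports
docker run -d --name redis-one -p 6379:6379 redis
docker run -d --name redis-two -p 6380:6379 redis
# A key set in one container is not visible in the other
docker exec -it redis-one redis-cli set greeting hello
docker exec -it redis-two redis-cli get greeting   # returns (nil)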
To get started with dockerizing an app, refer to the simple Docker example in my public GitHub repository. Go ahead and download the code, customize it further if you want, then build and tag it, run it, and push it to your Docker repository.
You can run your Docker container on AWS, Azure, GCP, Docker Swarm, Kubernetes, etc.; however, our example caters to getting it up on the local system.
1. Build and tag your Docker image (svarpe is my Docker ID; you can use your own)
docker build -t svarpe/pythondocker . (the '.' at the end is the relative build context path where all the files reside)
2. Run the Docker container
docker run --name python-web-app -p 5000:5000 svarpe/pythondocker
3. Push it to the Docker repository (instead of svarpe, use your own ID)
docker push svarpe/pythondocker:latest
OR
You can also pull the Docker image I created and run it directly:
https://hub.docker.com/repository/docker/svarpe/pythondocker
docker pull svarpe/pythondocker:latest
Run and test the app
From Browser
Test via curl or wget commands
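For example, since we mapped host port 5000 above, a quick check from the terminal might look like this (assuming the app responds on the root path):
curl http://localhost:5000/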
This is of course not the end of what Docker is, but the start of a new journey… There are many more topics to talk about, which we can leave for later, as they would lengthen the article further.
I will leave you with a statement that was quite popular when we first used Docker, and I am sure a lot of people can relate.