If we are using docker containers for personal use, we can restart them manually with the docker restart command, which stops and restarts a container after a timeout. In production, however, restarting containers by hand is impractical. To restart containers automatically, we can use docker's restart policies.
Restart policies for docker containers
Restart policies allow a container to restart automatically in certain situations: when a failure occurs, when the daemon starts, or when the container stops. The --restart flag is used with the docker run command when starting a container. There are four restart policies available.
no
Do not automatically restart the container. This is the default.
always
Always restart the container when it stops. If you stop it manually, it stays stopped until the Docker daemon restarts or you restart the container yourself; otherwise the daemon keeps restarting it in a loop until the policy is changed.
on-failure
Restart the container only if it exits due to an error, which manifests as a non-zero exit code. With this policy, any container that stops because of a fault is restarted automatically. An optional retry limit can be added, as shown after this list.
unless-stopped
Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted even after Docker daemon restarts.
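The on-failure policy also accepts an optional maximum retry count. For example, the following gives up after three failed restarts (the image name my-app is a placeholder):
docker run --restart on-failure:3 my-app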
To run a container with a restart policy, use the following pattern
docker run --restart no hello-world
The above command runs the hello-world image with the restart policy set to no, so the container will not be restarted automatically.
docker update --restart always hello-world
Here I use the update command to change the restart policy of the hello-world container to always. The container will now be restarted whenever it exits and will also start when the daemon starts.
To view the events that occur during restarts, we can use the events command.
docker events
Open docker events in one shell and run the container with the always policy in another shell. The event stream shows the hello-world container being restarted automatically every time it stops.
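If the event stream is noisy, it can be filtered down to a single container; for example, assuming the container is named hello-world:
docker events --filter 'container=hello-world'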
We can also restart containers using process managers such as upstart, systemd, or supervisor. I will be posting an article on using process managers with docker later.
In docker, we can connect two or more containers, or containers and non-docker workloads, using networking. Whether they are Windows or Linux containers, docker networking connects them in an easy way. This may be helpful for beginners interested in docker.
There are different network drivers available in docker networking
bridge:
The default network driver when we don't specify one while running a container; new containers attach to the default bridge network automatically. It is best suited for standalone containers.
host:
If you use the host network mode for a container, that container’s network stack is not isolated from the Docker host, and the container does not get its own IP-address allocated.
For instance, if you run a container which binds to port 80 and you use host networking, the container’s application is available on port 80 on the host’s IP address.
overlay:
Overlay networks connect multiple Docker daemons together, and are mainly used to let swarm services communicate with each other.
macvlan:
Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses.
There is also a none driver, which disables networking for a container entirely. Beyond the built-in drivers, third-party network plugins can be installed from Docker Hub.
Bridge networking tutorial
This tutorial explains how two standalone containers can be connected over the bridge network. Here we create two Alpine Linux containers, named alpine1 and alpine2, and then connect them using the bridge.
We can view the network list using the command
docker network ls
You should see the default networks bridge, host, and none listed; any newly created network will also appear here.
Next, create the two alpine containers, alpine1 and alpine2, with the following commands
docker run -dit --name alpine1 alpine
docker run -dit --name alpine2 alpine
The two containers are now created and running detached; verify this as shown below.
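To confirm that both containers are up before connecting them:
docker container ls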
Then inspect the bridge network to view details of the containers connected to it with the following command
docker network inspect bridge
This displays the bridge network's details in JSON format. We can see that alpine1 and alpine2 are attached to the bridge network by default, with different IP addresses: 172.17.0.2/16 and 172.17.0.3/16 respectively.
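A trimmed excerpt of the Containers section looks roughly like this (the container IDs are placeholders, and the addresses on your machine may differ):
"Containers": {
    "<alpine1-container-id>": {
        "Name": "alpine1",
        "IPv4Address": "172.17.0.2/16"
    },
    "<alpine2-container-id>": {
        "Name": "alpine2",
        "IPv4Address": "172.17.0.3/16"
    }
}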
Now connect to the alpine1 container using the attach command
docker attach alpine1
Once attached, first check external connectivity by pinging a website, say google.com; the -c 2 flag limits the ping to two packets.
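For example, from inside alpine1:
ping -c 2 google.com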
Then ping the alpine2 container using its IP address 172.17.0.3 (the /16 is the subnet mask, not part of the address) to check that the two are connected. If the ping succeeds, the two containers are successfully connected over the bridge.
ping -c 2 172.17.0.3
When I was looking at examples of building docker images from dockerfiles, I came across two instructions with similar functionality: COPY and ADD. This article explains the difference between ADD and COPY in a dockerfile, how similar they are, and the best practice of using RUN instead of ADD.
ADD instruction
The ADD instruction is the older one; its job is to copy a file or directory from a source to a destination.
ADD can also perform extra operations, such as extracting compressed files or downloading from a URL.
Here the source may be a local compressed tar file or a URL.
If the source is a local tar archive in a recognized compression format, ADD extracts its contents into the destination; if it is a URL, ADD downloads the file to the destination (a remote archive is not auto-extracted).
If the URL requires authentication, ADD cannot be used; use RUN with curl or wget to download the file instead.
Because ADD's extra behaviors can make builds less predictable and images larger, COPY was introduced for the simple copy job. Syntax for ADD:
ADD <src> <dest>
A simple example of ADD with a local tar file called source.tar.xz, extracted into the destination folder /dest:
ADD source.tar.xz /dest
This example shows how the file at the URL https://example.com/source.tar.xz is downloaded into /dest:
ADD https://example.com/source.tar.xz /dest
COPY instruction
COPY instruction copies the file or directory from a source path to the destination.
Its job is simple: it duplicates the source file or directory at the destination.
It does not extract archives or download files the way ADD does. Syntax for COPY:
COPY <src> <dest>
An example of COPY, copying a file called source.txt to the destination folder /dest:
COPY source.txt /dest
Which should we use: COPY or ADD?
In a dockerfile we use COPY most of the time, because it simply copies files from a source to a destination. ADD is used when there is a specific purpose, such as a local tar archive or a URL source.
For best docker practice, we can use a RUN instruction instead of ADD for remote files, avoiding an extra image layer. A RUN instruction with curl or wget fetches the file directly into the destination.
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
For instance, consider the case where a file called source.tar.gz is downloaded using ADD and then extracted using tar in a RUN instruction, as sketched below.
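A rough sketch of that two-step pattern (the URL and paths are placeholders):
FROM alpine
# layer 1: download the archive into the image
ADD https://example.com/source.tar.gz /tmp/source.tar.gz
# layer 2: extract it and remove the archive
RUN mkdir -p /dest && tar -xzf /tmp/source.tar.gz -C /dest && rm /tmp/source.tar.gz
Even though the archive is removed in the RUN step, it still occupies the earlier ADD layer, so the image stays larger than necessary.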
This makes two layers, which bloats the image and worsens docker performance.
The same result can be achieved within a single layer by using RUN with the curl tool, as the dockerfile below shows: the file is downloaded with curl and extracted with tar in a single RUN instruction. If a plain copy of local files is all that is needed, the COPY instruction remains the recommended choice.
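A minimal sketch, using the same placeholder URL; since alpine does not ship curl, it is installed in the same layer:
FROM alpine
# one layer: install curl, then download and extract in a single step
RUN apk add --no-cache curl \
    && mkdir -p /dest \
    && curl -fsSL https://example.com/source.tar.gz | tar -xzf - -C /dest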
This article helps you deploy a React application using docker containers. To learn about docker and docker containers, refer to this article. We first create a React app using npx, then use a dockerfile to deploy it in a docker container with an nginx image.
Create a React application
First, we need to create a React application. For this we can use the npx create-react-app tool.
npx create-react-app react-docker-example
cd react-docker-example && npm install
npm start
Run the app locally at http://localhost:3000/, or, on a cloud VM such as AWS or Azure, open port 3000 to check that the app is running.
Create dockerfile
First, create a dockerfile in the React app folder with the following contents.
# build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
FROM sets the base image. Here we use the node:lts-alpine image to build the production artifacts for the React app; this stage is named build-stage.
WORKDIR sets the directory inside the image where the app is built.
Then we copy package*.json into the workdir /app.
Then npm install installs the dependencies, and npm run build builds the production bundle (create-react-app writes it to the build folder).
In the production-stage we serve the React application using the nginx:stable-alpine image.
For this we copy the build output into nginx's html folder, /usr/share/nginx/html, and then expose port 80.
Build the docker image
The next step is to build the docker image from the dockerfile using the following command
docker build -t dockertutorial/react-example .
The -t flag tags the image; here we name it dockertutorial/react-example. The trailing . sets the build context to the current folder. The build takes some time to complete.
Run the container
The last step is to run the container. The docker run command starts a container from the image we created:
$ docker run -d -it -p 80:80 --name reactapp dockertutorial/react-example:latest
fd2285a0a2e37afc9d0317ce6c668d2c2bf23a71d75a9df8b8d7134c8d223573
-d runs the container in detached mode.
-p publishes a container port to the host; here port 80 of the container is mapped to port 80 of the local machine.
-it runs the container in interactive mode, and --name sets the name of the container (reactapp here).
Then go to http://localhost:80, or, if you use a cloud VM, use the public IP address with port 80. You should see the default React app page.
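On a headless VM you can also sanity-check from the terminal (assuming curl is installed on the host):
curl -I http://localhost:80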
This react application can also be run with multiple containers using docker compose. This will be our next article.
Docker is an open source platform to develop, ship, and run applications in an isolated way using docker containers. A container is a unit of software that packages all the dependencies, such as binaries and libraries, required to run an application. Containers are not a new technology; they have been in use in UNIX systems since 1979. The DZone article "The Evolution of Linux Containers and Their Future" gives a detailed history of container evolution. Docker containers are isolated from each other but share the same Linux kernel. Docker uses kernel namespaces and cgroups to manage applications. Docker containers can be deployed separately and managed like any usual application.
Docker containers vs Virtual Machines
Docker also uses virtualization, as Virtual Machines do, but it uses OS-level virtualization, whereas a VM uses hardware-level virtualization. Docker shares the operating system kernel, while a VM virtualizes the hardware, which makes docker more lightweight than a VM.
In the VM architecture, a hypervisor manages the guest operating systems running on it, and each guest OS carries its own binaries and libraries for its applications. In docker, all applications share the host's Linux kernel and carry only their own dependencies.
These dependencies are packaged into images called docker images. Docker images are deployed as containers to run the application in an isolated environment. These containers run on the docker engine.
Docker architecture
Docker Images
Images are simple templates with instructions for creating containers. An image can itself be built on top of another image. Images are built using a dockerfile: a file of instructions that assembles an image from a base image and the application's dependencies. The dockerfile syntax is simple. Each instruction in the dockerfile creates a layer of the image. Images are stored in registries, such as the public Docker Hub and cloud registries like Amazon ECR and Azure Container Registry.
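As a minimal sketch, a dockerfile that packages a small shell script might look like this (the file name hello.sh is hypothetical); each instruction below becomes one layer of the image:
FROM alpine
# copy a small script into the image (hello.sh should start with #!/bin/sh)
COPY hello.sh /hello.sh
RUN chmod +x /hello.sh
CMD ["/hello.sh"]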
Docker Containers
Containers are runnable instances of docker images. A container packages all the binaries and libraries needed to run the application in an isolated environment. The Docker CLI and Docker API can be used to create, delete, start, and stop containers through the Docker daemon. Each container has its own network, storage, and configuration. A container can connect to other containers over its networks. When a container is deleted, its writable storage is removed with it, so persistent data should be kept in volumes rather than inside the container.
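For example, the basic container lifecycle from the Docker CLI (the container name web is arbitrary):
docker create --name web nginx   # create a container from the nginx image
docker start web                 # start it
docker stop web                  # stop it
docker rm web                    # delete it, removing its writable storage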
Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages all the docker objects in the docker environment: containers, images, storage, and networks. A daemon can also communicate with other daemons for container management and network communication.
Docker client
The Docker client (docker) is the interactive way to manage the docker system. When you run a docker command, the client calls the Docker API, and the docker daemon carries out the request.
Registry
Docker images are stored in registries. Docker Hub is a public registry; we can create images and push them to it. The docker pull and docker push commands download and upload images from and to a registry. Cloud providers also offer registries, such as Amazon ECR on AWS and Azure Container Registry on Azure.
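For example, pulling a public image and pushing your own (myaccount is a placeholder for your Docker Hub account):
docker pull hello-world                     # download an image from Docker Hub
docker tag hello-world myaccount/hello:v1   # tag it for your own repository
docker push myaccount/hello:v1              # upload it (requires docker login first)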
Docker installation
Docker runs natively on Linux, since it relies on Linux namespaces and cgroups. On Windows and macOS it runs with some extra machinery, and Docker Desktop, a GUI-based docker tool, is available for both. The official documentation describes installing docker on macOS, Windows, and the various Linux distros.
Running our first docker container
First, check that docker is installed on your system using the following command
$ docker --version
Docker version 19.03.8, build afacb8b7f0
Now we run our first container from the hello-world image. If the image is not present on the local system, docker pulls it from Docker Hub and runs it.
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:ca0eeb6fb05351dfc8759c20733c91def84cb8007aa89a5bf606bc8b315b9fc7
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To list the containers on the system, use the ps command; the --all flag includes stopped containers.
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS
54f4984ed6a8 hello-world "/hello" 20 seconds ago Exited (0) 19 seconds ago
To list the pulled images, use
$ docker image ls
The next post will be about building a docker image.