What Exactly Is Docker?
Whenever we have to install software, we have to take care of a lot of things. There are many different versions of software available for different operating systems and their different versions. You have to go through the documentation, choose the correct fit for your needs, and then run the executable file. Even after that, you may need to complete some other steps before you are able to use that software. Docker runs containers, which contain the software plus the additional things that the software needs to run. So, this means you just use a ‘docker run’ command with the name of the image that you want to install, and voila, your software runs in its own container, using its own resources. You do not have to worry about which version of the software suits your operating system, etc. I will demonstrate this with an example of MongoDB installation.
What Is the Importance of Docker for Me as a Developer in IT?
Well, developers can simply write their code and create an image. This image will contain all the tools needed for the application to run. This image simply needs to be deployed on a production machine which has no prior software installed, and the application will run exactly as on the development machine.
So How Exactly Do We Use Docker?
Once we have installed Docker on our system, we go to Docker Hub or some other registry and search for the software that we want to install. We can then run a PowerShell command ‘docker run imageName’, and the software is ready for our use.
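For instance — a hypothetical example, assuming you want a Redis server (‘redis’ is the name of the official image on Docker Hub):

```shell
# Download (if needed) and start a Redis server in one step.
# -d runs the container in the background;
# -p publishes the container's port 6379 on the host;
# --name gives the container a name of our choosing.
docker run -d --name my-redis -p 6379:6379 redis
```

No installer, no setup wizard — the daemon fetches the image and starts the server.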
What's the Difference Between Containers and Docker?
Yes, these two terms are used interchangeably a lot. But they mean different things. Containers are self-contained processes which include the running software along with its dependencies, etc. Containers have existed in Linux for a long time, but they were not widely used. According to Docker's official website – Docker is ‘a platform for developers and sysadmins to develop, deploy, and run applications with containers’. So to sum up, Docker helps us manage containers, and containers are processes that run applications.
Advantages of Containers
As mentioned above, running software as containers simplifies the process of running software and applications. Suppose you have an ASP.NET application. A developer can create an image of the working application. This image will contain the application, the ASP.NET Framework, etc. Now this image can be deployed as a container on a prod machine that needs no other software installed beforehand. Whatever is needed for the application to run will be present in the container. The container will run the same on all systems. So you will no longer have issues like an application running on dev but failing on prod.
Containers and Virtual Machines
Containers and virtual machines might look the same, but they are quite different. A container contains only the tools that the application needs, and it shares the host operating system kernel with other containers. Virtual machines, on the other hand, have their own fully independent operating systems. Since containers do not have their own full-fledged operating system, they are lighter than virtual machines.
The Docker official website explains the Docker Engine with a client-server diagram. A Docker engine consists of a client and a server. We users interact with the server using the Docker CLI, which is the client. The client interacts with the server through the Docker REST API. The server, or Docker daemon, is responsible for running the containers. When the user types in a command from the Docker CLI, for example the ‘docker run imagename’ command, the request is received by the Docker daemon. The daemon searches for the image locally, and if found, runs it as a container. Think of an image as an executable file. If the image is not found locally, the daemon pulls it from a registry and then runs it as a container.
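This lookup behavior is easy to observe — a small sketch, assuming Docker is installed and the small ‘alpine’ image is not yet on your machine:

```shell
# First run: the daemon does not find 'alpine' locally,
# so it pulls it from Docker Hub, then runs it as a container.
docker run alpine echo "first run"

# Second run: the image is now cached locally, so nothing is downloaded.
docker run alpine echo "second run"

# The cached image shows up in the local image list.
docker images alpine
```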
Now, let’s start exploring docker practically. You need Windows 10 Professional or Enterprise version with at least 4 GB RAM to install Docker. Since I didn’t have Windows 10 Professional, I created a virtual machine in Azure. Here are the steps:
Go to the Azure portal and create a new virtual machine.
Choose a Windows 10 Professional machine. Not all VM sizes support nested virtualization, so I selected a VM of size D2s_v3. Selecting a VM size that supports nested virtualization is important for running Docker.
Also, make sure that the inbound and outbound port rules allow RDP connections; otherwise, you might not be able to connect through RDP.
If you are trying to access Azure from your office you might run into issues. You might need to contact your system administrator to open up these ports. Once our VM is up and running, we need to install Docker. Go here to install Docker for Windows.
Once you have installed the above software, your system will restart and Docker will ask you to enable Hyper-V. Click "Yes, restart the system".
By default, Linux containers will be enabled. You can switch between Linux and Windows containers by right-clicking on the whale icon in the system tray.
Go to PowerShell, type ‘docker run hello-world’, and press Enter. You should see a message ‘Hello from Docker!’, which means Docker is installed correctly.
Read the steps listed in the output. This is what we had talked about earlier: the daemon could not find the image locally, so it pulled it from Docker Hub and ran it as a container.
It is possible that if you are trying to run Docker from your workplace, you might face some proxy-related issues. You can set your proxy by navigating to Settings.
Demo: Running Your ASP.NET Application as a Container
Create a new ASP.NET MVC Core project in Visual Studio 2017. While creating, make sure you have the ‘Enable Docker Support’ option checked.
I call my app ‘aspnetapp’. Once you enable Docker support, a file called Dockerfile is created in the Solution Explorer.
Replace the existing code in this file with the following piece of code:
FROM microsoft/dotnet:sdk AS build-env
WORKDIR /app

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "aspnetapp.dll"]
Go to PowerShell and navigate to the project directory. Once there, run the command:
docker build -t aspnetapp .
Once the project is built, run the following command:
docker run -d -p 8080:80 --name myaspnetapp aspnetapp
Once this is successful, go to localhost:8080 to see the app running.
So what happened here? The Dockerfile provides the information needed for creating an image. For example, it says that the final image should be built on the base image microsoft/dotnet:aspnetcore-runtime. An image for the application is created when you run the build command. If you type ‘docker images’ in PowerShell, you will see the image listed.
When you run the image using the ‘docker run’ command, it runs this image as a container, where ‘myaspnetapp’ is the container name and ‘aspnetapp’ is the image name. The -p 8080:80 option in the run command maps port 8080 on the host to port 80 inside the container. So, when you navigate to localhost:8080, you can find your containerized application running. You can check all the running containers using the command ‘docker ps’.
For further information on this demo, refer to the official docker website here.
So, developers can create their image and upload it to a repository. This image can then be simply run on production machines when the application needs to be made live.
Now let’s check out how we can run MongoDB as a container.
Demo: Installing MongoDB
The Traditional Way of Installing Software
Now let’s install Mongo using traditional methods. If we head over to its documentation, it lists the steps required for installing MongoDB, including running the executable, setting it up through the installer, etc. Installing MongoDB this way is a lengthy process.
Now let’s see how docker simplifies this process.
Running Software as a Container
Go to Docker Hub and search for Mongo.
Before we run the command, click on the whale icon, go to settings -> Daemon and set the experimental flag as true.
Once you are done, docker will restart. You can then type in the following command in PowerShell or command prompt.
docker run --name some-mongo -d mongo:4.1
Here, Docker runs a container with the name ‘some-mongo’. You can give it some other name as you please. ‘mongo’ is the image name and ‘4.1’ is its version, or tag.
The output says that a newer image has been downloaded.
Since we used the -d flag, the container is already running in the background. Let’s use the following command to see its output:
docker logs some-mongo
We will see a message ‘waiting for connections on port 27017’, which means our server is up and running.
So open another instance of PowerShell and run the following command:
docker exec -it some-mongo mongo
Then type in the command:
show dbs
This shows that there are no databases created yet on our server. We can now proceed with other MongoDB commands here.
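You can also run commands against the server non-interactively — a sketch, assuming the ‘some-mongo’ container from above is still running:

```shell
# docker exec runs a command inside an existing container.
# Here we ask the mongo shell to list the server's databases and exit,
# instead of opening an interactive session.
docker exec some-mongo mongo --eval "db.adminCommand('listDatabases')"
```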
So we see docker simplifies the process of installing the software.
What Happens Behind the Screens?
Your operating system can be divided into two major portions: kernel and user space. The kernel has control over the hardware and consists of drivers, etc. Everything other than the kernel — our applications, OS apps, and the libraries they require — falls under user space. User space accesses the hardware through the kernel.
Traditionally, when we install software, we simply install the application and use the libraries already present in the user space. But with the containerization approach, when an image is created, it contains the application plus the libraries and dependencies required for it to run. Hence, the application is independent of the resources that the host's user space provides.
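You can see the shared-kernel part for yourself — a quick sketch, assuming Docker with Linux containers and the ‘alpine’ image:

```shell
# uname -r prints the kernel version. Inside the container it reports the
# host's kernel, because containers share the host kernel rather than
# bringing their own (unlike virtual machines).
docker run --rm alpine uname -r
```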
docker run imagename
This will run the specified image. This is equivalent to running an executable in traditional software.
docker --help
This will list all the Docker commands available to you.
docker ps
This will list the currently running containers.
docker ps -a
This will list all containers, both running and exited.
docker stop containername
This will stop the container, i.e., the running software. ‘docker ps -a’ will still list the container as stopped, but ‘docker ps’ will not list it.
docker rm containername
This will remove the container. This is like uninstalling software in the traditional sense. Neither ‘docker ps -a’ nor ‘docker ps’ will list it, since the container is removed.
docker images
This will list the downloaded images. Images are like the executables, as far as traditional methods of installing software are concerned.
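Putting these commands together, a typical container lifecycle looks like this — a sketch using the official ‘nginx’ image as an example:

```shell
docker run -d --name web -p 8080:80 nginx   # download (if needed) and start
docker ps                                   # 'web' shows up as running
docker stop web                             # stop the container
docker ps -a                                # still listed, now as Exited
docker rm web                               # remove the container entirely
docker images                               # the nginx image stays cached locally
```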
For more docker commands, visit here.
How to Upload an Image to Docker Hub?
In the first demo above, we created an image for ASP.NET Core which was stored locally. We will now have a look at how to upload images to Docker Hub. First, you need to create a free account on Docker Hub. Then create a repository.
Log in from PowerShell:
docker login
Then tag the image and push it (replace ‘yourdockerid/yourrepo’ with your own Docker ID and repository name):
docker tag aspnetapp yourdockerid/yourrepo
docker push yourdockerid/yourrepo
In the docker tag command, aspnetapp is the image name, followed by my Docker ID and the repository name. Once this image is pushed to Docker Hub, you can log in there and see the image in a browser.
I hope that this article has taken you one step closer to unraveling the mystery of Docker. Feel free to reach out to me in case you want to discuss further.
This article was originally published on my website.