Docker Tutorial Series: Part 1
In this Docker tutorial series, you will learn the basics and become a Docker rockstar.
Table of contents
Part 1 covers an introduction to Docker; the following is the outline for the rest of the series.
Introduction to Docker: Learn what Docker is and how it can be used to containerize applications.
Installing Docker: Understand how to install Docker on your operating system.
Docker Basics: Learn the basics of Docker, including Dockerfiles, images, and containers.
Creating a Docker Image: Walk through the process of creating a Docker image using a Dockerfile.
Running Docker Containers: Understand how to run Docker containers from your images.
Networking in Docker: Understand how Docker networking works and how to connect containers to networks.
Docker Compose: Learn how to use Docker Compose to manage multi-container applications.
Docker Swarm: Understand how to use Docker Swarm to orchestrate and manage container clusters.
Docker Best Practices: Learn the best practices for using Docker, including security considerations, using environment variables, and container optimization.
Deploying Applications with Docker: Understand how to deploy your applications using Docker, including using Docker with popular cloud providers.
Introduction to Docker
Docker is a platform that enables software developers to build, package, and run applications in containers. Containers are lightweight and portable environments that contain all the dependencies and configuration an application needs to run, including libraries and binaries. Since its debut in March 2013, Docker has become the de facto standard for containerizing application workloads and has dramatically improved how we package and run applications at scale. It works on the principle of build once, run anywhere (BORA), much like Java's WORA (write once, run anywhere).
Let's expand on the BORA principle. Say two developers are working on the same application and want to run it locally so they can develop faster. Developer A gets the application up and running on their workstation and shares the steps with Developer B. When Developer B follows those steps, the application fails to start. Why? Simply because Developer A forgot to include an essential instruction: setting the required environment variables.
This is the classic "it works on my machine" problem: the application runs for one person but not for anyone else.
This is where Docker helps: if your application is packaged with Docker, it behaves the same regardless of the environment, whether dev, staging, or prod.
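To make this concrete, here is a minimal, hypothetical Dockerfile for a small Node.js app. The file names, environment variable, and base image are illustrative, but the point is that every dependency and setting Developer A relied on is written down and ships with the image, so Developer B only needs Docker to get the same result.

```
# Start from a known base image so everyone uses the same runtime version
FROM node:18-alpine

# Work inside a dedicated directory in the image
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install

# Copy the rest of the application source
COPY . .

# The "forgotten" environment variable now lives in the image itself (illustrative)
ENV APP_MODE=development

# Command that starts the application
CMD ["node", "index.js"]
```

Anyone with Docker installed can then build and run the image (for example with docker build -t my-app . followed by docker run my-app, where my-app is just an example tag) and get the same behavior, regardless of what is installed on their workstation.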
Docker provides a simple and consistent way to create, manage, and deploy containers, making it a popular choice for many developers and organizations. With Docker, you can easily package your applications, share them with others, and deploy them to various platforms, from laptops and servers to cloud-based infrastructure.
Why Containers, though?
Containers are lightweight and portable, so you can quickly deploy them to different environments, including on-premises servers, public clouds, or other platforms. Containers also provide isolation, ensuring the application is protected from other applications running on the same host. This isolation approach can streamline the software development and deployment process and reduce the risk of configuration errors and conflicts. Finally, containers can be easily scaled up or down, making them ideal for applications that handle varying traffic levels.
Containers are like boxes that cannot see anything outside their own environment. Each container has its own IP address, hostname, and filesystem. Imagine running two applications that need different versions of Java, or that depend on incompatible versions of tools and libraries. That would be a nightmare on a single VM, but it is straightforward with containers because they are isolated from each other by default. On VMs, we typically work around these conflicts by running a single application per VM, which wastes resources.
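As a rough illustration of this isolation, the two commands below (using the publicly available Eclipse Temurin images as an example) run two different Java versions side by side on the same host, something that would be painful to set up on a single VM:

```
# Run a throwaway container with Java 11 and print its version
docker run --rm eclipse-temurin:11 java -version

# Run another throwaway container with Java 17 on the same host
docker run --rm eclipse-temurin:17 java -version
```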
As we see in the following image, the underlying infrastructure sits at the bottom, which could be either a physical machine or a VM. On top of it sits the operating system layer, and above that a container engine that is responsible for running containers on the host machine. At the top, two different applications run inside two completely isolated containers.
Why Docker?
Docker has become the de facto standard for containerizing application workloads. According to this StackOverflow survey, Docker is rated the #1 most loved and the #2 most wanted platform.
Using Docker to containerize applications provides several benefits. But what is it that makes developers love Docker? Let's discuss.
Cost savings
This is a no-brainer: Docker lets you do more with your existing infrastructure. Compared to VMs, Docker containers use far fewer hardware resources, so the return on investment is high.
Security
Docker ensures that applications are isolated from each other, which by itself makes Docker a natural choice in an enterprise environment.
Easier Development
With Docker, you don't deal with the "works on my machine" problem because you get a consistent environment from local development to production. It is a significant productivity boost for developers: they can focus on coding and worry less about infrastructure.
Docker images are versioned, so you can easily roll back to a previous image if something goes wrong.
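For example, if you tag each build with an explicit version (the image name and tags below are made up for illustration), rolling back is just a matter of running the previous tag again:

```
# Build and tag a new release
docker build -t myapp:2.0.0 .

# If 2.0.0 misbehaves, start the previous, known-good version again
docker run -d --name myapp myapp:1.9.3
```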
Suitable for various use cases
We can use Docker for various scenarios. In a big organization, having one common dev environment for all the teams working with different tools and technologies is difficult, but with Docker we can standardize these environments. You can define your environment in a Dockerfile, commit it to a code repository, and developers can then use it to create identical development environments, as sketched below.
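A rough sketch of that workflow, assuming the team's Dockerfile lives at the root of the repository (the image name and mount path are only examples):

```
# Build the shared development image from the Dockerfile in the repo
docker build -t myteam/dev-env .

# Start an interactive container with the current project mounted inside it
docker run -it --rm -v "$(pwd)":/workspace -w /workspace myteam/dev-env
```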
For stateless microservices, it's natural to run them as containers. We can quickly deploy and scale them while making better use of the existing hardware.
We all rely on third-party applications such as PostgreSQL or Nginx when developing our own. When these are available as container images, they are easy to run because the image already contains all the required dependencies. You don't have to install them manually or work through their documentation, which saves a lot of time, and a container image is available for most common tools such as databases and web servers. For example:
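The official PostgreSQL and Nginx images can each be started with a single command (the container names, password, and version tags below are only examples):

```
# Start PostgreSQL in the background; the official image is configured via environment variables
docker run -d --name dev-postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:15

# Start Nginx and expose it on http://localhost:8080
docker run -d --name dev-nginx -p 8080:80 nginx:latest
```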
Conclusion
In this part of the Docker learning series, you have learned about the benefits of containers and Docker. In the next part, you will learn how to install Docker.