1. Why do you need a container?
The following figure shows a more traditional software architecture:
[Figure: a traditional deployment, with war packages running in Tomcat on virtual machines behind Nginx]
Anyone who has worked with Java will recognize the architecture shown above. We usually package an application as a war file, deploy it into a Tomcat container running on a virtual machine (VM), and then configure Nginx load-balancing rules to forward user requests to one of the Tomcat instances. Deploying applications directly on hosts or virtual machines like this has the following problems:
Poor portability
You need to install the runtime environment an application requires in advance, such as the JDK or JRE for Java applications. Redeploying an application means reinitializing the environment and then installing the application again, which is tedious. Moreover, if one application needs JDK 7 and another needs JDK 8, it is hard to satisfy both on the same host.
Poor maintainability
If something goes wrong with the Tomcat application itself or with the virtual machine's operating system, human intervention is required, such as adjusting Nginx forwarding rules or restarting services.
Poor scalability
Application load fluctuates and is rarely stable. When the load is high we need to add application instances, and when it drops we want to remove them, which is slow and manual in this kind of setup.
Unable to isolate resources
If multiple applications are deployed on the same virtual machine, the applications and their processes can interfere with one another.
...
Let's take a look at how we solve these problems step by step.
The first is containerization, and the solution we choose is Docker.
Docker packages an application's dependencies together with the program into a container image; running the image creates a virtual container. The program runs inside this container as if it were running on a real physical machine, and each container's resources are isolated from the others, with its own file system, so processes in different containers do not affect each other. The essential difference between virtual machine-based and container-based applications is that a VM virtualizes the hardware and runs a full guest operating system per application, while containers share the host kernel and only package the application and its dependencies.
2. Introduction to Docker
2.1 Docker architecture
Docker is a client-server application that consists of the following parts:
The server is the dockerd daemon, which listens for REST API requests and manages Docker objects such as images, containers, storage volumes, and networks.
The command-line client (CLI), the docker command we type on the console, controls or integrates with the Docker daemon by calling its REST API.
Image registries (Docker Registries) are used to store Docker images.
[Figure: Docker architecture, showing the docker client, the dockerd daemon, and the image registry]
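To make the client-server split concrete, the daemon's REST API can also be called directly; the following is a minimal sketch, assuming a default local installation where dockerd listens on the Unix socket /var/run/docker.sock and that your curl supports --unix-socket:

# Ask the daemon for its version over the local Unix socket (roughly what `docker version` does under the hood)
curl --unix-socket /var/run/docker.sock http://localhost/version
# List containers via the REST API, equivalent to `docker ps`
curl --unix-socket /var/run/docker.sock http://localhost/containers/json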
2.2 Docker objects
IMAGES
Images are generally read-only files, built from instructions, that are used to create containers. Usually an image is based on another image with some additional instructions added. An image can be generated from a file called a Dockerfile, and each instruction line in the Dockerfile produces a layer. When the Dockerfile changes and the image needs to be rebuilt, only the changed layers are regenerated, which keeps images lightweight and builds fast.
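To see this layering in practice, you can list the layers of a local image with docker history; a quick sketch (nginx:latest is only used as an example image):

# Pull an example image and show the layers produced by the instructions in its Dockerfile
docker pull nginx:latest
docker history nginx:latest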
CONTAINERS
A container is a running instance created from an image file. You can create, start, stop, move, or delete a container through the REST API or the docker client.
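As a small sketch of that lifecycle through the docker client (the nginx image and the container name web are just examples):

# Create and start a container in the background, mapping host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx:latest
# List running containers, then stop, restart, and finally remove the container
docker ps
docker stop web
docker start web
docker rm -f web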
SERVICE
To manage and scale an application across multiple containers and hosts, services are used together with Docker Swarm.
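A minimal sketch of what a service looks like, assuming a single-node swarm and using nginx purely as an example:

# Turn the current Docker host into a single-node swarm manager
docker swarm init
# Run an nginx service with three replicas, published on port 8080
docker service create --name web --replicas 3 -p 8080:80 nginx:latest
# Inspect and then scale the service
docker service ls
docker service scale web=5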
2.3 Underlying technology
Docker is written in Go and relies on several features of the Linux kernel to implement its functionality, mainly the following:
Namespaces
Docker provides isolated workspaces (Workspace) through namespaces. When you run a container, Docker creates several different types of namespaces for it, mainly the following (see the short sketch after this list):
Pid namespace: process isolation
Net namespace: managing network interfaces
Ipc namespace: managing access to inter-process communication resources (IPC: Inter-Process Communication)
Mnt namespace: managing file system mount points
Uts namespace: isolating kernel and version identifiers (UTS: Unix Timesharing System)
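A small sketch that makes the pid and uts isolation visible (alpine is only used here because it is a tiny example image):

# Inside the container's pid namespace the process list is almost empty and the main process is PID 1
docker run --rm alpine ps
# The uts namespace gives the container its own hostname, independent of the host
docker run --rm --hostname demo alpine hostname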
CGroups (Control Groups)
Docker uses cgroups to restrict a container to a specific share of resources. For example, Docker can limit how much CPU and memory a container may use.
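For example, such limits can be set directly on docker run; a rough sketch (the limit values and the nginx image are arbitrary examples):

# Cap the container at half a CPU and 256 MB of memory via cgroups
docker run -d --name limited --cpus 0.5 --memory 256m nginx:latest
# Confirm the memory limit (in bytes) and watch live resource usage
docker inspect --format '{{.HostConfig.Memory}}' limited
docker stats --no-stream limited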
UnionFS (Union File System)
A union file system is a file system that runs on top of other file systems; by stacking different layers it keeps the container file system lightweight and fast. Several similar file systems exist, including AUFS, btrfs, vfs, and DeviceMapper.
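Which storage driver your installation actually uses can be read from docker info; a quick sketch (overlay2 is a common value, but yours may differ):

# Show the storage driver backing container file systems (e.g. overlay2, aufs, devicemapper)
docker info --format '{{.Driver}}'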
3. Installation and deployment of Docker
The following commands are for CentOS 7; other operating systems will differ slightly.
yum install docker: install Docker and its dependencies via yum
systemctl enable docker: start the Docker service automatically at boot
systemctl start docker: start the Docker service now
After performing the above operations, the Docker service is running, and you can check the Docker version and related information with the docker version and docker info commands.
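As an optional extra sanity check, assuming the host can reach Docker Hub, you can run the tiny hello-world test image:

# Pulls a small test image and runs it; a success message confirms the daemon, client, and registry access all work
docker run --rm hello-world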
4. Using Docker
4.1 Dockerfile files
We mentioned earlier that Docker can package an application into an image, so how is an image file generated? This is what the Dockerfile is for. It is a text file that describes how the image is configured, and Docker builds the binary image file from it. The following is an example Dockerfile:
# The image inherits from the official nginx image; the colon introduces a tag, here latest, meaning the latest version
FROM nginx:latest
# Copy the files under the _book directory into /var/www/public/ in the image
COPY _book /var/www/public/
COPY nginx_app.conf /etc/nginx/conf.d/nginx_app.conf
# Expose port 8080 of the container and allow external connections to this port
EXPOSE 8080
# After the container starts, execute the command nginx -g 'daemon off;'
CMD ["nginx", "-g", "daemon off;"]
4.2 Create an image file
Once you have the Dockerfile file, you can use the docker build command to create the image file.
docker build -t zcloud-document:0.0.1 .
docker image ls
If the build succeeds, you can see the newly generated image zcloud-document.
4.3 Generate a container
# Generate a container from the image and map the ports
docker run -p 8080:8080 -it zcloud-document:0.0.1
docker ps
# Create a new image tag that points to the original image
docker tag zcloud-document:0.0.1 10.0.0.183:5000/zcloud/zcloud-document:0.0.1
# Push it to the private image repository
docker push 10.0.0.183:5000/zcloud/zcloud-document:0.0.1
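For reference, the private repository at 10.0.0.183:5000 used above would typically be an instance of the official registry image; a hedged sketch of starting one (the port and address depend on your own environment):

# Run a private image registry on port 5000 using the official registry image
# (a plain-HTTP registry like this usually also has to be added to the daemon's insecure-registries setting)
docker run -d -p 5000:5000 --name registry registry:2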
You can look up other Docker commands on your own; there are many articles on the Internet. Reference: Get started with Docker (https://docs.docker.com/get-started/).