I want to start the Docker series with this article. The content below comes from the official Docker documentation; there is probably no more vivid description of Docker than the official docs.
Docker Overview
Docker is an open platform for developing, publishing and running applications. Docker enables you to separate applications from infrastructure for fast software delivery. With Docker, you can manage your infrastructure just like you manage apps. By leveraging Docker's approach to quickly releasing, testing, and deploying code, you can significantly reduce the latency between writing code and running it in production.
Docker platform
Docker provides the ability to package and run applications in containers. Isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight because they do not need the extra load of a hypervisor but run directly within the host machine's kernel. This means you can run more containers on a given hardware combination than you could with virtual machines. You can even run Docker containers on hosts that are themselves virtual machines!
Docker provides tools and platforms to manage the lifecycle of containers:
Develop applications and their supporting components using containers.
Containers become the unit for distributing and testing applications.
When ready, deploy the application as a container or orchestrated service into production. It's the same whether your production environment is an on-premises data center, cloud provider, or a hybrid of both.
Docker Engine
Docker Engine is a client-server application with the following main components:
A server, a long-running daemon process (the dockerd command).
A REST API, which specifies the interfaces that programs can use to talk to the daemon and tell it what to do.
A command-line interface (CLI) client (the docker command), as shown in Figure 1.1.
Figure 1.1
The CLI uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI.
The daemon creates and manages Docker objects, such as images, containers, networks, and volumes.
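As a rough illustration of those object types, the CLI sketch below lists and creates a few of them; the network and volume names (app-net, app-data) are invented for this example.
# docker image ls (list the images stored locally)
# docker container ls -a (list containers, including stopped ones)
# docker network create app-net (ask the daemon to create a user-defined network)
# docker volume create app-data (ask the daemon to create a named volume)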
Note: Docker is licensed under the Apache 2.0 Open Source License.
See Docker architecture below for more details.
What can I do with Docker?
Deliver your applications quickly and consistently.
Docker streamlines the development lifecycle by allowing developers to work in standardized environments using local containers that provide your applications and services. Containers are great for continuous integration and continuous delivery (CI/CD) workflows.
Consider the following example scenario (a command-level sketch follows the list):
Developers write code locally and share their work with colleagues using Docker containers.
They use Docker to push applications into test environments and perform automated and manual testing.
When developers discover bugs, they can fix them in the development environment and redeploy them to the test environment for testing and validation.
Once testing is complete, pushing the updated image into production makes it easy to get fixes for customers.
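As a hedged, command-level sketch of that workflow (the image name example.com/myapp and its tag are hypothetical, and each environment is collapsed into a single command):
# docker build -t example.com/myapp:1.0 . (the developer builds an image from the project's Dockerfile)
# docker push example.com/myapp:1.0 (and shares it through a registry that colleagues and CI can reach)
# docker pull example.com/myapp:1.0 (the test environment pulls exactly the same image)
# docker run -d example.com/myapp:1.0 (and runs it for automated and manual testing)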
Responsive deployment and scaling
Docker's container-based platform allows highly portable workloads. Docker containers can run on a developer's local laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of environments.
Docker's portability and lightweight features also make it easy to dynamically manage workloads and scale applications and services according to business needs, almost in real time.
Run more workloads on the same hardware
Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so you can use more compute power to achieve your business goals. Docker is ideal for high-density environments and small to medium-sized deployments where you need to do more with less.
Docker architecture
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which is responsible for building, running, and distributing Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface, as shown in Figure 1.2.
Figure 1.2
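As a small illustration of that split, the two commands below (a sketch) ask the daemon for the same information, once through the CLI and once directly against the REST API over the default UNIX socket; the socket path and the need for an API version prefix can vary between installations.
# docker ps (the CLI asks the daemon for the list of running containers)
# curl --unix-socket /var/run/docker.sock http://localhost/containers/json (the same request sent straight to the daemon's REST API)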
Docker daemon
Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. Daemons can also communicate with other daemons to manage Docker services.
Docker client
The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
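For example, the same client binary can be pointed at different daemons. The host name build-server below is a placeholder, and the remote daemon must be configured to accept these connections.
# docker ps (talks to the local daemon)
# docker -H ssh://user@build-server ps (the same client talking to a remote daemon over SSH)
# DOCKER_HOST=tcp://build-server:2375 docker ps (or the daemon can be selected through an environment variable)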
Docker registry
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can also run your own private registry. If you use Docker Datacenter (DDC), it includes Docker Trusted Registry (DTR).
When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
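A short sketch of both directions, assuming a private registry reachable at the hypothetical address registry.example.com:5000:
# docker pull ubuntu (pull an image from Docker Hub, the default registry)
# docker tag ubuntu registry.example.com:5000/my-ubuntu (re-tag it so it points at the private registry)
# docker push registry.example.com:5000/my-ubuntu (push it to that private registry)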
Docker objects
When you use Docker, you may be creating and using images, containers, networks, volumes, plugins, and other objects.
Images
An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image that is based on ubuntu, installs the Apache web server and your application, and contains the configuration details needed to make your application run.
You can create your own images, or you can use only those created by others and published in a registry such as Docker Hub. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast when compared with other virtualization technologies.
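A minimal sketch of such a Dockerfile for the ubuntu-plus-Apache example; the package name, paths, and tag are common choices, not taken from the original text.
# A hypothetical Dockerfile: each instruction below becomes one layer in the image.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y apache2
COPY ./site /var/www/html
CMD ["apache2ctl", "-D", "FOREGROUND"]
Building it would then look like the following, where my-apache-app is an arbitrary tag:
# docker build -t my-apache-app .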
Containers
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
By default, containers are relatively well isolated from other containers and their hosts. You can control how isolated a container's network, storage, or other underlying subsystems are from other containers or hosts.
A container is defined by its image as well as any configuration options you provide when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear, so plan to keep useful data in persistent storage when you design your containers.
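A hedged sketch of that lifecycle; the container name demo and the volume name demo-data are invented, and the named volume is where the data you want to keep would live.
# docker create --name demo -v demo-data:/data ubuntu sleep infinity (create, but do not start, a container with a named volume mounted at /data)
# docker start demo (start it)
# docker stop demo (stop it)
# docker commit demo demo-image:snapshot (create a new image from the container's current state)
# docker rm demo (delete the container; whatever was written to /data survives in the demo-data volume)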
docker run command example
The following command runs an ubuntu container, attaches interactively to your local command-line session, and runs /bin/bash.
# docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming you are using the default registry configuration):
If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as though you had run docker pull ubuntu manually.
Docker creates a new container, as though you had run the docker container create command manually.
Docker allocates a read-write filesystem to the container as its final layer. This allows a running container to create or modify files and directories in its local filesystem.
Docker creates a network interface to connect the container to the default network, since you did not specify any networking options. This includes assigning an IP address to the container. By default, containers can connect to external networks using the host machine's network connection.
Docker starts the container and executes /bin/bash. The container runs interactively and is attached to your terminal.
When you type exit to terminate the /bin/bash command, the container stops, but it is not removed. You can start it again or remove it.
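For reference, the follow-up commands might look like the following; replace <container-id> with the ID or name shown by docker ps -a.
# docker ps -a (the stopped container still appears in this list)
# docker start -ai <container-id> (restart it and reattach your terminal)
# docker rm <container-id> (or remove it for good)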
Services
Services allow you to scale containers across multiple Docker daemons, which all work together as a swarm with multiple managers and workers. Each member of a swarm is a Docker daemon, and all the daemons communicate using the Docker API. A service lets you define the desired state, such as the number of replicas of the service that must be available at any given time. By default, the service is load-balanced across all worker nodes. To the consumer, the Docker service appears to be a single application. Docker Engine supports swarm mode in Docker 1.12 and higher.
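A minimal sketch of declaring a desired state with a service, assuming swarm mode is available; the service name web and the nginx image are illustrative choices.
# docker swarm init (turn this daemon into a swarm manager)
# docker service create --name web --replicas 3 nginx (ask the swarm to keep 3 replicas of the service running)
# docker service ls (compare the desired and actual replica counts)
# docker service scale web=5 (change the desired state to 5 replicas)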
The underlying technology
Docker is written in Go and leverages some of the advanced features of the Linux kernel to provide functionality.
Namespaces
Docker uses namespaces technology to provide containers with separate workspaces. When you run a container, Docker creates a set of namespaces for that container.
These namespaces provide a layer of isolation, and each resource of the container runs in a separate namespace, with access limited to that namespace.
Docker Engine uses the following namespaces on Linux (an example of inspecting them follows this list):
The pid namespace: process isolation (PID: Process ID).
The net namespace: managing network interfaces (NET: Networking).
The ipc namespace: managing access to IPC resources (IPC: InterProcess Communication).
The mnt namespace: managing filesystem mount points (MNT: Mount).
The uts namespace: isolating kernel and version identifiers (UTS: Unix Timesharing System).
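One way to observe these namespaces from the host is sketched below; the container name ns-demo is invented, and the inspect format string pulls the container's main process ID so its /proc entry can be examined.
# docker run -d --name ns-demo ubuntu sleep infinity (start a long-running container)
# PID=$(docker inspect -f '{{.State.Pid}}' ns-demo) (find that container's main process on the host)
# ls -l /proc/$PID/ns (each entry here, such as pid, net, ipc, mnt, and uts, is a separate namespace)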
Control groups
Docker Engine on Linux also relies on another technology called control groups (cgroups). A cgroup limits an application to a specific set of resources. Control groups allow Docker Engine to share available hardware resources with containers and optionally enforce limits and constraints. For example, you can limit the memory available to a specific container.
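For example, the run flags below use cgroups to cap a container's resources; the container name and the exact limits are arbitrary.
# docker run -d --name limited -m 512m --cpus 1.5 ubuntu sleep infinity (cap the container at 512 MB of memory and 1.5 CPUs)
# docker stats --no-stream limited (the memory limit reported here reflects the cgroup setting)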
Union file systems
Union file systems (UnionFS) are file systems that operate by creating layers, making them very lightweight and fast. Docker Engine uses UnionFS to provide the building blocks for containers. Docker Engine can use multiple UnionFS variants, including AUFS, btrfs, vfs, and DeviceMapper.
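To see the layering in practice, the two commands below (a sketch) show which storage driver the local engine uses and the read-only layers that make up an image:
# docker info --format '{{.Driver}}' (prints the storage driver this engine is using)
# docker history ubuntu (lists the layers that make up the ubuntu image)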
Container format
Docker engine combines namespaces, control groups, and UnionFS into a package called container format. The default container format is libcontainer. In the future, Docker may support other container formats by integrating with technologies such as BSD Jails or Solaris Zones.