2025-01-17 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
1. Introduction to Docker
1.1 Virtualization
1.1.1 What is virtualization
In computing, virtualization is a resource-management technique that abstracts physical computer resources, such as servers, networks, memory, and storage, and presents them in a form users can apply more flexibly than the original configuration allows. These virtual resources are not constrained by the installation, location, or physical layout of the underlying hardware. Virtualized resources typically include computing power and data storage.
In production environments, virtualization is mainly used to absorb the excess capacity of high-performance physical hardware, to repurpose and pool aging hardware whose individual capacity is too low, and to make the underlying physical hardware transparent to workloads, thereby maximizing hardware utilization. There are many kinds of virtualization technology, for example: software virtualization, hardware virtualization, memory virtualization, network virtualization (e.g. VIPs), desktop virtualization, service virtualization, and virtual machines.
1.1.2 Virtualization categories
(1) Full virtualization architecture: virtual hardware -> virtual operating system
The virtual machine monitor (hypervisor) runs like an ordinary application on the host's OS. An example is VMware Workstation, a virtualization product that provides virtual hardware.
(2) OS-layer virtualization architecture: the hardware is not virtualized, and all instances share the same operating system kernel.
(3) Hardware-layer virtualization
Virtualization at the hardware layer offers high performance and strong isolation, because the hypervisor runs directly on the hardware, which lets it control each VM's OS access to hardware resources. Products using this approach include VMware ESXi and Xen Server.
A hypervisor is an intermediate software layer running between the physical server and the operating systems. It allows multiple operating systems and applications to share one set of physical hardware, so it can be regarded as a "meta" operating system in a virtualized environment, coordinating all access from virtual machines to the server's physical devices. It is also known as a virtual machine monitor (Virtual Machine Monitor, VMM).
The hypervisor is the core of all virtualization technology. When the server boots and runs the hypervisor, it allocates the right amount of memory, CPU, network, and disk to each virtual machine and loads each virtual machine's guest operating system. Managing software and hardware through this layer is efficient and flexible and makes the most of the hardware's performance. Common products include VMware, KVM, and Xen; OpenStack is commonly used to manage such virtualized infrastructure.
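As a quick practical check of whether a machine can use hardware-layer virtualization (KVM, ESXi, Xen), the CPU flags can be inspected on Linux. This is an illustrative sketch, not part of the original article:

```shell
# Hardware-assisted virtualization shows up as CPU flags on Linux:
# "vmx" for Intel VT-x, "svm" for AMD-V. Count the matching lines.
vt_flags=$(grep -cE 'vmx|svm' /proc/cpuinfo || true)
if [ "$vt_flags" -gt 0 ]; then
    echo "hardware virtualization extensions present"
else
    echo "no vmx/svm flags found (some VMs and container hosts report none)"
fi
```

KVM, for example, cannot provide hardware acceleration when these flags are absent.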
1.2 What is Docker
1.2.1 Container technology is similar to the OS-layer virtualization architecture
In the world of computing, containers have a long and storied history. Containers differ from hypervisor virtualization (HV): hypervisor virtualization runs one or more independent machines on physical hardware through an intermediate layer, whereas a container is a user space that runs directly on the operating system kernel. Container virtualization is therefore also called "operating-system-level virtualization", and container technology lets multiple independent user spaces run on the same host.
Because the "guest" runs on the host's kernel, a container can only run the same or a similar operating system as the underlying host, which may seem inflexible: for example, you can run Red Hat Enterprise Linux on an Ubuntu server, but you cannot run Microsoft Windows on an Ubuntu server.
Compared with the complete isolation of hypervisor virtualization, containers have been considered insecure. Those who dispute this view counter that a full virtual machine carries an entire operating system, which undoubtedly enlarges the attack surface, and that the hypervisor layer itself also carries potential exposure risks.
Despite these limitations, containers are widely deployed for many kinds of applications. Container technology is very popular for hyper-scale multi-tenant service deployment, lightweight sandboxes, and isolated environments that do not require strong security. One of the most common examples is the chroot jail, which creates an isolated directory environment in which to run a process. If a process running inside the jail is compromised, the intruder finds himself trapped in the directory the jail created, lacking the permissions to do further damage to the host.
More recent container technologies include OpenVZ, Solaris Zones, and Linux Containers (LXC). With these technologies, a container is no longer just a simple runtime environment: within its own permission set, a container looks more like a complete host. Docker benefits from modern Linux kernel features such as control groups (cgroups) and namespaces: isolation between the container and the host is more complete, and each container has its own network and storage stack as well as its own resource-management capabilities, so multiple containers on the same host can coexist amicably.
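The namespaces and control groups just mentioned are ordinary kernel facilities that any Linux process can inspect. A minimal look, assuming a Linux host (no Docker required):

```shell
# Every Linux process runs inside a set of namespaces and a cgroup;
# a container is simply a process given its own private set of these.
ls -l /proc/self/ns            # one symlink per namespace: mnt, net, pid, uts, ...
cat /proc/self/cgroup          # the cgroup hierarchy this shell belongs to
ns_count=$(ls /proc/self/ns | wc -l)
echo "current process is a member of $ns_count namespaces"
```

A containerized process would show different namespace IDs from the host's, which is exactly what gives it an independent network and storage view.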
Containers are considered a lean technology because they require limited overhead. Unlike traditional virtualization and paravirtualization, a container needs no emulation layer (emulation layer) or hypervisor layer; it uses the operating system's normal system-call interface. This reduces the overhead of running a single container and allows more containers to run on a host.
Despite their long history, containers were not widely adopted for a long time. A very important reason is the complexity of container technology: containers themselves were complex, hard to install, and hard to manage and automate. Docker was born to change all that.
1.2.2 Container and virtual machine comparison
(1) Essential differences
(2) Differences in use
1.2.3 Characteristics of Docker
(1) Fast to get started
Users can "Dockerize" their programs in just a few minutes. Docker relies on a copy-on-write (copy-on-write) model, so modifying an application is very fast: code can be changed at will.
You can then create a container to run the application. Most Docker containers start in less than a second. Because the hypervisor overhead is removed, Docker containers perform well, and more containers can run on the same host, letting users make the fullest possible use of system resources.
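The sub-second start-up claim is easy to try for yourself, assuming the docker CLI and a running daemon are available ("alpine" here is just a small example image):

```shell
# Time a trivial container start; guarded so the sketch degrades
# gracefully on machines without Docker installed.
if command -v docker >/dev/null 2>&1; then
    time docker run --rm alpine true   # typically well under a second once the image is cached
else
    echo "docker not installed; skipping the timing demo"
fi
demo_done=yes
```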
(2) Logical separation of responsibilities
With Docker, developers only need to care about the application running inside the container, while operators only need to care about how to manage the container. Docker is designed to strengthen the consistency between the development environment where code is written and the production environment where the application is deployed, reducing the familiar blame game of "it works in the test environment, so if it breaks in production it must be an operations problem."
(3) Fast and efficient development life cycle
One of Docker's goals is to shorten the cycle from development and testing to deployment and running online, making applications portable, easy to build, and easy to collaborate on. (More colloquially, Docker is like a shipping box that can hold many items: if you need those items, you can take the whole box away instead of removing them one at a time.)
(4) Encourages service-oriented architecture
Docker also encourages service-oriented and microservice architectures. Docker recommends that a single container run only one application or process, which yields a distributed application model in which an application or service is represented as a series of interconnected containers. This makes it very easy to deploy, scale, and debug applications, and improves their introspectability. (Of course, you can also run multiple applications in one container.)
1.3 Docker components
1.3.1 Docker client and server
Docker is a client-server program. The Docker client simply sends requests to the Docker server, or daemon, which does all the work and returns the results. Docker provides a command-line tool, docker, along with a complete RESTful API. You can run the Docker daemon and client on the same host, or connect from a local Docker client to a remote Docker daemon running on another host.
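Because the daemon exposes a RESTful API, the docker command-line tool is only one possible client; curl, for instance, can speak to the daemon directly. A hedged sketch, assuming a local daemon listening on the default Unix socket /var/run/docker.sock:

```shell
# Query the daemon's version endpoint over its Unix socket; skipped
# when no local daemon socket exists.
if [ -S /var/run/docker.sock ]; then
    curl --silent --unix-socket /var/run/docker.sock http://localhost/version
else
    echo "no docker daemon socket found; skipping the API demo"
fi
api_demo=done
```

The same endpoint answers over TCP when the daemon is configured for remote clients, which is what makes the local-client/remote-daemon setup described above work.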
1.3.2 Docker images
Images are the cornerstone of Docker: users run their own containers based on images. Images are also the "build" part of the Docker life cycle. An image is a layered structure based on a union file system, built step by step from a series of instructions, for example:
Add a file; execute a command; open a port.
You can also think of an image as the "source code" of a container. Images are small, highly "portable", and easy to share, store, and update.
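The instruction list above maps directly onto a Dockerfile, where each instruction produces a new image layer. A minimal hypothetical example (the file name, base image, and nginx content are illustrative, not from the article):

```shell
# Write an example Dockerfile; each instruction below becomes one layer
# in the union file system when the image is built.
cat > Dockerfile.example <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y nginx
COPY index.html /var/www/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF
echo "wrote $(grep -c '^' Dockerfile.example) instructions"
# To build the image (requires a running daemon):
#   docker build -t example/web -f Dockerfile.example .
```

Because layers are cached, rebuilding after changing only index.html reuses the earlier layers, which is one reason image builds and updates are fast.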
1.3.3 Registry (registry)
Docker uses registries to store users' built images. Registries come in public and private forms. Docker operates a public registry called Docker Hub. Users can register an account on Docker Hub to share and store their own images. (Note: downloading images from Docker Hub can be very slow, and you can build your own private registry instead.)
1.3.4 Docker container
Docker helps you build and deploy containers; you just need to package your application or service into a container. Containers are started from an image, and one or more processes can run inside a container. We can think of the image as the build or packaging phase of the Docker life cycle, and the container as the start-up or execution phase. Once a container is started, we can log in to it and install any software or services we need.
So the Docker container is:
An image format; a series of standard operations; an execution environment.
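The packaging and execution phases described above correspond to a handful of CLI calls. A sketch assuming a working daemon (the names "web" and "nginx:alpine" are illustrative):

```shell
# Walk the image -> container life cycle; guarded so the sketch is a
# no-op on machines without a reachable Docker daemon.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker pull nginx:alpine                 # obtain an image (packaging phase)
    docker run -d --name web nginx:alpine    # start a container from it (execution phase)
    docker exec web nginx -v                 # run a process inside the running container
    docker rm -f web                         # stop and remove the container
else
    echo "docker daemon unavailable; life cycle shown as commands only"
fi
lifecycle=sketched
```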
Docker borrows the concept of the standard shipping container. Shipping containers carry goods all over the world, and Docker applies this model to its own design, with one key difference: shipping containers transport goods, while Docker transports software.
Like a shipping container, Docker does not care what is inside the container, whether it is a web server, a database, or an application server. All containers "load" their contents in the same way.
Docker also does not care where you ship the container: you can build it on your laptop, upload it to a registry, download it to a physical or virtual server for testing, and then deploy it to a specific host. Like standard shipping containers, Docker containers are easy to replace, stackable, easy to distribute, and as general-purpose as possible.
With Docker, we can quickly build an application server, a message bus, a set of utilities, a continuous integration (CI) test environment, or any application, service, or tool. We can build a complete test environment locally, or we can quickly replicate a complex application stack for production or development.
Summary
That is the whole content of this article. I hope it offers some reference and learning value for your study or work. Thank you for your support.