I. Introduction
Introduction to Rancher
Official source: https://www.cnrancher.com/
Rancher is an open-source enterprise container management platform. With Rancher, enterprises no longer need to assemble a container service platform from scratch out of a collection of open-source tools: Rancher provides a full-stack platform for deploying and managing Docker and Kubernetes in production.
Rancher consists of the following four parts:
1.1. Infrastructure orchestration
Rancher can use Linux hosts from any public or private cloud; a host may be a virtual machine or a physical machine. Rancher only requires that a host have CPU, memory, local disk, and network resources. From Rancher's perspective, a cloud vendor's host and your own physical machine are the same.
Rancher implements a layer of flexible infrastructure services for running containerized applications, covering networking, storage, load balancing, DNS, and security. These infrastructure services are themselves deployed as containers, so the same Rancher infrastructure services can run on any Linux host.
1.2. Container orchestration and scheduling
Many users choose to run containerized applications on an orchestration and scheduling framework. Rancher supports all of the current mainstream orchestration and scheduling engines, such as Docker Swarm, Kubernetes, and Mesos. The same user can create Swarm or Kubernetes clusters and manage applications with the native Swarm or Kubernetes tools.
In addition to Swarm, Kubernetes, and Mesos, Rancher also ships its own container orchestration and scheduling engine, Cattle. Cattle is widely used to orchestrate Rancher's own infrastructure services, as well as to configure, manage, and upgrade Swarm, Kubernetes, and Mesos clusters.
1.3. App Store
From the app store, Rancher users can deploy an application composed of multiple containers with one click, manage the deployed application, and run automated upgrades when a new version becomes available. Rancher provides a community-maintained app store that includes a range of popular applications, and users can also create their own private app stores.
1.4. Enterprise-level access control
Rancher supports pluggable user authentication, including Active Directory, LDAP, GitHub, and other methods. Rancher supports environment-level role-based access control (RBAC), so a user's or group's access to a development or production environment can be configured through roles.
[Figure: the main components and functions of Rancher]
Introduction to Kubernetes
1.1 Basic concepts
Kubernetes (often written as "K8s") is Google's open-source container cluster management system. Its design goal is to provide a platform for automated deployment, scaling, and operation of application containers across clusters of hosts. Kubernetes usually works together with Docker, coordinating multiple hosts that run Docker containers. Besides Docker, Kubernetes also supports rkt (Rocket), another container runtime.
Key features (see the kubectl sketch after this list):
Automated container deployment and replication
Scale containers up or down at any time
Organize containers into groups and provide load balancing between them
Quickly upgrade or roll back container versions
Elastic scaling; a failed container is replaced automatically
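Several of these features map directly to kubectl commands. A minimal sketch, assuming a Deployment named nginx already exists (the name and image tag are illustrative, not from the original):
[root@master ~]# kubectl scale deployment nginx --replicas=5   # scale out at any time
[root@master ~]# kubectl set image deployment/nginx nginx=nginx:1.14   # rolling update
[root@master ~]# kubectl rollout undo deployment/nginx   # roll back to the previous version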
1.2 Architecture diagram
[Figure: Kubernetes architecture diagram]
1.3 Components
1.3.1 Master
The Master node consists of four main modules: APIServer, scheduler, controller manager, and etcd.
APIServer: APIServer provides the RESTful Kubernetes API and is the unified entry point for management commands. Any create, delete, modify, or query operation on a resource goes through APIServer, which then persists it to etcd. As shown in the architecture diagram, kubectl (the client-side tool provided by Kubernetes, internally a wrapper around the Kubernetes API) talks directly to APIServer (see the sketch after this list of components).
Scheduler: the scheduler's responsibility is clear-cut: it dispatches pods to suitable Nodes. Viewed as a black box, its input is a pod and a list of Nodes, and its output is a binding between the pod and one Node, i.e. the Node the pod will be deployed to. Kubernetes provides default scheduling algorithms, but it also exposes an interface so users can define scheduling algorithms of their own.
Controller manager: if APIServer does the "front office" work, controller manager runs the "back office". Each resource generally has a corresponding controller, and controller manager is responsible for managing these controllers. For example, when we create a pod through APIServer, APIServer's task is done as soon as the pod is created; from then on it is controller manager's job to ensure that the pod's state always matches what we asked for.
Etcd: etcd is a highly available key-value store that Kubernetes uses to persist the state of every resource, underpinning the RESTful API.
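For example, every kubectl call is ultimately an HTTP request to APIServer, and the state it returns comes from etcd. A minimal sketch, assuming kubectl is already configured against the cluster (port 8001 is just kubectl proxy's default):
[root@master ~]# kubectl get pods   # goes through APIServer
[root@master ~]# kubectl proxy --port=8001 &   # open an authenticated tunnel to APIServer
[root@master ~]# curl http://127.0.0.1:8001/api/v1/namespaces/default/pods   # the same data, raw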
1.3.2 Node
Each Node consists of three main modules: kubelet, kube-proxy, and the runtime.
Runtime is the container runtime; Kubernetes currently supports both docker and rkt containers.
Kube-proxy: this module implements service discovery and reverse proxying in Kubernetes. For reverse proxying, kube-proxy supports TCP and UDP forwarding and, by default, distributes client traffic across the set of backend pods behind a service using a round-robin algorithm. For service discovery, kube-proxy uses etcd's watch mechanism to track changes to service and endpoint objects in the cluster, and maintains the mapping from each service to its endpoints, so that IP changes in the backend pods are invisible to clients. Kube-proxy also supports session affinity (a small inspection sketch follows this list).
Kubelet: kubelet is the Master's agent on each Node and the most important module on the Node. It maintains and manages all containers on the Node, but it does not manage containers that were not created through Kubernetes. In essence, it is responsible for reconciling the running state of pods with the desired state.
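To watch kube-proxy's work from the outside, you can list the service-to-endpoint mappings it maintains and, assuming it runs in iptables mode, the NAT rules it programs:
[root@master ~]# kubectl get endpoints   # service -> pod IP:port mappings
[root@master ~]# iptables -t nat -L KUBE-SERVICES -n   # only present in iptables mode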
1.3.3 Pod
Pod is the smallest schedulable unit in K8s. Each Pod runs one or more closely related business containers, which share the IP and volumes of a special Pause container. This long-lived Pause container serves as the Pod's root container, and its status represents the status of the whole container group. Once created, a Pod is stored in etcd, then scheduled by the Master and bound to a Node, where kubelet instantiates it.
Each Pod is assigned its own Pod IP; a Pod IP plus a containerPort forms an Endpoint.
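As a hedged sketch, a minimal Pod definition might look like this (the name nginx-pod, the app: nginx label, and the nginx:1.13 image are illustrative, not from the original):
[root@master ~]# cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
EOF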
1.3.4 Service
A Service's job is to expose applications. Pods have a life cycle and independent IP addresses, and as Pods are created and destroyed, something must ensure that every consumer notices the change. That is the role of the Service: a logical grouping of Pods, selected by some policy and defined in YAML or JSON. More importantly, the Pods' individual IPs are exposed to the network through the Service.
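Continuing the hypothetical Pod above, a Service that selects it by the app: nginx label could be sketched as follows; listing its endpoints then shows the Pod IP + port pairs the Service maps to:
[root@master ~]# cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
[root@master ~]# kubectl get endpoints nginx-svc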
II. Preparatory work
2.1. System environment
Hostname    System        IP                Role
master      CentOS 7.4    192.168.56.129    Master (control) node
slave1      CentOS 7.4    192.168.56.130    Service (worker) node
The following steps must be performed on both nodes.
2.2. Configure /etc/hosts — after configuring, verify that the hostnames can be resolved.
192.168.56.129 master
192.168.56.130 slave1
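The original does not show the check itself; a simple way, assuming both entries are in place on each node:
[root@master ~]# ping -c 3 slave1
[root@slave1 ~]# ping -c 3 master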
2.3. Temporarily disable the firewall and SELinux
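The commands are not listed in the original; on CentOS 7.4 the usual ones are:
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
[root@master ~]# setenforce 0   # temporary; resets to the /etc/selinux/config setting on reboot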
2.4. Enable IPv4 forwarding
Add the following parameters to /etc/sysctl.conf:
net.ipv4.ip_forward = 1
net.ipv4.ip_forward_use_pmtu = 0
Apply the changes:
[root@master ~]# sysctl -p
Verify:
[root@master ~]# sysctl -a | grep "ip_forward"
2.5. Disable the swap partition
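The original does not list the commands; a common way on CentOS 7 (the sed line comments out the swap entry in /etc/fstab so the change survives a reboot):
[root@master ~]# swapoff -a
[root@master ~]# sed -i '/ swap / s/^/#/' /etc/fstab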
2.6. Install Docker 1.12.6
For which Docker versions work with Rancher and Kubernetes, see: http://rancher.com/docs/rancher/v1.6/zh/hosts/#docker
1) Download the packages:
[root@master ~]# mkdir -p ~/_src
[root@master ~]# cd ~/_src/
[root@master _src]# wget http://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-selinux-1.12.6-1.el7.centos.noarch.rpm
[root@master _src]# wget http://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-1.12.6-1.el7.centos.x86_64.rpm
[root@master _src]# wget http://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-debuginfo-1.12.6-1.el7.centos.x86_64.rpm
Install:
[root@master _src]# yum localinstall -y docker-engine-selinux-1.12.6-1.el7.centos.noarch.rpm docker-engine-1.12.6-1.el7.centos.x86_64.rpm docker-engine-debuginfo-1.12.6-1.el7.centos.x86_64.rpm
2) Enable and start Docker:
[root@master ~]# systemctl enable docker
[root@master ~]# systemctl start docker
3) Check the version:
[root@master ~]# docker version
2.7. Configure Docker image acceleration
If you download images with docker pull directly, they come from hub.docker.com, which can take a long time. We can therefore configure an image accelerator so that images are pulled from a domestic mirror repository instead. There are many accelerator options; this section uses Aliyun's as an example. The steps are as follows:
1) Create the directory:
[root@master ~] # mkdir / etc/docker
2) Set the address of the image repository:
[root@master ~]# tee /etc/docker/daemon.json
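The source text is cut off at this point. A typical completion of this step with the Aliyun accelerator looks like the sketch below; the mirror URL is a placeholder, since Aliyun issues each account its own accelerator address:
[root@master ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker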