kube-apiserver component introduction

kube-apiserver exposes the HTTP REST interfaces for adding, deleting, modifying, querying and watching every kind of resource object in Kubernetes (Pod, ReplicationController, Service and so on). It is the data bus and data hub of the whole system.

Functions of kube-apiserver:
- it provides the REST API for cluster management (including authentication and authorization, data validation and cluster state changes)
- it is the hub for data exchange and communication between the other modules (the other modules query or modify data through the API Server, and only the API Server operates on etcd directly)
- it is the entry point for resource quota control
- it comes with a complete cluster security mechanism

kube-apiserver working schematic diagram (figure in the original article)
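As a quick illustration (not part of the original article), the REST and watch interfaces described above can be exercised directly with curl. The sketch below assumes the API Server's insecure local port (8080, described in the next section) is enabled, as it is in this article's environment.

# List all Pods in the default namespace through the REST API
curl http://127.0.0.1:8080/api/v1/namespaces/default/pods

# The watch interface: the connection stays open and streams change events
curl "http://127.0.0.1:8080/api/v1/namespaces/default/pods?watch=true"

# Other resource kinds are served through the same REST pattern
curl http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers
curl http://127.0.0.1:8080/api/v1/namespaces/default/services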
Access to the Kubernetes API is provided through the kube-apiserver process, which runs on the k8s master node. By default it listens on two ports.

Local (insecure) port:
- it is used to receive HTTP requests
- the default port is 8080 and can be changed with the API Server startup parameter "--insecure-port"
- the default bind address is localhost and can be changed with "--insecure-bind-address"
- HTTP requests arriving on this port are neither authenticated nor authorized

Secure port:
- the default port is 6443 and can be changed with the startup parameter "--secure-port"
- the default bind address is a non-local (non-localhost) network interface, set with the startup parameter "--bind-address"
- this port receives HTTPS requests; it is used for authentication based on token files, client certificates and HTTP Basic authentication, and for policy-based authorization
- by default Kubernetes does not enable this HTTPS security access control

kube-controller-manager component introduction

kube-controller-manager is the management control center inside the cluster. It is responsible for managing Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount) and resource quotas (ResourceQuota) in the cluster. When a Node goes down unexpectedly, the Controller Manager discovers it in time and carries out the automatic repair process, ensuring that the cluster always stays in the expected working state.

kube-scheduler component introduction

kube-scheduler is a component in plug-in form; because it exists as a plug-in, it is extensible and customizable. kube-scheduler is the scheduling decision maker of the whole cluster: through a pre-selection and an optimization process it determines the best placement for a container.

The main responsibility of kube-scheduler is to find the most suitable node in the cluster for each newly created Pod. From all the nodes in the cluster it first selects, according to the scheduling algorithm, the nodes that are able to run the Pod, and then chooses the optimal node from that set as the final result.

The scheduler runs on the master node. Its core function is to watch the apiserver for Pods whose PodSpec.NodeName is empty, create a Binding for each such Pod to indicate which Node it should be scheduled to, and write the scheduling result back to the apiserver.

Main responsibilities of kube-scheduler:
- cluster high availability: if kube-scheduler is started with the leader-elect parameter, leader election is carried out through etcd (kube-scheduler and kube-controller-manager both use a high-availability scheme with one active instance and several standbys)
- resource monitoring: changes of resources on kube-apiserver are watched through the list-watch mechanism; the resources here mainly refer to Pods and Nodes
- scheduling nodes: through the pre-selection (Predicates) and preferred (Priorities) strategies, a Node is assigned to the Pod being scheduled and bound to it by populating nodeName; the allocation result is written to etcd through kube-apiserver

Experimental deployment

Experimental environment:
Master01: 192.168.80.12
Node01: 192.168.80.13
Node02: 192.168.80.14

This deployment continues from the previous article, so the experimental environment remains unchanged.
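Before moving on to the deployment, here is a minimal sketch of how the port and leader-election parameters described above typically appear on the command line. The flag values are illustrative assumptions and are not the configuration generated by the deployment scripts below.

# Illustrative kube-apiserver port flags: local (insecure) HTTP port and
# secure HTTPS port bound to the master address used in this article
kube-apiserver \
  --insecure-bind-address=127.0.0.1 \
  --insecure-port=8080 \
  --bind-address=192.168.80.12 \
  --secure-port=6443
  # ...etcd, certificate and authorization parameters omitted in this sketch

# With leader election enabled, only one kube-scheduler and one
# kube-controller-manager instance is active at a time
kube-scheduler --leader-elect=true --master=127.0.0.1:8080
kube-controller-manager --leader-elect=true --master=127.0.0.1:8080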
This part of the deployment installs the components required by the master node (kube-apiserver, kube-controller-manager and kube-scheduler); all of the following operations are performed on the master01 server.

Configure the apiserver self-signed certificates

[root@master01 k8s]# cd /mnt/                                # enter the host mount directory
[root@master01 mnt]# ls
etcd-cert     etcd-v3.3.10-linux-amd64.tar.gz     k8s-cert.sh    master.zip
etcd-cert.sh  flannel.sh                          kubeconfig.sh  node.zip
etcd.sh       flannel-v0.10.0-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[root@master01 mnt]# cp master.zip /root/k8s/                # copy the package to the k8s working directory
[root@master01 mnt]# cd /root/k8s/                           # enter the k8s working directory
[root@master01 k8s]# ls
cfssl.sh   etcd-v3.3.10-linux-amd64            kubernetes-server-linux-amd64.tar.gz
etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz     master.zip
etcd.sh    flannel-v0.10.0-linux-amd64.tar.gz
[root@master01 k8s]# unzip master.zip                        # extract the package
Archive:  master.zip
  inflating: apiserver.sh
  inflating: controller-manager.sh
  inflating: scheduler.sh
[root@master01 k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p  # create the working directory on master01 (the same directory was already created on the node nodes earlier)
[root@master01 k8s]# mkdir k8s-cert                          # create a self-signed certificate directory
[root@master01 k8s]# cp /mnt/k8s-cert.sh /root/k8s/k8s-cert  # copy the mounted self-signed certificate script into that directory
[root@master01 k8s]# cd k8s-cert                             # enter the certificate directory inside the k8s working directory
[root@master01 k8s-cert]# vim k8s-cert.sh                    # edit the copied script file
...
cat > server-csr.json
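The article is cut off in the middle of k8s-cert.sh. For orientation only, here is a rough sketch of the pattern such a script usually follows at this point: it writes a server-csr.json and signs it with cfssl against the CA generated earlier in the same script. The certificate subject fields and the service-cluster IP below are illustrative assumptions, not the author's exact file; only 192.168.80.12 is taken from the environment above.

# Illustrative sketch of a server-csr.json and its signing step (not the
# author's exact script). Assumes ca.pem, ca-key.pem and ca-config.json were
# created by the CA section of the same k8s-cert.sh.
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.80.12",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System" }]
}
EOF

# Sign the apiserver server certificate with the self-signed CA
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes server-csr.json | cfssljson -bare server

The resulting server.pem and server-key.pem are typically copied into /opt/kubernetes/ssl/ for kube-apiserver to use.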