2025-03-28 Update From: SLTechnology News&Howtos
This is an original article; reprints are welcome. When reprinting, please credit the IT Story Association. Thank you!
Original link: kubernetes basic cluster deployment of docker (part I) (33)
This time we continue deploying the simple version of the cluster. Source code: https://github.com/limingios/msA-docker (k8s branch) and https://github.com/limingios/kubernetes-starter
Basic cluster deployment: deploying ETCD (master node)
ETCD provides storage, high availability, and consistency for cluster data, similar to ZooKeeper. Kubernetes needs to persist a lot of state: its own node information, component information, and the pods, deployments, and services it runs. Etcd is its data store. To guarantee high availability and data consistency in a production environment, at least three nodes are generally deployed. Since our focus here is learning, we deploy only one instance, on the master node.
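For reference, in a production three-node deployment each etcd instance is started with flags that name all of its peers. A hedged sketch (the node names and IPs here are made-up illustrations, not part of this deployment):

```
etcd --name=etcd1 \
  --initial-advertise-peer-urls=http://192.168.1.102:2380 \
  --listen-peer-urls=http://192.168.1.102:2380 \
  --initial-cluster=etcd1=http://192.168.1.102:2380,etcd2=http://192.168.1.103:2380,etcd3=http://192.168.1.104:2380 \
  --initial-cluster-state=new
```

Each of the three nodes gets the same --initial-cluster list but its own --name and peer URLs.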
If your environment already has an etcd service (whether a single instance or a cluster), you can skip this step, provided you fill in your own etcd endpoints when generating the configuration.
Deployment
We have already prepared etcd's binaries and service configuration; the goal now is to register it as a system service and start it. (Run this on the primary node, server01.)
# copy the service configuration file to the system service directory
cp ~/kubernetes-starter/target/master-node/etcd.service /lib/systemd/system/
# enable the service
systemctl enable etcd.service
# create the working directory (where the data is stored)
mkdir -p /var/lib/etcd
# start the service
service etcd start
# check the service log for error messages to make sure the service is running normally
journalctl -f -u etcd.service
# check that ports 2379 and 2380 are listening
netstat -ntlp
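To confirm that etcd is actually serving, you can probe its client port. A minimal sketch, assuming the default insecure client port 2379 on localhost (pass your node's IP, e.g. 192.168.1.102, if etcd is bound there):

```shell
#!/bin/sh
# probe etcd's /health endpoint on the client port (2379 assumed)
etcd_health() {
  ep="http://${1:-127.0.0.1}:2379"
  # a healthy etcd answers with a JSON body containing a "health" key
  if curl -s --max-time 2 "$ep/health" | grep -q '"health"'; then
    echo "etcd healthy at $ep"
  else
    echo "etcd not healthy at $ep"
  fi
}
etcd_health
```

Either branch prints the endpoint it checked, so the output also tells you which address was probed.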
View the configuration of etcd
WorkingDirectory — the working directory; the data files live under this path
ExecStart — the command to execute
--name — the node name
--listen-client-urls — the addresses the node listens on for clients
--advertise-client-urls — the addresses advertised for others to access
--data-dir — the data directory
vi /lib/systemd/system/etcd.service
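Opening the file, you should see a unit roughly like the following sketch. The binary path and the 192.168.1.102 address are assumptions for illustration; yours will reflect the configuration you generated:

```ini
[Unit]
Description=Etcd Server
After=network.target

[Service]
# working directory; etcd keeps its data under this path
WorkingDirectory=/var/lib/etcd/
ExecStart=/home/michael/bin/etcd \
  --name=etcd-server \
  --listen-client-urls=http://192.168.1.102:2379 \
  --advertise-client-urls=http://192.168.1.102:2379 \
  --data-dir=/var/lib/etcd/
Restart=on-failure

[Install]
WantedBy=multi-user.target
```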
PS: the log output indicates that etcd has been started.
Introduction to deploying APIServer (Master Node)
Kube-apiserver is one of the most important core components of Kubernetes. It mainly provides the following functions:
It provides the REST API for cluster management, including authentication and authorization (which we don't use for now), and serves as the hub for data exchange and communication between the other modules, e.g. for cluster state changes (other modules query or modify data through the API Server; only the API Server operates etcd directly).
To ensure high availability of the apiserver, production environments usually deploy two or more nodes with a load balancer (such as haproxy) in front. Since there is no difference between a single node and multiple nodes at the apiserver layer, deploying one node is sufficient for learning.
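As an illustration of that load-balancing layer, a minimal haproxy sketch in TCP mode (all names, IPs, and ports here are assumptions for illustration, not part of this single-node deployment):

```
frontend kube-apiserver
    bind *:8080
    mode tcp
    default_backend apiservers

backend apiservers
    mode tcp
    balance roundrobin
    server master1 192.168.1.102:8080 check
    server master2 192.168.1.103:8080 check
```

TCP mode is the simpler choice here because haproxy then forwards connections without needing to understand the API traffic.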
Deployment
APIServer is also deployed as a system service. The process is exactly the same as for etcd, so the commands are not annotated again.
cp kubernetes-starter/target/master-node/kube-apiserver.service /lib/systemd/system/
systemctl enable kube-apiserver.service
service kube-apiserver start
journalctl -f -u kube-apiserver
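Once the service is up, you can probe the insecure port to verify it. A minimal sketch, assuming the apiserver listens on 8080 (the host and port are assumptions to adjust for your setup):

```shell
#!/bin/sh
# probe the apiserver's insecure /healthz endpoint; it returns "ok" when healthy
apiserver_health() {
  url="http://${1:-127.0.0.1}:${2:-8080}/healthz"
  if [ "$(curl -s --max-time 2 "$url")" = "ok" ]; then
    echo "apiserver healthy at $url"
  else
    echo "apiserver not reachable at $url"
  fi
}
apiserver_health
```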
Key configuration instructions
[Unit]
Description=Kubernetes API Server
...
[Service]
# location of the executable file
ExecStart=/home/michael/bin/kube-apiserver \
# listening address bound to the insecure port (8080); here we listen on all addresses
--insecure-bind-address=0.0.0.0 \
# do not use https to talk to kubelets
--kubelet-https=false \
# virtual ip address range of the kubernetes cluster services
--service-cluster-ip-range=10.68.0.0/16 \
# port range allowed for NodePort services
--service-node-port-range=20000-40000 \
# etcd endpoints; the apiserver is the only module that operates etcd directly
--etcd-servers=http://192.168.1.102:2379 \
...
Introduction to deploying ControllerManager (Master Node)
Controller Manager, which consists of kube-controller-manager and cloud-controller-manager, is the brain of Kubernetes. It monitors the state of the entire cluster through the apiserver and ensures that the cluster works as expected.
Kube-controller-manager consists of a series of controllers, such as the Replication Controller (replica control), Node Controller (node control), Deployment Controller (deployment management), and so on.
Cloud-controller-manager is only needed when Kubernetes enables a Cloud Provider, to cooperate with the cloud vendor's control plane.
The functions of controller-manager, scheduler, and apiserver are closely related, and they generally run on the same machine, so we can view them as a whole: ensuring high availability of the apiserver means ensuring high availability of all three modules. Multiple controller-manager processes can also be started at the same time, but only one will be elected leader and provide service.
Deployment
Deploy through system services
cp ~/kubernetes-starter/target/master-node/kube-controller-manager.service /lib/systemd/system/
systemctl enable kube-controller-manager.service
service kube-controller-manager start
journalctl -f -u kube-controller-manager

Key configuration instructions
[Unit]
Description=Kubernetes Controller Manager
...
[Service]
ExecStart=/home/michael/bin/kube-controller-manager \
# the listening address for external service; 127.0.0.1 means only local programs can access it
--address=127.0.0.1 \
# url of the apiserver
--master=http://127.0.0.1:8080 \
# service virtual ip range, the same as the apiserver configuration
--service-cluster-ip-range=10.68.0.0/16 \
# ip address range for pods
--cluster-cidr=172.20.0.0/16 \
# the following two flags mean certificates are not used; the defaults are overridden with empty values
--cluster-signing-cert-file= \
--cluster-signing-key-file= \
...
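The controller-manager of this era also exposes an insecure /healthz on a local port (10252 by default in those releases; treat the port as an assumption and check your binary's flags). A minimal probe sketch:

```shell
#!/bin/sh
# probe kube-controller-manager's local healthz port (10252 assumed)
cm_health() {
  url="http://127.0.0.1:${1:-10252}/healthz"
  if [ "$(curl -s --max-time 2 "$url")" = "ok" ]; then
    echo "controller-manager healthy at $url"
  else
    echo "controller-manager not reachable at $url"
  fi
}
cm_health
```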
PS: next time we will continue with the basic construction of K8s. The pitfalls here are deep: pay close attention to your punctuation. I once typed the colon before the port as a dot, writing 192.168.1.101.2379 instead of 192.168.1.101:2379, and it took four hours of testing to find it. If you made the same mistake, change 192.168.1.101.2379 to 192.168.1.101:2379.
© 2024 shulou.com SLNews company. All rights reserved.