This article explains the detailed process of setting up Kubernetes. The method described here is simple, fast and practical, so let's walk through it step by step.
Environment description:
Two machines, 167 and 168, both running CentOS 6.5.
167 runs etcd, flannel, kube-apiserver, kube-controller-manager and kube-scheduler, and also acts as a minion itself, so it additionally runs kube-proxy and kubelet.
168 only needs to run etcd, flannel, kube-proxy and kubelet; etcd and flannel are required to connect the networks of the two machines.
Kubernetes is built on top of Docker, so Docker is a must.
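Before going further, it is worth confirming on both machines that the Docker daemon is installed and running; a minimal check (the service name assumes the stock CentOS init script):
service docker start        # start the daemon if it is not already running
docker version              # should print both client and server versions
docker info                 # shows the storage driver, number of containers, etc.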
Set up the environment and connect the network
Kubernetes also needs the support of etcd and flannel. Download these two packages first; note that both machines need to download and run them.
wget https://github.com/coreos/etcd/releases/download/v2.2.4/etcd-v2.2.4-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.5.5/flannel-0.5.5-linux-amd64.tar.gz
Extract each archive, then copy the binaries into /usr/bin so they are on the PATH:
cd etcd-v2.2.4-linux-amd64/
cp etcd etcdctl /usr/bin/
cd flannel-0.5.5/
cp flanneld mk-docker-opts.sh /usr/bin/
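The cd commands above assume the two tarballs have already been unpacked; a minimal sketch of that step, assuming each archive extracts into the directory name used above:
tar zxvf etcd-v2.2.4-linux-amd64.tar.gz
tar zxvf flannel-0.5.5-linux-amd64.tar.gz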
Run etcd
# on 167
etcd -name infra0 -initial-advertise-peer-urls http://172.16.48.167:2380 -listen-peer-urls http://172.16.48.167:2380 -listen-client-urls http://172.16.48.167:2379,http://127.0.0.1:2379 -advertise-client-urls http://172.16.48.167:2379 -discovery https://discovery.etcd.io/322a6b06081be6d4e89fd6db941c4add -data-dir /usr/local/kubernete_test/flanneldata >> /usr/local/kubernete_test/logs/etcd.log 2>&1 &
# on 168
etcd -name infra1 -initial-advertise-peer-urls http://203.130.48.168:2380 -listen-peer-urls http://203.130.48.168:2380 -listen-client-urls http://203.130.48.168:2379,http://127.0.0.1:2379 -advertise-client-urls http://203.130.48.168:2379 -discovery https://discovery.etcd.io/322a6b06081be6d4e89fd6db941c4add -data-dir /usr/local/kubernete_test/flanneldata >> /usr/local/kubernete_test/logs/etcd.log 2>&1 &
Note the -discovery parameter. It is a URL that we can obtain by visiting https://discovery.etcd.io/new?size=2, where size is the expected number of members (here, our two machines); both machines use the same URL. If you open that URL in a browser you will see that it returns a JSON document. The discovery service can also be self-hosted.
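For reference, a fresh discovery URL can also be requested from the command line; the returned URL is what both machines pass to -discovery:
curl -s 'https://discovery.etcd.io/new?size=2'
# prints something like https://discovery.etcd.io/<token>; use the same URL on 167 and 168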
Once etcd has started successfully, we can run the following on either machine:
etcdctl ls
etcdctl cluster-health
to confirm that it has started successfully. If there are errors, check the log file:
tail -n 1000 -f /usr/local/kubernete_test/logs/etcd.log
Then, on either machine, execute:
etcdctl set /coreos.com/network/config '{"Network": "172.17.0.0/16"}'
Execute:
[root@w ~]# etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.4.0-24
/coreos.com/network/subnets/172.17.13.0-24
[root@w ~]# etcdctl get /coreos.com/network/subnets/172.17.4.0-24
{"PublicIP": "203.130.48.168"}
[root@w ~]# etcdctl get /coreos.com/network/subnets/172.17.13.0-24
{"PublicIP": "203.130.48.167"}
You can see that the subnet on 167 is 172.17.13.0/24 and the subnet on 168 is 172.17.4.0/24.
The IPs of the Docker containers we create later will fall within these two ranges.
Then execute on each of the two machines
flanneld >> /usr/local/kubernete_test/logs/flanneld.log 2>&1 &
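Once flanneld is running it should have written the subnet it leased from etcd into an environment file; a quick sanity check (this is the same file that is sourced in the next step):
cat /run/flannel/subnet.env
# expected to contain FLANNEL_SUBNET (this host's /24) and FLANNEL_MTU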
Execute on each machine:
mk-docker-opts.sh -i
source /run/flannel/subnet.env
rm /var/run/docker.pid
ifconfig docker0 ${FLANNEL_SUBNET}
Then restart docker
service docker restart
This connects the container networks on the two machines; we will see the effect later.
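If you want to verify the cross-host container network right away, a rough sketch (it assumes the busybox image can be pulled; substitute the IP actually printed on 167):
# on 167
docker run -d --name flannel-test busybox sleep 3600
docker inspect -f '{{ .NetworkSettings.IPAddress }}' flannel-test    # e.g. an address in 172.17.13.0/24
# on 168
docker run --rm busybox ping -c 3 <IP printed on 167>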
Install and start k8s
wget https://github.com/kubernetes/kubernetes/releases/download/v1.2.0-alpha.6/kubernetes.tar.gz
Then extract everything:
tar zxvf kubernetes.tar.gz
cd kubernetes/server
tar zxvf kubernetes-server-linux-amd64.tar.gz    # this is the package that contains the commands we need
cd kubernetes/server/bin/
Copy the commands onto the PATH; here I only copy kubectl:
cp kubectl /usr/bin/
Execute on 167
./kube-apiserver --address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range='172.16.48.167/24' --log_dir=/usr/local/kubernete_test/logs/kube --kubelet_port=10250 --logtostderr=false --etcd_servers=http://172.16.48.167:2379 --allow_privileged=false >> /usr/local/kubernete_test/logs/kube-apiserver.log 2>&1 &
./kube-controller-manager --v=0 --logtostderr=false --log_dir=/usr/local/kubernete_test/logs/kube --master=172.16.48.167:8080 >> /usr/local/kubernete_test/logs/kube-controller-manager.log 2>&1 &
./kube-scheduler --master='172.16.48.167:8080' --v=0 --log_dir=/usr/local/kubernete_test/logs/kube >> /usr/local/kubernete_test/logs/kube-scheduler.log 2>&1 &
Now the master components are running.
[root@w ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
We can see that they are all healthy.
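Another quick check is to query the API server directly on the insecure port configured above; /version is a standard endpoint:
curl http://172.16.48.167:8080/version
# returns a small JSON document with the cluster's gitVersion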
Then run the programs the minions need on both machines (note that 167 is also a minion):
# 167
./kube-proxy --logtostderr=false --master=http://172.16.48.167:8080 >> /usr/local/kubernete_test/logs/kube-proxy.log 2>&1 &
./kubelet --logtostderr=false --v=0 --allow-privileged=false --log_dir=/usr/local/kubernete_test/logs/kube --address=0.0.0.0 --port=10250 --hostname_override=172.16.48.167 --api_servers=http://172.16.48.167:8080 >> /usr/local/kubernete_test/logs/kube-kubelet.log 2>&1 &
# 168
./kube-proxy --logtostderr=false --master=http://172.16.48.167:8080 >> /usr/local/kubernete_test/logs/kube-proxy.log 2>&1 &
./kubelet --logtostderr=false --v=0 --allow-privileged=false --log_dir=/usr/local/kubernete_test/logs/kube --address=0.0.0.0 --port=10250 --hostname_override=172.16.48.168 --api_servers=http://172.16.48.167:8080 >> /usr/local/kubernete_test/logs/kube-kubelet.log 2>&1 &
Confirm that startup succeeded:
[root@w ~]# kubectl get nodes
NAME            LABELS                                 STATUS    AGE
172.16.48.167   kubernetes.io/hostname=172.16.48.167   Ready     1d
172.16.48.168   kubernetes.io/hostname=172.16.48.168   Ready     18h
Both minions are Ready.
Submitting commands
Kubernetes supports two ways of doing this: directly through command-line parameters, or through a configuration file (both JSON and YAML are supported). The following uses only command-line parameters; a configuration-file sketch follows further below.
Create an rc and pods
kubectl run nginx --image=nginx --port=80 --replicas=5
This creates one rc and five pods.
You can view them with the following command:
kubectl get rc,pods
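For comparison, here is a sketch of the configuration-file approach mentioned earlier, assuming the v1 ReplicationController API shipped with this release (the file name and the app label are just illustrative):
cat > nginx-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl create -f nginx-rc.yaml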
If we delete one of the pods manually, Kubernetes automatically starts a new one, always keeping the number of pods at 5.
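You can watch this self-healing behaviour by deleting one pod and listing again (the pod name below is hypothetical; use one printed by kubectl get pods):
kubectl get pods                  # note one pod name, e.g. nginx-xxxxx
kubectl delete pod nginx-xxxxx    # substitute a real pod name
kubectl get pods                  # a replacement pod appears and the count returns to 5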
Communication between machines
If we run docker ps on 167 and 168, we will find an nginx container running on each machine. Pick a container on each machine and enter it; checking with ip a shows that the container on 167 has an address in 172.17.13.0/24 and the one on 168 has an address in 172.17.4.0/24, and they can reach each other, which shows that the network is connected. If the host can reach the external network, the container can also access the public network.
If we start a container directly through Docker rather than through Kubernetes, its IP also falls within the two subnets above, and its network is interconnected with the containers started by Kubernetes.
Of course, the randomly assigned, private-network IPs can cause some trouble.
For example, we often start a container through Docker and then assign it a fixed IP address with pipework, which can be either a private or a public IP. So can containers started by Kubernetes communicate with those?
The answer is: only half-way. Containers launched through Kubernetes can reach the private and public IPs that pipework assigned, but not the other way around; the pipework-configured containers cannot reach the Kubernetes-launched ones. Even so, this does not affect our usual needs, because the containers we start through Kubernetes are generally web applications, while the ones given fixed IPs via pipework are databases and the like, so access from the web applications to the databases is enough.
Expose the service
kubectl expose rc nginx --port=80 --container-port=9090 --external-ip=x.x.x.168
The --port parameter is the container's port; since nginx uses 80, it must be 80 here.
--container-port and --target-port mean the same thing; they refer to the port forwarded on the host. You can specify one arbitrarily, or omit it.
--external-ip is the exposed IP address, usually a public IP. After running the command above, the service can be reached from the public network. One caveat: this IP must belong to a machine on which Kubernetes is installed; an arbitrary IP will not work, which can be inconvenient for applications.
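A simple way to check the exposed service from outside is a plain HTTP request against the external IP used above (kept here as the placeholder x.x.x.168 from the command):
curl -I http://x.x.x.168/
# an HTTP/1.1 200 OK response with a Server: nginx header means the service is reachable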
View service
kubectl get svc
You can see CLUSTER_IP and EXTERNAL_IP
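For more detail than the get output, including the endpoints (pod IPs) behind the service, the standard describe command can also be used; the service created by kubectl expose above is named after the rc, i.e. nginx:
kubectl describe svc nginx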
At this point, I believe you have a deeper understanding of the detailed process of setting up Kubernetes. You might as well try it out in practice.