2025-01-31 Update. From: SLTechnology News&Howtos > Servers (Shulou.com), 06/02 Report
I. Environment preparation
Every machine runs CentOS 7.6. Execute on each machine:
yum install -y chrony
systemctl start chronyd
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.130 master
192.168.8.131 node01
192.168.8.132 node02
192.168.8.133 node03
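The four host entries above follow a simple pattern; as a sketch (using the same IPs and names as this guide), they can be generated with a loop instead of typed by hand:

```shell
# Generate the cluster's /etc/hosts entries (192.168.8.130-133) in one loop.
# Redirect with ">> /etc/hosts" to append for real; here we only print them.
hosts_entries=$(
  i=130
  for h in master node01 node02 node03; do
    printf '192.168.8.%s %s\n' "$i" "$h"
    i=$((i + 1))
  done
)
printf '%s\n' "$hosts_entries"
```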
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
setenforce 0 takes effect immediately, but only until the next reboot. To disable SELinux permanently:
vim /etc/selinux/config
SELINUX=disabled
This is permanent but requires a reboot to take effect.
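The SELINUX edit can also be done non-interactively. A minimal sketch using sed, demonstrated against a temporary copy of the file (point it at /etc/selinux/config to apply it for real):

```shell
# Flip SELINUX to disabled with sed instead of vim.
# Demonstrated on a temp copy; replace "$cfg" with /etc/selinux/config to apply.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"
```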
Configure the Docker mirror source
Visit mirrors.aliyun.com, find docker-ce, click linux, then centos, right-click docker-ce.repo and copy the link address.
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Run the same command on the other three machines as well.
Next, execute on the master node:
Modify yum source
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# ls
CentOS-Base.repo       CentOS-fasttrack.repo  CentOS-Vault.repo  epel-testing.repo
CentOS-CR.repo         CentOS-Media.repo      docker-ce.repo     kubernetes.repo
CentOS-Debuginfo.repo  CentOS-Sources.repo    epel.repo
[root@master yum.repos.d]# vim CentOS-Base.repo
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
baseurl=https://mirrors.aliyun.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
baseurl=https://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
baseurl=https://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Change the baseurl of base, updates, and extras to the Aliyun mirror. Save and exit, then send the file to the other three worker nodes:
scp /etc/yum.repos.d/CentOS-Base.repo node01:/etc/yum.repos.d/
scp /etc/yum.repos.d/CentOS-Base.repo node02:/etc/yum.repos.d/
scp /etc/yum.repos.d/CentOS-Base.repo node03:/etc/yum.repos.d/
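The three scp commands can be collapsed into a loop over the node names. Shown here as a dry run that only prints each command (drop the echo to execute):

```shell
# Print the scp command for each worker node; remove "echo" to run for real.
scp_cmds=$(
  for n in node01 node02 node03; do
    echo "scp /etc/yum.repos.d/CentOS-Base.repo $n:/etc/yum.repos.d/"
  done
)
printf '%s\n' "$scp_cmds"
```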
yum install -y docker-ce
systemctl enable docker
systemctl start docker
Modify the docker startup parameters
[root@master ~]# vim /usr/lib/systemd/system/docker.service
Add this line under [Service]:
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
Reload docker
systemctl daemon-reload
systemctl restart docker
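An alternative worth knowing: instead of editing the packaged unit file (which an RPM upgrade can overwrite), systemd supports drop-in overrides. A sketch writing the same ExecStartPost line as an override, demonstrated in a temp dir; the real path would be /etc/systemd/system/docker.service.d/override.conf, followed by daemon-reload:

```shell
# Write the iptables ExecStartPost as a systemd drop-in instead of editing
# docker.service directly. Uses a temp dir for illustration only.
dropin_dir=$(mktemp -d)
cat > "$dropin_dir/override.conf" <<'EOF'
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
cat "$dropin_dir/override.conf"
```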
View all rules of the filter table
[root@master ~]# iptables -vnL
Chain INPUT (policy ACCEPT 1307 packets, 335K bytes)
 pkts bytes target                  prot opt in      out      source      destination
 2794  168K KUBE-SERVICES           all  --  *       *        0.0.0.0/0   0.0.0.0/0   ctstate NEW /* kubernetes service portals */
 2794  168K KUBE-EXTERNAL-SERVICES  all  --  *       *        0.0.0.0/0   0.0.0.0/0   ctstate NEW /* kubernetes externally-visible service portals */
 773K  188M KUBE-FIREWALL           all  --  *       *        0.0.0.0/0   0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target                  prot opt in      out      source      destination
    0     0 KUBE-FORWARD            all  --  *       *        0.0.0.0/0   0.0.0.0/0   /* kubernetes forwarding rules */
    0     0 KUBE-SERVICES           all  --  *       *        0.0.0.0/0   0.0.0.0/0   ctstate NEW /* kubernetes service portals */
    0     0 DOCKER-USER             all  --  *       *        0.0.0.0/0   0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1 all --  *       *        0.0.0.0/0   0.0.0.0/0
    0     0 ACCEPT                  all  --  *       docker0  0.0.0.0/0   0.0.0.0/0   ctstate RELATED,ESTABLISHED
    0     0 DOCKER                  all  --  *       docker0  0.0.0.0/0   0.0.0.0/0
    0     0 ACCEPT                  all  --  docker0 !docker0 0.0.0.0/0   0.0.0.0/0
    0     0 ACCEPT                  all  --  docker0 docker0  0.0.0.0/0   0.0.0.0/0
Send it to the three worker nodes:
scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service node03:/usr/lib/systemd/system/docker.service
View the bridge-related kernel parameters
[root@master ~]# sysctl -a | grep bridge
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.docker0.stable_secret"
sysctl: reading key "net.ipv6.conf.ens33.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
The bridge-nf-call-iptables and bridge-nf-call-ip6tables values differ between environments; add configuration to ensure both are 1.
[root@master ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Reload it
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf
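The same file can be created non-interactively with a heredoc and then applied with sysctl -p. A sketch writing to a temp path (the real target is /etc/sysctl.d/k8s.conf on each node):

```shell
# Create the k8s sysctl fragment without an editor; written to a temp file here.
sysctl_file=$(mktemp)
cat > "$sysctl_file" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat "$sysctl_file"
# Then: sysctl -p "$sysctl_file"  (run against /etc/sysctl.d/k8s.conf for real)
```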
scp /etc/sysctl.d/k8s.conf node01:/etc/sysctl.d/
scp /etc/sysctl.d/k8s.conf node02:/etc/sysctl.d/
scp /etc/sysctl.d/k8s.conf node03:/etc/sysctl.d/
Create a kubernetes.repo file locally
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# vim kubernetes.repo
[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
On the Aliyun mirror site, find kubernetes, click yum, click repos, and find kubernetes-el7-x86_64/. The link address of that directory is the baseurl. The two addresses in gpgkey are the two links under doc in the directory one level up.
Run yum repolist to check.
View packages beginning with kube
[root@master yum.repos.d]# yum list all | grep "^kube"
kubeadm.x86_64              1.14.2-0                   @kubernetes
kubectl.x86_64              1.14.2-0                   @kubernetes
kubelet.x86_64              1.14.2-0                   @kubernetes
kubernetes-cni.x86_64       0.7.5-0                    @kubernetes
kubernetes.x86_64           1.5.2-0.7.git269f928.el7   extras
kubernetes-ansible.noarch   0.6.0-0.1.gitd65ebd5.el7   epel
kubernetes-client.x86_64    1.5.2-0.7.git269f928.el7   extras
kubernetes-master.x86_64    1.5.2-0.7.git269f928.el7   extras
kubernetes-node.x86_64      1.5.2-0.7.git269f928.el7   extras
Install the tools
yum install -y kubeadm kubectl kubelet
Modify the kubelet parameters (used by kubeadm)
[root@master yum.repos.d]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
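--fail-swap-on=false is only needed because swap is left enabled on these machines. A quick way to see how many swap devices are active is to count the data lines of /proc/swaps; sketched here against a sample string (substitute the real file for an actual check), since the header line must be skipped:

```shell
# Count active swap devices from /proc/swaps-style output (sample shown;
# use "$(cat /proc/swaps)" for the real check).
sample='Filename Type Size Used Priority
/dev/dm-1 partition 2097148 0 -2'
active=$(( $(printf '%s\n' "$sample" | wc -l) - 1 ))
echo "active swap devices: $active"
```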
Check the default parameters for cluster initialization
[root@master yum.repos.d]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
scheduler: {}
The next step is to initialize the cluster. During initialization, containers are created, and their images are pulled from k8s.gcr.io by default. If you cannot reach it, list the required images and pull them from Aliyun instead. For detailed steps, see https://blog.51cto.com/13670314/2397600.
[root@master ~]# kubeadm config images list
I0521 13:32 26344 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0521 13:32:40.122220 26344 version.go:97] falling back to the local client version: v1.14.2
k8s.gcr.io/kube-apiserver:v1.14.2
k8s.gcr.io/kube-controller-manager:v1.14.2
k8s.gcr.io/kube-scheduler:v1.14.2
k8s.gcr.io/kube-proxy:v1.14.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
The error above can be ignored; it is caused by the lack of access to the external network.
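When k8s.gcr.io is unreachable, a common workaround is to pull each image from an Aliyun mirror and retag it to the name kubeadm expects. This is a sketch under the assumption that the same tags exist under registry.aliyuncs.com/google_containers; the docker commands are echoed as a dry run (remove echo to execute):

```shell
# For each required image: pull from the Aliyun mirror, retag to k8s.gcr.io.
mirror=registry.aliyuncs.com/google_containers
img_cmds=$(
  for img in kube-apiserver:v1.14.2 kube-controller-manager:v1.14.2 \
             kube-scheduler:v1.14.2 kube-proxy:v1.14.2 pause:3.1 \
             etcd:3.3.10 coredns:1.3.1; do
    echo "docker pull $mirror/$img"
    echo "docker tag $mirror/$img k8s.gcr.io/$img"
  done
)
printf '%s\n' "$img_cmds"
```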
kubeadm init --pod-network-cidr="10.244.0.0/16" --ignore-preflight-errors=Swap
Upon success, a join command is displayed at the end of the output.
Record that join command; it will be used to join nodes to the cluster later.
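If the join command scrolls out of the terminal, kubeadm can regenerate it on the master with kubeadm token create --print-join-command (that needs the running control plane, so it is not executed here). The token can also be parsed back out of a saved copy of the line; a sketch using a hypothetical saved join line (address reconstructed from this guide's master IP):

```shell
# Extract the --token value from a saved kubeadm join line with sed.
join_line='kubeadm join 192.168.8.130:6443 --token kxmqr4.1vza1kh70vra2d2u --discovery-token-ca-cert-hash sha256:6537d556e18c1799f10ac567dcaa41ee2b3197aa4c464747bc50243a6142bc1c --ignore-preflight-errors=Swap'
token=$(printf '%s\n' "$join_line" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
echo "token: $token"
```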
View the nodes
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   145m   v1.14.2
The status is NotReady; we need to deploy a network plug-in.
Deploy flannel. The image in this configuration file is pulled from quay.io, which is reachable from within China, so don't worry.
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
View the pods in the kube-system namespace
[root@master ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-q55g7          1/1     Running   0          150m
coredns-fb8b8dccf-vk7td          1/1     Running   0          150m
etcd-master                      1/1     Running   0          149m
kube-apiserver-master            1/1     Running   0          149m
kube-controller-manager-master   1/1     Running   0          149m
kube-flannel-ds-amd64-gfl77      1/1     Running   0          71s
kube-proxy-4s9f6                 1/1     Running   0          150m
kube-scheduler-master            1/1     Running   0          149m
The two coredns pods may still be in a creating state; just wait a minute.
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   152m   v1.14.2
Send the repo file to the other three worker nodes
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node01:/etc/yum.repos.d/
root@node01's password:
kubernetes.repo                  100%  269   169.4KB/s   00:00
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node02:/etc/yum.repos.d/
root@node02's password:
kubernetes.repo                  100%  269   277.9KB/s   00:00
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node03:/etc/yum.repos.d/
root@node03's password:
kubernetes.repo
Next, add the three workers to the cluster. Execute on node01, node02, and node03:
[root@node01 ~]# yum install -y kubeadm kubelet
Then copy the kubelet config file over from the master:
[root@master ~]# scp /etc/sysconfig/kubelet node01:/etc/sysconfig/
root@node01's password:
kubelet                          100%   42    32.7KB/s   00:00
[root@master ~]# scp /etc/sysconfig/kubelet node02:/etc/sysconfig/
root@node02's password:
kubelet                          100%   42    32.9KB/s   00:00
[root@master ~]# scp /etc/sysconfig/kubelet node03:/etc/sysconfig/
root@node03's password:
kubelet                          100%   42    29.4KB/s   00:00
[root@master ~]#
First pull the pause image from the Aliyun registry on each worker, then join the cluster:
[root@node01 ~]# kubeadm join 192.168.8.130:6443 --token kxmqr4.1vza1kh70vra2d2u --discovery-token-ca-cert-hash sha256:6537d556e18c1799f10ac567dcaa41ee2b3197aa4c464747bc50243a6142bc1c --ignore-preflight-errors=Swap
View the nodes
[root@master /]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   172m    v1.14.2
node01   Ready    <none>   7m39s   v1.14.2
node02   Ready    <none>   48s     v1.14.2
node03   Ready    <none>   43s     v1.14.2