2025-03-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article explains in detail how to deploy a multi-node Kubernetes cluster and the KubeSphere container platform on Linux. I hope you will gain some understanding of the relevant topics after reading it.
KubeSphere is an application-centric enterprise container platform built on Kubernetes, providing an easy-to-use interface and guided workflows throughout. KubeSphere Installer can rapidly deploy production-grade Kubernetes clusters and the full-stack container platform, helping enterprise users quickly build an application-centric DevOps platform.
KubeSphere supports deployment and operation on any infrastructure, including public cloud, private cloud, VMs, bare metal, and existing Kubernetes. It can be deployed on hosted Kubernetes services (such as GKE, EKS, or ACK) or on self-managed Kubernetes (such as clusters deployed with kubeadm, K3s, or RKE). KubeSphere supports both online and offline installation.
This guide uses KubeSphere Installer to deploy a multi-node Kubernetes cluster on 3 Linux machines, with KubeSphere's minimal installation enabled.
Multi-node means a multi-node deployment. Before deploying, choose any node in the cluster as the task executor (taskbox), which performs the deployment tasks for the other nodes; the taskbox must be able to reach the other nodes over SSH.
KubeSphere 2.1 enables only a minimal installation by default, but Installer supports custom installation of various pluggable functional components. You can choose the components you need according to your business requirements and machine configuration. Make sure machine resources meet the minimum requirements before enabling pluggable components; refer to the instructions for installing optional components.
Installation time depends on network conditions and bandwidth, machine configuration, the number of nodes, and other factors. You can speed up installation by increasing bandwidth or configuring an image accelerator (registry mirror) beforehand.
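For example, on a node where Docker is the container runtime, a registry mirror can be configured before installation. This is only a sketch: the mirror URL below is a placeholder that must be replaced with the accelerator endpoint you actually use.

```shell
# Sketch only: configure a Docker registry mirror before installation.
# The mirror URL is a placeholder -- substitute your own accelerator endpoint.
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://your-mirror.example.com"]
}
EOF
sudo systemctl restart docker
```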
Description:
This installation example is intended only as a quick test deployment, so the default OpenEBS is used to provide persistent storage based on Local Volume. OpenEBS supports dynamic provisioning of PVs, which makes deployment testing convenient when no storage server is ready at initial installation. In a production environment it is recommended to configure one of the storage types supported by KubeSphere; refer to the persistent storage configuration instructions.
Multi-node mode supports high-availability configurations for the Master and etcd nodes. To keep this multi-node test installation quick, this example deploys only a single Master and a single etcd. In a production environment it is recommended to configure the Master and etcd nodes for high availability; see the cluster high-availability deployment configuration in the documentation.
Prerequisites
Check whether the firewall on each installation machine is turned off. If the firewall is not turned off, you need to open the required ports; refer to the list of ports to be opened.
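On CentOS 7 with firewalld, either of the following approaches can satisfy this prerequisite. This is a sketch: the two ports shown are illustrative, so consult the full list of required ports for a complete setup.

```shell
# Option 1 (test environments only): stop the firewall entirely.
sudo systemctl stop firewalld && sudo systemctl disable firewalld

# Option 2: keep firewalld running and open the required ports, e.g.:
sudo firewall-cmd --permanent --add-port=6443/tcp    # Kubernetes API server
sudo firewall-cmd --permanent --add-port=30880/tcp   # KubeSphere console
sudo firewall-cmd --reload
```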
Step 1: prepare the host
Refer to the node specifications below and prepare at least 3 hosts that meet the requirements before starting a multi-node deployment. To prevent software version conflicts, it is recommended to use clean machines.
Description:
All nodes must have their time synchronized, otherwise the installation may fail.
If you use Ubuntu 16.04, it is recommended to use the latest point release, 16.04.5.
If you use Ubuntu 18.04, you must install as the root user.
If sudo is not installed on a Debian system, run apt update && apt install sudo as the root user before installing.
If you choose to install the DevOps functional component, ensure that at least one node has more than 8 GB of memory, because the default JVM settings of Jenkins require 6 GB of memory; if available memory is insufficient, the node may crash.
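For the time-synchronization requirement above, one common approach on CentOS 7 is chrony. This is a sketch of one option, not the only way to keep clocks in sync.

```shell
# Run on every node: install and enable chrony, then verify synchronization.
sudo yum install -y chrony
sudo systemctl enable --now chronyd
chronyc sources    # the node should list reachable time sources
```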
Operating system                               | Minimum configuration (per node)
CentOS 7.5 (64-bit)                            | CPU: 2 cores, memory: 4 GB, system disk: 40 GB
Ubuntu 16.04 LTS (64-bit)                      | CPU: 2 cores, memory: 4 GB, system disk: 40 GB
Red Hat Enterprise Linux Server 7.4 (64-bit)   | CPU: 2 cores, memory: 4 GB, system disk: 40 GB
Debian Stretch 9.5 (64-bit)                    | CPU: 2 cores, memory: 4 GB, system disk: 40 GB
The following example deploys a multi-node environment on three CentOS 7.5 hosts, installing as the root user. Log in to the node whose hostname is master, which serves as the task executor (taskbox), and perform the installation steps there.
As described in the installation instructions, a KubeSphere cluster consists of a management node (Master) and work nodes (Node). Of the three hosts, one is deployed as the Master node and two as Node nodes.
Suppose the host information is as follows:
Host IP        Hostname    Cluster role
192.168.0.1    master      master, etcd
192.168.0.2    node1       node
192.168.0.3    node2       node
Cluster architecture: single Master, single etcd, two Nodes.
Step 2: prepare to install the configuration file
1. Download the KubeSphere 2.1.0 installation package to the machine to be installed and enter the conf directory.
$ curl -L https://kubesphere.io/download/stable/v2.1.0 > installer.tar.gz \
&& tar -zxf installer.tar.gz && cd kubesphere-all-v2.1.0/conf
2. Edit the host configuration file conf/hosts.ini. To centrally manage the target machines and the deployment process, configure each node of the cluster in hosts.ini by referring to the example below. It is recommended to install as the root user.
Description:
If you install as a non-root user (such as the ubuntu user), edit the [all] section by referring to the non-root user example in the comments of conf/hosts.ini.
If the root user on the taskbox cannot connect to the other machines over SSH, you also need to refer to the non-root user example in the comments of conf/hosts.ini, but it is still recommended to switch to the root user when executing the install.sh script.
master, node1, and node2 are the hostnames of the cluster nodes. If you customize hostnames, all hostnames must be lowercase.
The following example uses a root-user installation on CentOS 7.5; each machine's information goes on a single line and cannot be split across lines.
Root configuration hosts.ini example:
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD

[kube-master]
master

[kube-node]
node1
node2

[etcd]
master

[k8s-cluster:children]
kube-node
kube-master
Description:
[all]: the private network IPs and root passwords of the cluster nodes, which need to be modified:
The node with the hostname "master" acts as the taskbox and uses a local connection (ansible_connection=local), so no password needs to be filled in.
Replace the ansible_host and ip parameters of node1 and node2 with their actual private network IPs, and replace ansible_ssh_pass with the root passwords of node1 and node2 accordingly.
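Before running the installer, it is worth confirming from the taskbox that root SSH access to the other nodes actually works. A minimal check, using the example IPs from this guide:

```shell
# Verify SSH reachability from the taskbox (substitute your own node IPs).
for node in 192.168.0.2 192.168.0.3; do
  if ssh -o ConnectTimeout=5 root@"$node" hostname; then
    echo "$node reachable"
  else
    echo "$node NOT reachable"
  fi
done
```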
Parameter explanation:
- ansible_connection: connection type to the host; set to local here, i.e. a local connection
- ansible_host: address or domain name of the host to connect to in the cluster
- ip: IP of the host to connect to in the cluster
- ansible_user: default SSH user name (non-root), for example ubuntu
- ansible_become_pass: login password of the default SSH user
- ansible_ssh_pass: root password of the host to connect to
[kube-master] and [etcd]: enter the hostname "master" in both the [kube-master] and [etcd] sections. The "master" node serves as the taskbox that executes the installation for the whole cluster; in the KubeSphere cluster architecture it also acts as the Master node that manages the cluster and as the etcd node that stores the cluster data.
[kube-node]: enter the hostnames "node1" and "node2" in the [kube-node] section; they serve as the Node nodes of the KubeSphere cluster.
[local-registry]: in the offline installation package, this parameter indicates which node hosts the local image registry; the default is the master node. It is recommended to mount a separate volume at /mnt/registry on this node (refer to the fdisk command) so that images are kept on persistent storage and machine space is saved.
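A minimal sketch of mounting a dedicated disk at /mnt/registry, assuming a spare block device /dev/vdb exists on the registry node (the device name is an assumption; adjust it to your hardware):

```shell
# Sketch: format a spare disk and mount it at /mnt/registry persistently.
# WARNING: mkfs destroys existing data on the device.
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/registry
sudo mount /dev/vdb /mnt/registry
echo '/dev/vdb /mnt/registry ext4 defaults 0 0' | sudo tee -a /etc/fstab
```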
Step 3: install KubeSphere
KubeSphere multi-node deployment automates environment and file checks, installation of platform dependencies, deployment of the Kubernetes and etcd clusters, and storage configuration. The default Kubernetes version installed by Installer is v1.15.5.
Description:
Usually you don't need to modify any configuration, just install it.
The network plug-in defaults to Calico, and storage defaults to OpenEBS, which provides persistent storage based on Local Volume. If you need to customize installation parameters, such as network, storage, load balancer plug-ins, or optional feature components, specify or modify them in the conf/common.yaml file. Refer to the cluster component configuration instructions.
Supported storage types: GlusterFS, Ceph RBD, NFS, Local Volume, QingCloud platform block storage (the QingCloud public cloud limits a single node to 10 attached disks), and QingStor NeonSAN. For more information, see the storage configuration instructions.
Since the default Cluster IP subnet of the Kubernetes cluster is 10.233.0.0/18 and the default pod subnet is 10.233.64.0/18, the IP addresses of the nodes on which KubeSphere is installed must not overlap with these two ranges. In case of an address conflict, modify the kube_service_addresses or kube_pods_subnet parameters in the configuration file conf/common.yaml.
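A quick way to sanity-check a node IP against those default subnets is a small pure-bash helper. This is a sketch; the /18 prefix length matches the default CIDRs quoted above.

```shell
#!/bin/bash
# Check whether a node IP falls inside one of the default cluster CIDRs
# (10.233.0.0/18 for services, 10.233.64.0/18 for pods).
ip_to_int() {
  local IFS=.
  read -r a b c d <<<"$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
in_cidr() {  # usage: in_cidr IP NETWORK PREFIXLEN
  local ip net mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}
check_node() {
  if in_cidr "$1" 10.233.0.0 18 || in_cidr "$1" 10.233.64.0 18; then
    echo "$1 CONFLICTS with a default cluster subnet"
  else
    echo "$1 ok"
  fi
}
check_node 192.168.0.1   # prints "192.168.0.1 ok"
```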
Refer to the following steps to start the multi-node deployment.
Note: since multi-node installation time depends on network conditions and bandwidth, machine configuration, the number of nodes, and other factors, no standard time estimate is given here.
1. Enter the installation directory. It is recommended to execute the install.sh installation script as the root user:
$ cd scripts
$ ./install.sh
2. Enter the number 2 to select the second option, Multi-node mode, and start the deployment. The installer will ask whether your environment meets the prerequisites; if it does, enter "yes" to start the installation.
################################################
         KubeSphere Installer Menu
################################################
*   1) All-in-one
*   2) Multi-node
*   3) Quit
################################################
https://kubesphere.io/           2018-10-14
################################################
Please input an option: 2
3. Verify that the KubeSphere cluster deployment is successful:
(1) After the installation script finishes, when you see the following "Successful" output, KubeSphere has been installed successfully.
Successful!
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd

NOTE: Please modify the default password after login.
#####################################################
Tip: if you need to view the information above again, run the cat kubesphere/kubesphere_running command in the installation package directory.
(2) If you need public network access, forward private network port 30880 to source port 30880 in the cloud platform's port-forwarding rules, and open the source port in the firewall so that public network traffic can pass through.
(3) After a successful installation, open the corresponding URL in a browser, for example http://{public-network-IP}:30880, to reach the KubeSphere login page. Log in to the KubeSphere console with the default user name and password, and change the default password immediately after logging in. See the Getting Started guide to help you get up to speed with KubeSphere quickly.
Note: after logging in to the console, check the monitoring status of the service components under "Cluster status". Start using the platform once all components are running; usually all service components start within 15 minutes.
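The same check can be done from the command line on the master node. This is a sketch: only the kubesphere-system namespace is shown, while other namespaces may also be involved depending on which components you enabled.

```shell
# List the KubeSphere system workloads (run on the master node).
kubectl get pods -n kubesphere-system

# Block until every pod in the namespace is Ready (up to 15 minutes).
kubectl -n kubesphere-system wait --for=condition=Ready pod --all --timeout=15m
```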
UI Quick View
KubeSphere (https://github.com/kubesphere/kubesphere) is an open source, application-centric container management platform that supports deployment on any infrastructure and provides an easy-to-use UI, greatly reducing the complexity of daily development, testing, and operations. It aims to address the storage, networking, security, and usability pain points of Kubernetes itself, helping enterprises handle business scenarios such as agile development with automated monitoring and operations, end-to-end application delivery, microservice governance, multi-tenant management, multi-cluster management, service and network management, image registries, AI platforms, and edge computing.
This concludes the walkthrough of deploying a multi-node Kubernetes cluster and the KubeSphere container platform on Linux. I hope the content above is helpful. If you found this article useful, please share it so more people can see it.