This article explains how to deploy a TiDB local cluster from 0 to 1. The content is simple and clear, and it is easy to learn and understand; please follow along to study how to deploy a TiDB local cluster from 0 to 1.
TiDB is an open source NewSQL database. Let's take a look at the official description:
TiDB is an open source distributed relational database independently designed and developed by PingCAP. It is a hybrid distributed database product that supports both online transaction processing and online analytical processing (Hybrid Transactional and Analytical Processing, HTAP). It provides important features such as horizontal scale-out and scale-in, financial-grade high availability, real-time HTAP, cloud-native distributed architecture, and compatibility with the MySQL 5.7 protocol and the MySQL ecosystem. The goal is to provide users with a one-stop OLTP (Online Transactional Processing), OLAP (Online Analytical Processing), and HTAP solution. TiDB is suitable for application scenarios that require high availability, strong consistency, and large data scale.
There are several key points here:
Distributed relational database
Compatible with the MySQL 5.7 protocol
Support for HTAP (online transaction processing and online analytical processing)
Well suited to the financial industry, with support for high availability, strong consistency, and big-data scenarios
Basic concept
Here are some important concepts in TiDB:
PD: Placement Driver, the control node of TiDB. It is responsible for overall cluster scheduling, global ID generation, and generation of the global timestamp TSO (centralized timing); in other words, the global clock is implemented on this node.
TiKV: the storage layer of TiDB. It is a distributed, transactional key-value database that satisfies ACID transactions and uses the Raft protocol to keep multiple replicas consistent; it also stores statistics.
TiFlash: the key component for HTAP. It is the columnar storage extension of TiKV, providing good isolation while also guaranteeing strong consistency.
Monitor: the TiDB monitoring component.
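To make PD's role a little more concrete, here is a minimal sketch that queries PD's HTTP API once a cluster is running; it assumes PD is listening on its default client port 2379 on the host used later in this article:
# Hedged sketch: list the PD members of a running cluster via PD's HTTP API (default client port 2379).
# Replace 192.168.59.146 with the host your PD instance actually runs on.
curl -s http://192.168.59.146:2379/pd/api/v1/members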
Experimental environment
Due to the limitations of my local resources, I use a rapid deployment approach.
There are two ways to deploy TiDB quickly:
First: use TiUP Playground to quickly deploy a local test environment
Applicable scenario: quickly deploy TiDB clusters using local Mac or stand-alone Linux environment. You can experience the basic architecture of TiDB clusters, as well as the operation of basic components such as TiDB, TiKV, PD, and monitoring.
Second: use TiUP cluster to simulate the deployment steps of a production environment on a single machine
This lets you experience the smallest full-topology TiDB cluster on a single Linux server and simulate the production deployment steps.
I use the second way here.
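For reference, the first method (not used here) essentially boils down to a single command; a minimal sketch, where pinning a specific version is my own assumption about typical usage:
# Start a throwaway local test cluster with TiUP Playground (method one).
tiup playground
# A specific version can also be requested, for example:
tiup playground v4.0.9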
According to the official documentation, TiDB has been tested extensively on CentOS 7.3, and it is recommended to deploy on CentOS 7.3 or later.
Local environment: VMware virtual machine, operating system CentOS 7.6
Start deployment
We follow the official installation steps.
1. Turn off the firewall
systemctl stop firewalld
service iptables stop
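If you also want the firewall to stay off across reboots, a small optional addition (standard systemd commands, not part of the original steps):
systemctl disable firewalld     # optional: keep firewalld from starting again at boot
systemctl status firewalld      # verify it is inactive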
2. Download and install TiUP with the following command; the output is shown below:
[root@master ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8697k  100 8697k    0     0  4316k      0  0:00:02  0:00:02 --:--:-- 4318k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
3. Install the cluster component of TiUP
Declare the global environment variable first, or you won't find the tiup command:
source .bash_profile
Execute the install cluster command:
tiup cluster
The output is as follows:
[root@master ~]# tiup cluster
The component `cluster` is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.3.1-linux-amd64.tar.gz 10.05 MiB / 10.05 MiB 100.00% 13.05 MiB/s
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  exec        Run shell command on host in the tidb cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config. Will use editor from environment variable `EDITOR`, default use vi
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable starting a TiDB cluster automatically at boot
  help        Help about any command

Flags:
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that fail immediately. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.
4. Increase the connection limit for sshd services
You need root permission to modify the following parameter in the /etc/ssh/sshd_config file:
MaxSessions 20
Restart sshd after modification:
[root@master ~]# service sshd restart
Redirecting to /bin/systemctl restart sshd.service
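If you prefer to script this step, here is a minimal sketch; it assumes the stock CentOS sshd_config where MaxSessions is still commented out:
# Set MaxSessions to 20 in place, confirm the effective value, then restart sshd.
sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config
sshd -T | grep -i maxsessions
systemctl restart sshd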
5. Edit the cluster configuration template file
This file is named topo.yaml and its contents are as follows:
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 192.168.59.146

tidb_servers:
  - host: 192.168.59.146

tikv_servers:
  - host: 192.168.59.146
    port: 20160
    status_port: 20180
    config:
      server.labels: {host: "logic-host-1"}
  # - host: 192.168.59.146
  #   port: 20161
  #   status_port: 20181
  #   config:
  #     server.labels: {host: "logic-host-2"}
  # - host: 192.168.59.146
  #   port: 20162
  #   status_port: 20182
  #   config:
  #     server.labels: {host: "logic-host-3"}

tiflash_servers:
  - host: 192.168.59.146
Here are a few points to note:
The host entries in the file are the IP address of the server where TiDB is deployed.
ssh_port defaults to 22.
The official template configures 3 nodes under tikv_servers; I have kept only 1 node here, because when multiple nodes are configured locally, only one of them starts successfully.
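Before deploying, the topology can optionally be run through TiUP's preflight checks (the check subcommand is listed in the help output above); a hedged sketch:
# Preflight-check the single-machine topology; -p prompts for the root SSH password.
tiup cluster check ./topo.yaml --user root -p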
6. Deploy the cluster
The command to deploy the cluster is as follows:
tiup cluster deploy <cluster-name> <tidb-version> ./topo.yaml --user root -p
In the command above, cluster-name is the cluster name and tidb-version is the TiDB version number, which can be checked with the tiup list tidb command. v3.1.2 is used here, and the cluster name is mytidb-cluster. The command is as follows:
tiup cluster deploy mytidb-cluster v3.1.2 ./topo.yaml --user root -p
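If you are unsure which TiDB versions are available, they can be listed first with the tiup list tidb command mentioned above; a minimal sketch:
# List the TiDB versions that TiUP can install, then pick one (e.g. v3.1.2) for the deploy command.
tiup list tidb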
The following is the log output at deployment time:
[root@master ~]# tiup cluster deploy mytidb-cluster v3.1.2 ./topo.yaml --user root -p
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster deploy mytidb-cluster v3.1.2 ./topo.yaml --user root -p
Please confirm your topology:
Cluster type:    tidb
Cluster name:    mytidb-cluster
Cluster version: v3.1.2
Type        Host            Ports                            OS/Arch       Directories
----        ----            -----                            -------       -----------
pd          192.168.59.146  2379                             linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv        192.168.59.146  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb        192.168.59.146  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash     192.168.59.146  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus  192.168.59.146  9090                             linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     192.168.59.146  3000                             linux/x86_64  /tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password:
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v3.1.2 (linux/amd64) ... Done
  - Download tikv:v3.1.2 (linux/amd64) ... Done
  - Download tidb:v3.1.2 (linux/amd64) ... Done
  - Download tiflash:v3.1.2 (linux/amd64) ... Done
  - Download prometheus:v3.1.2 (linux/amd64) ... Done
  - Download grafana:v3.1.2 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.59.146:22 ... Done
+ Copy files
  - Copy pd -> 192.168.59.146 ... Done
  - Copy tikv -> 192.168.59.146 ... Done
  - Copy tidb -> 192.168.59.146 ... Done
  - Copy tiflash -> 192.168.59.146 ... Done
  - Copy prometheus -> 192.168.59.146 ... Done
  - Copy grafana -> 192.168.59.146 ... Done
  - Copy node_exporter -> 192.168.59.146 ... Done
  - Copy blackbox_exporter -> 192.168.59.146 ... Done
+ Check status
Enabling component pd
    Enabling instance pd 192.168.59.146:2379
    Enable pd 192.168.59.146:2379 success
Enabling component node_exporter
Enabling component blackbox_exporter
Enabling component tikv
    Enabling instance tikv 192.168.59.146:20160
    Enable tikv 192.168.59.146:20160 success
Enabling component tidb
    Enabling instance tidb 192.168.59.146:4000
    Enable tidb 192.168.59.146:4000 success
Enabling component tiflash
    Enabling instance tiflash 192.168.59.146:9000
    Enable tiflash 192.168.59.146:9000 success
Enabling component prometheus
    Enabling instance prometheus 192.168.59.146:9090
    Enable prometheus 192.168.59.146:9090 success
Enabling component grafana
    Enabling instance grafana 192.168.59.146:3000
    Enable grafana 192.168.59.146:3000 success
Cluster `mytidb-cluster` deployed successfully
You can start it with command: `tiup cluster start mytidb-cluster`
7. Start the cluster
The command is as follows:
tiup cluster start mytidb-cluster
The startup success log is as follows:
[root@master ~]# tiup cluster start mytidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster start mytidb-cluster
Starting cluster mytidb-cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb-cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb-cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [ Serial ] - StartCluster
Starting component pd
    Starting instance pd 192.168.59.146:2379
    Start pd 192.168.59.146:2379 success
Starting component node_exporter
    Starting instance 192.168.59.146
    Start 192.168.59.146 success
Starting component blackbox_exporter
    Starting instance 192.168.59.146
    Start 192.168.59.146 success
Starting component tikv
    Starting instance tikv 192.168.59.146:20160
    Start tikv 192.168.59.146:20160 success
Starting component tidb
    Starting instance tidb 192.168.59.146:4000
    Start tidb 192.168.59.146:4000 success
Starting component tiflash
    Starting instance tiflash 192.168.59.146:9000
    Start tiflash 192.168.59.146:9000 success
Starting component prometheus
    Starting instance prometheus 192.168.59.146:9090
    Start prometheus 192.168.59.146:9090 success
Starting component grafana
    Starting instance grafana 192.168.59.146:3000
    Start grafana 192.168.59.146:3000 success
+ [ Serial ] - UpdateTopology: cluster=mytidb-cluster
Started cluster `mytidb-cluster` successfully
8. Access the database
Because TiDB supports access from MySQL clients, we use SQLyog to log in to TiDB: username root, empty password, address 192.168.59.146, port 4000, as shown below:
The successful login is shown in the following figure. On the left, we can see some tables that come with TiDB:
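The same connection can also be verified from the MySQL command-line client; a minimal sketch, assuming the mysql client is installed on the host:
# Connect to TiDB on port 4000 as root with an empty password and list the built-in schemas.
mysql -h 192.168.59.146 -P 4000 -u root -e "SHOW DATABASES;"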
9. Access Grafana monitoring of TiDB
The access address is as follows:
http://192.168.59.146:3000/login
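Before opening the address in a browser, Grafana's availability can be checked from the shell using its standard health endpoint (not part of the original steps); a small sketch:
# Should return a short JSON payload reporting "database": "ok" when Grafana is healthy.
curl -s http://192.168.59.146:3000/api/health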
Initial user name / password: admin/admin. Log in and change the password. After success, the page is as follows:
10. Dashboard
TiDB v3.x does not include the Dashboard; it was added in v4.0. The access address is as follows:
http://192.168.59.146:2379/dashboard
11. View the list of clusters
Command: tiup cluster list, the result is as follows:
[root@master /]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster list
Name            User  Version  Path                                                  PrivateKey
----            ----  -------  ----                                                  ----------
mytidb-cluster  tidb  v3.1.2   /root/.tiup/storage/cluster/clusters/mytidb-cluster  /root/.tiup/storage/cluster/clusters/mytidb-cluster/ssh/id_rsa
12. View the cluster topology
The command is as follows:
tiup cluster display mytidb-cluster
After entering the command, the output of my local cluster is as follows:
[root@master /]# tiup cluster display mytidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster display mytidb-cluster
Cluster type:    tidb
Cluster name:    mytidb-cluster
Cluster version: v3.1.2
SSH type:        builtin
ID                    Role        Host            Ports  OS/Arch       Status  Data Dir                    Deploy Dir
--                    ----        ----            -----  -------       ------  --------                    ----------
192.168.59.146:3000   grafana     192.168.59.146  3000   linux/x86_64  Up      -                           /tidb-deploy/grafana-3000
192.168.59.146:2379   pd          192.168.59.146  2379   linux/x86_64  Up|L    /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.59.146:9090   prometheus  192.168.59.146  9090   linux/x86_64  Up      /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.59.146:4000   tidb        192.168.59.146  4000   linux/x86_64  Up      -                           /tidb-deploy/tidb-4000
192.168.59.146:9000   tiflash     192.168.59.146  9000   linux/x86_64  Up      /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.59.146:20160  tikv        192.168.59.146  20160  linux/x86_64  Up      /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
Total nodes: 6
Problems encountered
When TiDB v4.0.9 is installed, the deployment succeeds but startup fails. With 3 tikv nodes configured in topo.yaml, only one tikv starts successfully. The log is as follows:
[root@master ~]# tiup cluster start mytidb-cluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.3.1/tiup-cluster start mytidb-cluster
Starting cluster mytidb-cluster...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb-cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb-cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.146
+ [ Serial ] - StartCluster
Starting component pd
    Starting instance pd 192.168.59.146:2379
    Start pd 192.168.59.146:2379 success
Starting component node_exporter
    Starting instance 192.168.59.146
    Start 192.168.59.146 success
Starting component blackbox_exporter
    Starting instance 192.168.59.146
    Start 192.168.59.146 success
Starting component tikv
    Starting instance tikv 192.168.59.146:20162
    Starting instance tikv 192.168.59.146:20160
    Starting instance tikv 192.168.59.146:20161
    Start tikv 192.168.59.146:20162 success
Error: failed to start tikv: failed to start: tikv 192.168.59.146:20161, please check the instance's log (/tidb-deploy/tikv-20161/log) for more detail.: timed out waiting for port 20161 to be started after 2m0s
Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2021-01-05-19-58-46.log.
Error: run `/root/.tiup/components/cluster/v1.3.1/tiup-cluster` (wd:/root/.tiup/data/SLGrLJI) failed: exit status 1
Checking the log file /tidb-deploy/tikv-20161/log/tikv.log shows that a file or directory cannot be found:
[2021/01/06 05:48:44.231 -05:00] [FATAL] [lib.rs:482] ["called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: \"No such file or directory\" }"] [backtrace="stack backtrace:
   0: tikv_util::set_panic_hook::{{closure}}
             at components/tikv_util/src/lib.rs:481
   1: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:475
   2: rust_begin_unwind
             at src/libstd/panicking.rs:375
   3: core::panicking::panic_fmt
             at src/libcore/panicking.rs:84
   4: core::result::unwrap_failed
             at src/libcore/result.rs:1188
   5: core::result::Result::unwrap
             at /rustc/0de96d37fbcc54978458c18f5067cd9817669bc8/src/libcore/result.rs:956
      cmd::server::TiKVServer::init_fs
             at cmd/src/server.rs:310
      cmd::server::run_tikv
             at cmd/src/server.rs:95
   6: tikv_server::main
             at cmd/src/bin/tikv-server.rs:166
   7: std::rt::lang_start::{{closure}}
             at /rustc/0de96d37fbcc54978458c18f5067cd9817669bc8/src/libstd/rt.rs:67
   8: main
   9: __libc_start_main
  10:
"] [location=src/libcore/result.rs:1188] [thread_name=main]
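When this happens, a hedged troubleshooting sketch is to read the failing instance's own log and confirm that the data and deploy directories it expects were actually created:
# Inspect the failing TiKV instance's log and the directories laid out by the deployment.
tail -n 100 /tidb-deploy/tikv-20161/log/tikv.log
ls -l /tidb-data/ /tidb-deploy/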
If only one node is configured, startup still fails; the second half of the startup log is shown below:
Starting component pd
    Starting instance pd 192.168.59.146:2379
    Start pd 192.168.59.146:2379 success
Starting component node_exporter
    Starting instance 192.168.59.146
    Start 192.168.59.146 success
Starting component blackbox_exporter
    Starting instance 192.168.59.146
    Start 192.168.59.146 success
Starting component tikv
    Starting instance tikv 192.168.59.146:20160
    Start tikv 192.168.59.146:20160 success
Starting component tidb
    Starting instance tidb 192.168.59.146:4000
    Start tidb 192.168.59.146:4000 success
Starting component tiflash
    Starting instance tiflash 192.168.59.146:9000
Error: failed to start tiflash: failed to start: tiflash 192.168.59.146:9000, please check the instance's log (/tidb-deploy/tiflash-9000/log) for more detail.: timed out waiting for port 9000 to be started after 2m0s
Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2021-01-06-20-02-13.log.
The log in /tidb-deploy/tiflash-9000/log is as follows:
[2021/01/06 20:06:26.207 -05:00] [INFO] [mod.rs:335] ["starting working thread"] [worker=region-collector-worker]
[2021/01/06 20:06:27.130 -05:00] [FATAL] [lib.rs:482] ["called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: \"No such file or directory\" }"] [backtrace="stack backtrace:
   0: tikv_util::set_panic_hook::{{closure}}
   1: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:475
   2: rust_begin_unwind
             at src/libstd/panicking.rs:375
   3: core::panicking::panic_fmt
             at src/libcore/panicking.rs:84
   4: core::result::unwrap_failed
             at src/libcore/result.rs:1188
   5: cmd::server::run_tikv
   6: run_proxy
   7: operator()
             at /home/jenkins/agent/workspace/optimization-build-tidb-linux-amd/tics/dbms/src/Server/Server.cpp:415
   8: execute_native_thread_routine
             at ../libstdc++-v3/src/c++11/thread.cc:83
   9: start_thread
  10: __clone
"] [location=src/libcore/result.rs:1188] [thread_name=]
I also tried v4.0.1 and hit the same problem; it reported the same file-not-found error.
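When retrying between attempts, one way to get back to a clean slate is the destroy subcommand listed in the help output above; a hedged sketch (destroy wipes the cluster's data, so only use it on a disposable test machine):
# Tear the test cluster down completely, then redeploy and start it again.
tiup cluster destroy mytidb-cluster
tiup cluster deploy mytidb-cluster v3.1.2 ./topo.yaml --user root -p
tiup cluster start mytidb-cluster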
Thank you for reading. The above is the content of "how to deploy a TiDB local cluster from 0 to 1". After studying this article, I believe you have a deeper understanding of how to deploy a TiDB local cluster from 0 to 1; the specific usage still needs to be verified in practice.