Tencent TBase is a high-performance HTAP database developed by Tencent. It provides both high-performance OLTP and OLAP capabilities while guaranteeing scalable, globally consistent distributed transactions (ACID), giving users a strongly consistent distributed database service together with a high-performance data warehouse service. On the one hand it addresses the limited scalability of traditional databases, strict transactional consistency after data sharding, data security, and cross-region disaster recovery; at the same time it offers high-performance transaction processing, data governance, and mixed-workload support.
On the OLTP side, TBase combines MVCC, a global clock, 2PC and SSI to implement globally consistent distributed transactions, and introduces many performance optimizations to reduce the overhead of global transactions. Even on a small cluster, TBase can deliver more than 3 million transactions per minute of total throughput in the industry-standard TPC-C benchmark (TPM metric).
TBase already has benchmark users in many industries. Inside Tencent it supports massive-data businesses such as WeChat advertising, WeChat Pay and Tencent Maps, completing a transaction within milliseconds and sustaining WeChat Pay's 50-fold growth in transaction volume.
TBase is a relational database cluster platform that provides write reliability and data synchronization across multiple master nodes. TBase can be configured on one or more hosts, and its data is stored across multiple physical hosts. A table can be stored in one of two ways: distributed (sharded across data nodes) or replicated (a full copy on every data node). When a query is sent to TBase, it automatically dispatches the statement to the relevant data nodes and assembles the final result.
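To make the two storage modes concrete, here is a minimal sketch of creating one table of each kind. It assumes a running cluster, a coordinator reachable at the address and port planned later in this article, and the DISTRIBUTE BY SHARD / DISTRIBUTE BY REPLICATION clauses used in TBase's quick-start examples (a default node group and sharding group may need to be created first):

# Hypothetical example: connect to a coordinator and create a sharded and a replicated table
psql -h 10.215.147.158 -p 30004 -d postgres -U tbase <<'SQL'
-- distributed (sharded) table: rows are spread across the data nodes by id
CREATE TABLE t_shard (id int, val text) DISTRIBUTE BY SHARD (id);
-- replicated table: every data node holds a full copy
CREATE TABLE t_rep (id int, val text) DISTRIBUTE BY REPLICATION;
SQL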
TBase adopts a distributed, shared-nothing cluster architecture: nodes are independent of one another and each processes its own data; partial results are either aggregated at an upper layer or exchanged between nodes, and the processing units communicate over network protocols. This gives good parallelism and scalability, and it also means that ordinary x86 servers are enough to deploy a TBase cluster.
Here is a brief description of the three TBase modules:
Coordinator: coordination node (CN for short)
The CN is the business access entry point and is responsible for data distribution and query planning. Multiple CNs are peers of one another, and each presents the same database view. Functionally, a CN stores only the system's global metadata, not the actual business data.
Datanode: data node (DN for short)
Each DN stores a shard of the business data and is responsible for executing the requests distributed to it by the coordinator nodes.
GTM: Global Transaction Manager (GTM for short)
The GTM is responsible for managing the cluster's transaction information, as well as cluster-wide global objects such as sequences.
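Once a cluster is up, the coordinator's catalog exposes this topology. As a quick check (a sketch, assuming the address, port and credentials planned later in this article), the pgxc_node catalog lists the coordinators and data nodes a CN knows about:

# Hypothetical check: list the nodes registered on a coordinator
psql -h 10.215.147.158 -p 30004 -d postgres -U tbase -c "SELECT node_name, node_type, node_host, node_port FROM pgxc_node;"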
Next, let's look at how to build a TBase cluster environment from the source code.
TBase source code compilation and installation
1. Create the tbase user
Note: the tbase user must be created on every machine in the TBase cluster.
mkdir /data
useradd -d /data/tbase tbase
2. Source code acquisition
git clone https://github.com/Tencent/TBase
3. Source code compilation
cd ${SOURCECODE_PATH}
rm -rf ${INSTALL_PATH}/tbase_bin_v2.0
chmod +x configure*
./configure --prefix=${INSTALL_PATH}/tbase_bin_v2.0 --enable-user-switch --with-openssl --with-ossp-uuid CFLAGS=-g
make clean
make -sj
make install
chmod +x contrib/pgxc_ctl/make_signature
cd contrib
make -sj
make install
In the environment used in this article, the two path variables above are set as follows:
${SOURCECODE_PATH} = /data/tbase/TBase-master
${INSTALL_PATH} = /data/tbase/install
4. Cluster installation
4.1) Cluster planning
The following deploys, on two servers, a cluster with 1 GTM master, 1 GTM standby, 2 CN masters (CNs are peers of each other, so no standby CN is needed), 2 DN masters and 2 DN standbys. This is the minimum configuration that still provides disaster recovery capability.
Machine 1: 10.215.147.158
Machine 2: 10.240.138.159
The detailed cluster plan (node placement, ports and data directories) is reflected in the pgxc_ctl.conf file shown in step 4.4 below.
4.2) ssh mutual trust configuration between machines
Refer to a standard Linux ssh mutual trust (passwordless ssh) configuration guide; a minimal sketch follows.
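As a sketch (assuming the tbase user already exists on both hosts and password login is still allowed), mutual trust can be set up roughly as follows, run as tbase on each of the two machines:

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # generate a key pair without a passphrase
ssh-copy-id tbase@10.215.147.158             # trust machine 1 (including the local host itself)
ssh-copy-id tbase@10.240.138.159             # trust machine 2
ssh tbase@10.240.138.159 hostname            # verify that no password prompt appears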
4.3) configuration of environment variables
This needs to be configured on all machines in the cluster.
[tbase@TENCENT64 ~]$ vim ~/.bashrc
export TBASE_HOME=/data/tbase/install/tbase_bin_v2.0
export PATH=$TBASE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$TBASE_HOME/lib:${LD_LIBRARY_PATH}
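After editing, reload the shell environment and check that the TBase binaries are found, for example:

source ~/.bashrc
which pgxc_ctl postgres     # both should resolve under /data/tbase/install/tbase_bin_v2.0/bin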
With the above, the basic environment is in place and you can move on to cluster initialization. To make this easier, TBase provides a dedicated configuration and operation tool, pgxc_ctl, which helps users build and manage the cluster quickly. The first step is to write the IPs, ports and directories of the nodes planned above into its configuration file, pgxc_ctl.conf.
4.4) Initialize the pgxc_ctl.conf file
[tbase@TENCENT64 ~]$ mkdir /data/tbase/pgxc_ctl
[tbase@TENCENT64 ~]$ cd /data/tbase/pgxc_ctl
[tbase@TENCENT64 ~/pgxc_ctl]$ vim pgxc_ctl.conf
The following pgxc_ctl.conf was written according to the IPs, ports, data directories and binary directory planned above; in practice you only need to adjust it to your own environment.
#!/bin/bash
pgxcInstallDir=/data/tbase/install/tbase_bin_v2.0
pgxcOwner=tbase
defaultDatabase=postgres
pgxcUser=$pgxcOwner
tmpDir=/tmp
localTmpDir=$tmpDir
configBackup=n
configBackupHost=pgxc-linker
configBackupDir=$HOME/pgxc
configBackupFile=pgxc_ctl.bak

#---- GTM ----------
gtmName=gtm
gtmMasterServer=10.215.147.158
gtmMasterPort=50001
gtmMasterDir=/data/tbase/data/gtm
gtmExtraConfig=none
gtmMasterSpecificExtraConfig=none
gtmSlave=y
gtmSlaveServer=10.240.138.159
gtmSlavePort=50001
gtmSlaveDir=/data/tbase/data/gtm
gtmSlaveSpecificExtraConfig=none

#---- Coordinators ----------
coordMasterDir=/data/tbase/data/coord
coordArchLogDir=/data/tbase/data/coord_archlog
coordNames=(cn001 cn002)
coordPorts=(30004 30004)
poolerPorts=(31110 31110)
coordPgHbaEntries=(0.0.0.0/0)
coordMasterServers=(10.215.147.158 10.240.138.159)
coordMasterDirs=($coordMasterDir $coordMasterDir)
coordMaxWALsernder=2
coordMaxWALSenders=($coordMaxWALsernder $coordMaxWALsernder)
coordSlave=n
coordSlaveSync=n
coordArchLogDirs=($coordArchLogDir $coordArchLogDir)
coordExtraConfig=coordExtraConfig
cat > $coordExtraConfig
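The listing above breaks off before the datanode section, so the remaining entries (extra coordinator settings plus datanode names, ports and directories) still need to be filled in according to the same plan. Once the full pgxc_ctl.conf is written, the cluster is typically distributed and initialized with pgxc_ctl; the following is only a sketch of the usual command sequence and may need to be adapted to your environment:

# Run as the tbase user on the machine holding pgxc_ctl.conf
pgxc_ctl -c /data/tbase/pgxc_ctl/pgxc_ctl.conf deploy all     # copy the binaries to every host
pgxc_ctl -c /data/tbase/pgxc_ctl/pgxc_ctl.conf init all       # initialize and start GTM, CNs and DNs
pgxc_ctl -c /data/tbase/pgxc_ctl/pgxc_ctl.conf monitor all    # check that every component is running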