2025-02-24 Update From: SLTechnology News & Howtos (shulou)
Shulou (Shulou.com) 06/03 Report
CentOS 7.5: installing and configuring Greenplum 5.10.2 (production environment)
Service introduction: Greenplum Master
The Master stores only system metadata; all business data is distributed across the Segments. As the entry point of the whole database system, the Master establishes connections with clients, parses SQL and builds the execution plan, dispatches tasks to the Segment instances, and collects their execution results.
The Standby Master keeps its catalog and transaction logs consistent with the Primary Master through a synchronization process; when the Primary Master fails, the Standby Master takes over all of the Master's work.
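As a sketch of how this is usually wired up with Greenplum's own utilities (hostnames taken from the node table below; these are cluster administration commands, not runnable outside a live cluster):

```shell
# Illustrative only; assumes a running Greenplum cluster and the gpadmin environment.
# Register gpnode616 as the standby master; it begins syncing catalog and logs:
gpinitstandby -s gpnode616.kjh.com

# After a primary-master failure, promote the standby (run on gpnode616,
# pointing at the standby's master data directory):
gpactivatestandby -d $MASTER_DATA_DIRECTORY
```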
Segment
A Greenplum cluster can contain multiple Segments, which are mainly responsible for storing and accessing the business data (figure 3). Each Segment stores a portion of the user data, but users cannot access a Segment directly; all access to Segments must go through the Master. When a query executes, every Segment first processes its own data in parallel; if data on other Segments needs to be joined, Segments exchange it over the Interconnect. The more Segment nodes there are, the more the data is spread out and the faster processing becomes. So, unlike Share-All database clusters, Greenplum's performance scales nearly linearly as Segment node servers are added.
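A hedged illustration of that data spreading (the table name t1 is invented for this example; it needs a live cluster to run): each table declares a distribution key, and the hidden gp_segment_id column shows which Segment each row landed on.

```shell
# Hypothetical example; requires a running Greenplum cluster.
psql -d postgres <<'SQL'
-- Rows are hashed across Segments by the distribution key:
CREATE TABLE t1 (id int, payload text) DISTRIBUTED BY (id);
INSERT INTO t1 SELECT g, 'row ' || g FROM generate_series(1, 1000) g;
-- Count how many rows landed on each Segment:
SELECT gp_segment_id, count(*) FROM t1 GROUP BY 1 ORDER BY 1;
SQL
```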
Interconnect
The Interconnect is the network layer of the Greenplum architecture (figure 4) and a core component of the GPDB system. By default it uses the UDP protocol, but Greenplum performs its own packet verification, so reliability is equivalent to TCP while performance is better. With the TCP protocol the number of Segment instances cannot exceed 1000; UDP has no such restriction.
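The interconnect protocol is exposed as a server parameter; as a sketch (a configuration fragment, assuming a running Greenplum 5 cluster), it can be inspected and changed with gpconfig:

```shell
# Show the current interconnect type (udpifc, the UDP-with-flow-control default):
gpconfig -s gp_interconnect_type

# Switch to TCP (subject to the ~1000 Segment instance limit), then restart:
gpconfig -c gp_interconnect_type -v tcp
gpstop -ar
```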
Installation version information:

Linux gpnode615.kjh.com 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 GNU/Linux
CentOS Linux release 7.5.1804 (Core)
Greenplum Version: 5.10.2
PostgreSQL: 8.3.23

Introduction to cluster nodes:

| hostname | business IP | role | disk | cpu/mem | network |
| cndh2321-6-11 | 172.20.6.11 | Segment | ssd 960G * 2 | 16c/64G | 10G |
| cndh2321-6-12 | 172.20.6.12 | Segment | ssd 960G * 2 | 16c/64G | 10G |
| cndh2321-6-13 | 172.20.6.13 | Segment | ssd 960G * 2 | 16c/64G | 10G |
| cndh2321-6-14 | 172.20.6.14 | Segment | ssd 960G * 2 | 16c/64G | 10G |
| cndh2322-6-15 | 172.20.6.15 | gp-master | ssd 960G * 2 | 16c/64G | 10G |
| cndh2322-6-16 | 172.20.6.16 | gp-standby | ssd 960G * 2 | 16c/64G | 10G |

Deployment cluster environment preparation. Set the hostname (run from the master node, for all nodes in the cluster):

hostnamectl set-hostname --static gpnode615.kjh.com && hostname
ssh 172.20.6.16 hostnamectl set-hostname --static gpnode616.kjh.com && ssh 172.20.6.16 hostname
ssh 172.20.6.11 hostnamectl set-hostname --static gpnode611.kjh.com && ssh 172.20.6.11 hostname
ssh 172.20.6.12 hostnamectl set-hostname --static gpnode612.kjh.com && ssh 172.20.6.12 hostname
ssh 172.20.6.13 hostnamectl set-hostname --static gpnode613.kjh.com && ssh 172.20.6.13 hostname
ssh 172.20.6.14 hostnamectl set-hostname --static gpnode614.kjh.com && ssh 172.20.6.14 hostname

Create the installation directory:
mkdir -p /workspace/gpdb
Download the packages greenplum and greenplum-cc-web:
Note: you can download only after logging in to your Pivotal Network account:
Address: https://network.pivotal.io/products/pivotal-gpdb/#/releases/158026/file_groups/1083
Internal download address:
wget dl.#kjh#.com/greenplum-db-5.10.2-rhel7-x86_64.rpm
Create a list of hosts (on the master node): cat > /workspace/gpdb/gp-all.txt
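The article breaks off before the file's contents; as a sketch, the host list would hold one hostname per line, here filled in from the node table above (adjust to your cluster):

```shell
# Build the all-hosts file used by Greenplum's cluster utilities.
mkdir -p /workspace/gpdb
cat > /workspace/gpdb/gp-all.txt <<'EOF'
gpnode615.kjh.com
gpnode616.kjh.com
gpnode611.kjh.com
gpnode612.kjh.com
gpnode613.kjh.com
gpnode614.kjh.com
EOF
wc -l < /workspace/gpdb/gp-all.txt
```

Such a file is typically passed to utilities like gpssh-exkeys -f /workspace/gpdb/gp-all.txt to exchange SSH keys across the cluster.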