Greenplum Cluster Installation and Node Expansion in a Production Environment
1. Prepare the environment
1.1 Cluster introduction
System environment: centos6.5
Database version: greenplum-db-4.3.3.1-build-1-RHEL5-x86_64.zip
The cluster initially consists of two machines:
[root@BI-greenplum-01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.201 BI-greenplum-01
192.168.10.202 BI-greenplum-02
1.2 Create the gpadmin user and group (on every machine)
[root@BI-greenplum-01 ~]# groupadd -g 530 gpadmin
[root@BI-greenplum-01 ~]# useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin
[root@BI-greenplum-01 ~]# passwd gpadmin
Changing password for user gpadmin.
New password:
BAD PASSWORD: it is too simplistic/systematic
Retype new password:
passwd: all authentication tokens updated successfully.
1.3 Configure kernel parameters by appending the following to /etc/sysctl.conf:
vi /etc/sysctl.conf
# By greenplum
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 1
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.sem = 250 64000 100 512
kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2
net.ipv4.conf.all.arp_filter = 1
The values above can be adjusted to match your system's resources.
Apply the parameters manually:
[root@BI-greenplum-01 ~]# sysctl -p
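The shared-memory values above are fixed examples; one common starting point is to size kernel.shmmax relative to physical RAM. A minimal sketch (the half-of-RAM ratio is an assumption, not a rule from this article; tune for your workload):

```shell
# Suggest kernel.shmmax as half of physical RAM.
# Reads MemTotal (in kB) from /proc/meminfo and converts to bytes.
mem_bytes=$(awk '/^MemTotal:/ {print $2 * 1024}' /proc/meminfo)
shmmax=$((mem_bytes / 2))
echo "kernel.shmmax = ${shmmax}"
```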
Add the following to /etc/security/limits.conf:
[root@BI-greenplum-01 ~]# vi /etc/security/limits.conf
# End of file
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
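After logging in again as gpadmin, the new limits can be checked with ulimit; once the limits.conf entries above are in effect, the printed values should match them (65536 open files, 131072 processes):

```shell
# Show the current soft limits for open files (-n) and user processes (-u);
# compare the output against the values configured in limits.conf.
ulimit -Sn
ulimit -Su
```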
2.greenplum installation
2.1 Install dependency packages (also required on any node added later)
yum -y install ed openssh-clients gcc gcc-c++ make automake autoconf libtool perl rsync coreutils glib2 lrzsz sysstat e4fsprogs xfsprogs ntp readline-devel zlib zlib-devel unzip
Note: greenplum depends on ed; without it, initialization fails.
2.2 Prepare the installation file (on the master, 192.168.10.201)
[root@BI-greenplum-01 ~]# unzip greenplum-db-4.3.3.1-build-1-RHEL5-x86_64.zip
[root@BI-greenplum-01 ~]# ./greenplum-db-4.3.3.1-build-1-RHEL5-x86_64.bin
2.3 Change ownership of the installed directories
[root@BI-greenplum-01 ~]# cd /usr/local/
[root@BI-greenplum-01 local]# chown -R gpadmin:gpadmin /usr/local/greenplum-db*
2.4 Package the installation and copy it to the other machines
[root@BI-greenplum-01 local] # tar zcvf gp.tar.gz greenplum-db*
[root@BI-greenplum-01 local] # scp gp.tar.gz BI-greenplum-02:/usr/local/
2.5 extract files on other machines
[root@BI-greenplum-02 ~]# cd /usr/local/
[root@BI-greenplum-02 local]# ls
bin etc games gp.tar.gz include lib lib64 libexec sbin share src
[root@BI-greenplum-02 local]# tar zxvf gp.tar.gz
2.6 configure environment variables on each machine
[root@BI-greenplum-01 local]# su - gpadmin
[gpadmin@BI-greenplum-01 ~]$ vi .bash_profile
source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/app/master/gpseg-1
export PGPORT=5432
export PGDATABASE=trjdb
Apply the environment variables:
[gpadmin@BI-greenplum-01 ~]$ source .bash_profile
2.7 Configure passwordless SSH
[gpadmin@BI-greenplum-01 ~]$ cat all_hosts_file
BI-greenplum-01
BI-greenplum-02
[gpadmin@BI-greenplum-01 ~]$ gpssh-exkeys -f all_hosts_file
[STEP 1 of 5] create local ID and authorize on local host
[STEP 2 of 5] keyscan all hosts and update known_hosts file
[STEP 3 of 5] authorize current user on remote hosts
... Send to BI-greenplum-02
***
* Enter password for BI-greenplum-02:
[STEP 4 of 5] determine common authentication file content
[STEP 5 of 5] copy authentication files to all remote hosts
... Finished key exchange with BI-greenplum-02
[INFO] completed successfully
2.8 Create the data directories (on every machine)
[root@BI-greenplum-01 ~]# mkdir /app
[root@BI-greenplum-01 ~]# chown -R gpadmin:gpadmin /app
On the master (192.168.10.201):
[gpadmin@BI-greenplum-01 ~]$ gpssh -f all_hosts_file
Note: command history unsupported on this machine...
=> mkdir /app/master
[BI-greenplum-02]
[BI-greenplum-01]
=> mkdir -p /app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4
[BI-greenplum-02]
[BI-greenplum-01]
=> mkdir -p /app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4
[BI-greenplum-02]
[BI-greenplum-01]
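The gpssh session above creates one data directory per primary (gp1-gp4) and per mirror (gpm1-gpm4) on every host. The same layout can be sketched as a local loop (using /tmp/app_demo here so it runs without root; the real install uses /app):

```shell
# Create the master directory plus four primary (gp*) and four mirror (gpm*)
# data directories, mirroring the layout used in this article.
base=/tmp/app_demo          # demo path; the cluster uses /app
mkdir -p "$base/master"
for i in 1 2 3 4; do
  mkdir -p "$base/data/gp$i" "$base/data/gpm$i"
done
ls "$base/data"
```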
[gpadmin@BI-greenplum-01 ~] $vi gpinitsystem_config
# FILE NAME: gpinitsystem_config
# Configuration file needed by the gpinitsystem
# # #
# REQUIRED PARAMETERS
# # #
# Name of this Greenplum system enclosed in quotes.
ARRAY_NAME="EMC Greenplum DW"
# Naming convention for utility-generated data directories.
SEG_PREFIX=gpseg
# Base number by which primary segment port numbers
# are calculated.
PORT_BASE=40000
# File system location (s) where primary segment data directories
# will be created. The number of locations in the list dictate
# the number of primary segments that will get created per
# physical host (if multiple addresses for a host are listed in
# the hostfile, the number of segments will be spread evenly across
# the specified interface addresses).
declare -a DATA_DIRECTORY=(/app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4)
# OS-configured hostname or IP address of the master host.
MASTER_HOSTNAME=BI-greenplum-01
# File system location where the master data directory
# will be created.
MASTER_DIRECTORY=/app/master
# Port number for the master instance.
MASTER_PORT=5432
# Shell utility used to connect to remote hosts.
TRUSTED_SHELL=ssh
# Maximum log file segments between automatic WAL checkpoints.
CHECK_POINT_SEGMENTS=8
# Default server-side character set encoding.
ENCODING=UNICODE
# # #
# OPTIONAL MIRROR PARAMETERS
# # #
# Base number by which mirror segment port numbers
# are calculated.
MIRROR_PORT_BASE=50000
# Base number by which primary file replication port
# numbers are calculated.
REPLICATION_PORT_BASE=41000
# Base number by which mirror file replication port
# numbers are calculated.
MIRROR_REPLICATION_PORT_BASE=51000
# File system location (s) where mirror segment data directories
# will be created. The number of mirror locations must equal the
# number of primary locations as specified in the
# DATA_DIRECTORY parameter.
declare -a MIRROR_DATA_DIRECTORY=(/app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4)
# # #
# OTHER OPTIONAL PARAMETERS
# # #
# Create a database of this name after initialization.
DATABASE_NAME=trjdb
# Specify the location of the host address file here instead of
# with the -h option of gpinitsystem.
MACHINE_LIST_FILE=/home/gpadmin/seg_hosts_file
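With four entries in DATA_DIRECTORY, gpinitsystem creates four primary segments per host and numbers their ports upward from the base values in the config. A quick enumeration of the resulting ports (derived from the PORT_BASE and MIRROR_PORT_BASE values above):

```shell
# Primaries land on PORT_BASE..PORT_BASE+3 and mirrors on
# MIRROR_PORT_BASE..MIRROR_PORT_BASE+3, one pair per data directory.
PORT_BASE=40000
MIRROR_PORT_BASE=50000
for i in 0 1 2 3; do
  echo "segment $i: primary port $((PORT_BASE + i)), mirror port $((MIRROR_PORT_BASE + i))"
done
```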
Create the segment host list referenced by MACHINE_LIST_FILE:
[gpadmin@BI-greenplum-01 ~] $vi seg_hosts_file
BI-greenplum-01
BI-greenplum-02
3. Initialize the cluster
[gpadmin@BI-greenplum-01 ~]$ gpinitsystem -c gpinitsystem_config -s BI-greenplum-02
The -s flag makes BI-greenplum-02 the standby master. Once initialization completes, connect:
[gpadmin@BI-greenplum-01 ~]$ psql -d trjdb
psql (8.2.15)
Type "help" for help.
trjdb=#
View the cluster status:
select a.dbid,a.content,a.role,a.port,a.hostname,b.fsname,c.fselocation from gp_segment_configuration a,pg_filespace b,pg_filespace_entry c where a.dbid=c.fsedbid and b.oid=c.fsefsoid order by content;
Adding Machines and Data Nodes to Greenplum
1. Add two hosts (192.168.10.203 and 192.168.10.204)
Update /etc/hosts (identically on every machine):
[root@BI-greenplum-01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.201 BI-greenplum-01
192.168.10.202 BI-greenplum-02
192.168.10.203 BI-greenplum-03
192.168.10.204 BI-greenplum-04
2. Create the gpadmin user and group (on the added machines)
[root@BI-greenplum-03 ~]# groupadd -g 530 gpadmin
[root@BI-greenplum-03 ~]# useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin
[root@BI-greenplum-03 ~]# passwd gpadmin
Changing password for user gpadmin.
New password:
BAD PASSWORD: it is too simplistic/systematic
Retype new password:
passwd: all authentication tokens updated successfully.
3. Configure kernel parameters (on the added machines, same settings as before)
[root@BI-greenplum-03 ~]# vi /etc/sysctl.conf
# By greenplum
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 1
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.sem = 250 64000 100 512
kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2
net.ipv4.conf.all.arp_filter = 1
Apply the kernel parameters:
[root@BI-greenplum-03 ~]# sysctl -p
4. Raise the open-file and process limits
[root@BI-greenplum-03 ~]# vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
5. Install the dependency packages
yum -y install ed openssh-clients gcc gcc-c++ make automake autoconf libtool perl rsync coreutils glib2 lrzsz sysstat e4fsprogs xfsprogs ntp readline-devel zlib zlib-devel unzip
6. Copy the previous compressed package gp.tar.gz to the added node
[root@BI-greenplum-01 local] # scp gp.tar.gz BI-greenplum-03:/usr/local/
[root@BI-greenplum-01 local] # scp gp.tar.gz BI-greenplum-04:/usr/local/
Extract on each new node:
[root@BI-greenplum-03 local] # tar zxvf gp.tar.gz
[root@BI-greenplum-04 local] # tar zxvf gp.tar.gz
7. Create the directories on each added node
[root@BI-greenplum-03 local]# mkdir -p /app/master
[root@BI-greenplum-04 local]# mkdir -p /app/master
[root@BI-greenplum-03 local]# mkdir -p /app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4
[root@BI-greenplum-04 local]# mkdir -p /app/data/gp1 /app/data/gp2 /app/data/gp3 /app/data/gp4
[root@BI-greenplum-03 local]# mkdir -p /app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4
[root@BI-greenplum-04 local]# mkdir -p /app/data/gpm1 /app/data/gpm2 /app/data/gpm3 /app/data/gpm4
[root@BI-greenplum-03 local]# chown -R gpadmin:gpadmin /app
[root@BI-greenplum-04 local]# chown -R gpadmin:gpadmin /app
[root@BI-greenplum-03 local]# chmod -R 700 /app
[root@BI-greenplum-04 local]# chmod -R 700 /app
8. Configure environment variables (on the added machines)
[root@BI-greenplum-03 local]# su - gpadmin
[gpadmin@BI-greenplum-03 ~]$ vi .bash_profile
source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/app/master/gpseg-1
export PGPORT=5432
export PGDATABASE=trjdb
Apply the environment variables:
[gpadmin@BI-greenplum-03 ~]$ source .bash_profile
9. Re-run the key exchange from BI-greenplum-01
[root@BI-greenplum-01 local]# su - gpadmin
[gpadmin@BI-greenplum-01 ~] $vi all_hosts_file
BI-greenplum-01
BI-greenplum-02
BI-greenplum-03
BI-greenplum-04
[gpadmin@BI-greenplum-01 ~]$ gpssh-exkeys -f all_hosts_file
[STEP 1 of 5] create local ID and authorize on local host
... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped
[STEP 2 of 5] keyscan all hosts and update known_hosts file
[STEP 3 of 5] authorize current user on remote hosts
... Send to BI-greenplum-02
... Send to BI-greenplum-03
***
* Enter password for BI-greenplum-03:
... Send to BI-greenplum-04
[STEP 4 of 5] determine common authentication file content
[STEP 5 of 5] copy authentication files to all remote hosts
... Finished key exchange with BI-greenplum-02
... Finished key exchange with BI-greenplum-03
... Finished key exchange with BI-greenplum-04
[INFO] completed successfully
10. Initialize the expansion (on the master)
[gpadmin@BI-greenplum-01 ~] $vi hosts_expand
BI-greenplum-03
BI-greenplum-04
Adjust the host list to your own situation, then run:
[gpadmin@BI-greenplum-01 ~]$ gpexpand -f hosts_expand
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.3.1 build 1'
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.3.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2'
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state
System Expansion is used to add segments to an existing GPDB array.
gpexpand did not detect a System Expansion that is in progress.
Before initiating a System Expansion, you need to provision and burn-in
the new hardware. Please be sure to run gpcheckperf/gpcheckos to make
sure the new hardware is working properly.
Please refer to the Admin Guide for more information.
Would you like to initiate a new System Expansion Yy|Nn (default=N):
> y
You must now specify a mirroring strategy for the new hosts. Spread mirroring places
a given hosts mirrored segments each on a separate host. You must be
adding more hosts than the number of segments per host to use this.
Grouped mirroring places all of a given hosts segments on a single
mirrored host. You must be adding at least 2 hosts in order to use this.
What type of mirroring strategy would you like?
 spread|grouped (default=grouped):
>
By default, new hosts are configured with the same number of primary
segments as existing hosts. Optionally, you can increase the number
of segments per host.
For example, if existing hosts have two primary segments, entering a value
of 2 will initialize two additional segments on existing hosts, and four
segments on new hosts. In addition, mirror segments will be added for
these new primary segments if mirroring is enabled.
How many new primary segments per host do you want to add? (default=0):
> 4
Enter new primary data directory 1:
> /app/data/gp1
Enter new primary data directory 2:
> /app/data/gp2
Enter new primary data directory 3:
> /app/data/gp3
Enter new primary data directory 4:
> /app/data/gp4
Enter new mirror data directory 1:
> /app/data/gpm1
Enter new mirror data directory 2:
> /app/data/gpm2
Enter new mirror data directory 3:
> /app/data/gpm3
Enter new mirror data directory 4:
> /app/data/gpm4
Generating configuration file...
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Generating input file...
Input configuration files were written to 'gpexpand_inputfile_20171208_005718' and 'None'.
Please review the file and make sure that it is correct then re-run
with: gpexpand -i gpexpand_inputfile_20171208_005718 -D trjdb
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Exiting...
gpexpand has generated the configuration file gpexpand_inputfile_20171208_005718; review and, if necessary, modify it before running the expansion against it. Keep a backup of the generated file before editing.
[gpadmin@BI-greenplum-01 ~] $cat gpexpand_inputfile_20171208_005718
BI-greenplum-03:BI-greenplum-03:40000:/app/data/gp1/gpseg8:19:8:p:41000
BI-greenplum-04:BI-greenplum-04:50000:/app/data/gpm1/gpseg8:31:8:m:51000
BI-greenplum-03:BI-greenplum-03:40001:/app/data/gp2/gpseg9:20:9:p:41001
BI-greenplum-04:BI-greenplum-04:50001:/app/data/gpm2/gpseg9:32:9:m:51001
BI-greenplum-03:BI-greenplum-03:40002:/app/data/gp3/gpseg10:21:10:p:41002
BI-greenplum-04:BI-greenplum-04:50002:/app/data/gpm3/gpseg10:33:10:m:51002
BI-greenplum-03:BI-greenplum-03:40003:/app/data/gp4/gpseg11:22:11:p:41003
BI-greenplum-04:BI-greenplum-04:50003:/app/data/gpm4/gpseg11:34:11:m:51003
BI-greenplum-04:BI-greenplum-04:40000:/app/data/gp1/gpseg12:23:12:p:41000
BI-greenplum-03:BI-greenplum-03:50000:/app/data/gpm1/gpseg12:27:12:m:51000
BI-greenplum-04:BI-greenplum-04:40001:/app/data/gp2/gpseg13:24:13:p:41001
BI-greenplum-03:BI-greenplum-03:50001:/app/data/gpm2/gpseg13:28:13:m:51001
BI-greenplum-04:BI-greenplum-04:40002:/app/data/gp3/gpseg14:25:14:p:41002
BI-greenplum-03:BI-greenplum-03:50002:/app/data/gpm3/gpseg14:29:14:m:51002
BI-greenplum-04:BI-greenplum-04:40003:/app/data/gp4/gpseg15:26:15:p:41003
BI-greenplum-03:BI-greenplum-03:50003:/app/data/gpm4/gpseg15:30:15:m:51003
BI-greenplum-01:BI-greenplum-01:40004:/app/data/gp1/gpseg16:35:16:p:41004
BI-greenplum-02:BI-greenplum-02:50004:/app/data/gpm1/gpseg16:55:16:m:51004
BI-greenplum-01:BI-greenplum-01:40005:/app/data/gp2/gpseg17:36:17:p:41005
BI-greenplum-02:BI-greenplum-02:50005:/app/data/gpm2/gpseg17:56:17:m:51005
BI-greenplum-01:BI-greenplum-01:40006:/app/data/gp3/gpseg18:37:18:p:41006
BI-greenplum-02:BI-greenplum-02:50006:/app/data/gpm3/gpseg18:57:18:m:51006
BI-greenplum-01:BI-greenplum-01:40007:/app/data/gp4/gpseg19:38:19:p:41007
BI-greenplum-02:BI-greenplum-02:50007:/app/data/gpm4/gpseg19:58:19:m:51007
BI-greenplum-02:BI-greenplum-02:40004:/app/data/gp1/gpseg20:39:20:p:41004
BI-greenplum-03:BI-greenplum-03:50004:/app/data/gpm1/gpseg20:59:20:m:51004
BI-greenplum-02:BI-greenplum-02:40005:/app/data/gp2/gpseg21:40:21:p:41005
BI-greenplum-03:BI-greenplum-03:50005:/app/data/gpm2/gpseg21:60:21:m:51005
BI-greenplum-02:BI-greenplum-02:40006:/app/data/gp3/gpseg22:41:22:p:41006
BI-greenplum-03:BI-greenplum-03:50006:/app/data/gpm3/gpseg22:61:22:m:51006
BI-greenplum-02:BI-greenplum-02:40007:/app/data/gp4/gpseg23:42:23:p:41007
BI-greenplum-03:BI-greenplum-03:50007:/app/data/gpm4/gpseg23:62:23:m:51007
BI-greenplum-03:BI-greenplum-03:40004:/app/data/gp1/gpseg24:43:24:p:41004
BI-greenplum-04:BI-greenplum-04:50004:/app/data/gpm1/gpseg24:63:24:m:51004
BI-greenplum-03:BI-greenplum-03:40005:/app/data/gp2/gpseg25:44:25:p:41005
BI-greenplum-04:BI-greenplum-04:50005:/app/data/gpm2/gpseg25:64:25:m:51005
BI-greenplum-03:BI-greenplum-03:40006:/app/data/gp3/gpseg26:45:26:p:41006
BI-greenplum-04:BI-greenplum-04:50006:/app/data/gpm3/gpseg26:65:26:m:51006
BI-greenplum-03:BI-greenplum-03:40007:/app/data/gp4/gpseg27:46:27:p:41007
BI-greenplum-04:BI-greenplum-04:50007:/app/data/gpm4/gpseg27:66:27:m:51007
BI-greenplum-04:BI-greenplum-04:40004:/app/data/gp1/gpseg28:47:28:p:41004
BI-greenplum-01:BI-greenplum-01:50004:/app/data/gpm1/gpseg28:51:28:m:51004
BI-greenplum-04:BI-greenplum-04:40005:/app/data/gp2/gpseg29:48:29:p:41005
BI-greenplum-01:BI-greenplum-01:50005:/app/data/gpm2/gpseg29:52:29:m:51005
BI-greenplum-04:BI-greenplum-04:40006:/app/data/gp3/gpseg30:49:30:p:41006
BI-greenplum-01:BI-greenplum-01:50006:/app/data/gpm3/gpseg30:53:30:m:51006
BI-greenplum-04:BI-greenplum-04:40007:/app/data/gp4/gpseg31:50:31:p:41007
BI-greenplum-01:BI-greenplum-01:50007:/app/data/gpm4/gpseg31:54:31:m:51007
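Each line of the gpexpand input file is colon-separated; judging from the listing above, the fields are hostname:address:port:datadir:dbid:content:role:replication_port. Splitting one sample line (copied from the file) makes the fields explicit:

```shell
# Split one gpexpand input line into its fields
# (hostname:address:port:datadir:dbid:content:role:replication_port);
# role 'p' means primary, 'm' means mirror.
line='BI-greenplum-03:BI-greenplum-03:40000:/app/data/gp1/gpseg8:19:8:p:41000'
IFS=: read -r host addr port datadir dbid content role repl <<< "$line"
echo "host=$host port=$port datadir=$datadir dbid=$dbid content=$content role=$role repl_port=$repl"
```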
After reviewing (and, if needed, modifying) the file, run gpexpand with it:
[gpadmin@BI-greenplum-01 ~]$ gpexpand -i gpexpand_inputfile_20171208_005718 -D trjdb
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.3.1 build 1'
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.3.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2'
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Readying Greenplum Database for a new expansion
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database trjdb for unalterable tables...
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database postgres for unalterable tables...
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database template1 for unalterable tables...
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database trjdb for tables with unique indexes...
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database postgres for tables with unique indexes...
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking database template1 for tables with unique indexes...
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Syncing Greenplum Database extensions
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-The packages on BI-greenplum-03 are consistent.
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-The packages on BI-greenplum-04 are consistent.
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Creating segment template
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-VACUUM FULL on the catalog tables
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting copy of segment dbid 1 to location /app/master/gpexpand_12082017_23572
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Copying postgresql.conf from existing segment into template
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Copying pg_hba.conf from existing segment into template
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Adding new segments into template pg_hba.conf
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Creating schema tar file
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Distributing template tar file to new hosts
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring new segments (primary)
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring new segments (mirror)
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Backing up pg_hba.conf file on original segments
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Copying new pg_hba.conf file to original segments
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring original segments
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Cleaning up temporary template files
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting Greenplum Database in restricted mode
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Stopping database
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking if Transaction filespace was moved
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Checking if Temporary filespace was moved
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Configuring new segment filespaces
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Cleaning up databases in new segments.
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting master in utility mode
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Stopping master in utility mode
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting Greenplum Database in restricted mode
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Creating expansion schema
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database trjdb
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database postgres
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database template1
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Stopping Greenplum Database
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting Greenplum Database
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Starting new mirror segment synchronization
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-************************************************
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Initialization of the system expansion complete.
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-To begin table expansion onto the new segments
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-rerun gpexpand
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-************************************************
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Exiting...
The output above indicates the new segments were added successfully.
If the step fails, start the database in restricted mode and roll the expansion back:
gpstart -R
gpexpand --rollback -D trjdb
gpstart -a
Then fix the problem and retry until the expansion succeeds.
Finally, redistribute existing tables onto the new segments by re-running gpexpand with a duration budget:
[gpadmin@BI-greenplum-01 ~]$ gpexpand -d 60:00:00
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.3.1 build 1'
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.3.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2'
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-EXPANSION COMPLETED SUCCESSFULLY
20171208:gpexpand:BI-greenplum-01:gpadmin-[INFO]:-Exiting...
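The -d argument passed to gpexpand above is a maximum duration in hh:mm:ss form (60:00:00 is a 60-hour budget for table redistribution). A tiny converter to sanity-check such a budget in seconds:

```shell
# Convert an hh:mm:ss duration (the format gpexpand -d expects) to seconds.
# 10# forces base-10 so leading zeros are not read as octal.
dur='60:00:00'
IFS=: read -r h m s <<< "$dur"
echo $(( 10#$h * 3600 + 10#$m * 60 + 10#$s ))   # prints 216000
```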
Check the node status; the segments on BI-greenplum-03 and BI-greenplum-04 are the new additions.
[gpadmin@BI-greenplum-01 ~]$ psql -d trjdb
psql (8.2.15)
Type "help" for help.
trjdb=# select a.dbid,a.content,a.role,a.port,a.hostname,b.fsname,c.fselocation from gp_segment_configuration a,pg_filespace b,pg_filespace_entry c where a.dbid=c.fsedbid and b.oid=c.fsefsoid order by content;
 dbid | content | role | port  | hostname        | fsname    | fselocation
------+---------+------+-------+-----------------+-----------+------------------------
    1 |      -1 | p    |  5432 | BI-greenplum-01 | pg_system | /app/master/gpseg-1
   18 |      -1 | m    |  5432 | BI-greenplum-02 | pg_system | /app/master/gpseg-1
   10 |       0 | m    | 50000 | BI-greenplum-02 | pg_system | /app/data/gpm1/gpseg0
    2 |       0 | p    | 40000 | BI-greenplum-01 | pg_system | /app/data/gp1/gpseg0
    3 |       1 | p    | 40001 | BI-greenplum-01 | pg_system | /app/data/gp2/gpseg1
   11 |       1 | m    | 50001 | BI-greenplum-02 | pg_system | /app/data/gpm2/gpseg1
    4 |       2 | p    | 40002 | BI-greenplum-01 | pg_system | /app/data/gp3/gpseg2
   12 |       2 | m    | 50002 | BI-greenplum-02 | pg_system | /app/data/gpm3/gpseg2
    5 |       3 | p    | 40003 | BI-greenplum-01 | pg_system | /app/data/gp4/gpseg3
   13 |       3 | m    | 50003 | BI-greenplum-02 | pg_system | /app/data/gpm4/gpseg3
    6 |       4 | p    | 40000 | BI-greenplum-02 | pg_system | /app/data/gp1/gpseg4
   14 |       4 | m    | 50000 | BI-greenplum-01 | pg_system | /app/data/gpm1/gpseg4
   15 |       5 | m    | 50001 | BI-greenplum-01 | pg_system | /app/data/gpm2/gpseg5
    7 |       5 | p    | 40001 | BI-greenplum-02 | pg_system | /app/data/gp2/gpseg5
   16 |       6 | m    | 50002 | BI-greenplum-01 | pg_system | /app/data/gpm3/gpseg6
    8 |       6 | p    | 40002 | BI-greenplum-02 | pg_system | /app/data/gp3/gpseg6
   17 |       7 | m    | 50003 | BI-greenplum-01 | pg_system | /app/data/gpm4/gpseg7
    9 |       7 | p    | 40003 | BI-greenplum-02 | pg_system | /app/data/gp4/gpseg7
   31 |       8 | m    | 50000 | BI-greenplum-04 | pg_system | /app/data/gpm1/gpseg8
   19 |       8 | p    | 40000 | BI-greenplum-03 | pg_system | /app/data/gp1/gpseg8
   32 |       9 | m    | 50001 | BI-greenplum-04 | pg_system | /app/data/gpm2/gpseg9
   20 |       9 | p    | 40001 | BI-greenplum-03 | pg_system | /app/data/gp2/gpseg9
   33 |      10 | m    | 50002 | BI-greenplum-04 | pg_system | /app/data/gpm3/gpseg10
   21 |      10 | p    | 40002 | BI-greenplum-03 | pg_system | /app/data/gp3/gpseg10
   22 |      11 | p    | 40003 | BI-greenplum-03 | pg_system | /app/data/gp4/gpseg11
   34 |      11 | m    | 50003 | BI-greenplum-04 | pg_system | /app/data/gpm4/gpseg11
   27 |      12 | m    | 50000 | BI-greenplum-03 | pg_system | /app/data/gpm1/gpseg12
   23 |      12 | p    | 40000 | BI-greenplum-04 | pg_system | /app/data/gp1/gpseg12
   28 |      13 | m    | 50001 | BI-greenplum-03 | pg_system | /app/data/gpm2/gpseg13
   24 |      13 | p    | 40001 | BI-greenplum-04 | pg_system | /app/data/gp2/gpseg13
   29 |      14 | m    | 50002 | BI-greenplum-03 | pg_system | /app/data/gpm3/gpseg14
   25 |      14 | p    | 40002 | BI-greenplum-04 | pg_system | /app/data/gp3/gpseg14
   26 |      15 | p    | 40003 | BI-greenplum-04 | pg_system | /app/data/gp4/gpseg15
   30 |      15 | m    | 50003 | BI-greenplum-03 | pg_system | /app/data/gpm4/gpseg15
(34 rows)