
Greenplum docker installation test environment

2025-04-05 Update From: SLTechnology News&Howtos


1. Install docker, pull the image, and create the containers

yum -y install docker
systemctl start docker
docker pull centos:6

docker run --privileged -dti -p 65000 --name gptest1 centos:6 bash
docker run --privileged -dti --name gptest2 centos:6 bash
docker run --privileged -dti --name gptest3 centos:6 bash
docker run --privileged -dti --name gptest4 centos:6 bash

--privileged gives the containers root-level privileges
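The four `docker run` invocations can be wrapped in a small helper function. The runner parameter is an illustration, not part of the original walkthrough: passing "echo docker" prints the commands instead of executing them, which is handy for checking the flags before touching a real host.

```shell
#!/bin/sh
# Create the four CentOS 6 containers used by the walkthrough.
# $1 is the command used to run docker: pass "docker" on a real host,
# or "echo docker" to dry-run and just print the commands.
create_gp_containers() {
  docker_cmd="$1"
  # gptest1 (the future master) also publishes port 65000
  $docker_cmd run --privileged -dti -p 65000 --name gptest1 centos:6 bash
  for name in gptest2 gptest3 gptest4; do
    $docker_cmd run --privileged -dti --name "$name" centos:6 bash
  done
}

create_gp_containers "echo docker"   # dry-run; use "docker" for real
```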

2. Install the dependency packages, start ssh, and configure time synchronization

sshd is not started by default in a Docker container. To let the nodes reach each other, start sshd in every container and create the host keys.

yum install -y net-tools which openssh-clients openssh-server less zip unzip iproute.x86_64 vim ntp ed

yum -y update # optionally update the system packages

ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key

/usr/sbin/sshd

ntpdate ntp1.aliyun.com

Configure the host name mappings

172.17.0.2 dw-greenplum-1 mdw

172.17.0.3 dw-greenplum-2 sdw1

172.17.0.4 dw-greenplum-3 sdw2

172.17.0.5 dw-greenplum-4 sdw3
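A quick way to push this mapping onto a node is a heredoc append. The target file is parameterised here, defaulting to a scratch file so the snippet can be tried safely outside the containers; on the real nodes it would be /etc/hosts (the .5 address for sdw3 assumes sequentially assigned container IPs).

```shell
#!/bin/sh
# Append the cluster host mappings. On the real containers set
# HOSTS_FILE=/etc/hosts; the scratch-file default is only for trying
# the snippet safely outside the cluster.
HOSTS_FILE="${HOSTS_FILE:-./hosts.greenplum}"
cat >> "$HOSTS_FILE" <<'EOF'
172.17.0.2 dw-greenplum-1 mdw
172.17.0.3 dw-greenplum-2 sdw1
172.17.0.4 dw-greenplum-3 sdw2
172.17.0.5 dw-greenplum-4 sdw3
EOF
```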

At the same time, modify /etc/sysconfig/network on every node so that the hostname stays consistent.

Modify the name in /etc/hostname

[root@mdw /]# cat /etc/hostname
mdw

[root@mdw /]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=mdw

Modify the limit on the number of open files on each node

[root@mdw /]# cat /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072

[root@mdw /]# cat /etc/security/limits.d/90-nproc.conf
* soft nproc 131072
root soft nproc unlimited

Modify kernel parameters

sysctl -w kernel.sem="64000 50 150"

cat /proc/sys/kernel/sem

Turn off the firewall and selinux of all nodes

service iptables stop
chkconfig iptables off
setenforce 0

Download the greenplum installation package

You need to register an account to download, or pull an existing binary package from a company machine.

https://network.pivotal.io/products/pivotal-gpdb

7. Create the greenplum user and group on all nodes

groupadd -g 530 gpadmin
useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin
chown -R gpadmin:gpadmin /home/gpadmin
echo 123456 | passwd gpadmin --stdin

Install greenplum on master

su - gpadmin
unzip greenplum-db-4.3.14.1-rhel5-x86_64.zip
sh greenplum-db-4.3.14.1-rhel5-x86_64.bin

During installation you need to change the default installation directory: enter /home/gpadmin/greenplum-db

To simplify cluster installation, Greenplum provides batch-operation commands that read the list of hosts from configuration files.

Create the host files

[gpadmin@mdw ~]$ cat conf/hostlist
mdw
sdw1
sdw2
sdw3

[gpadmin@mdw ~]$ cat conf/seg_hosts
sdw1
sdw2
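Both host files can be generated in one go with `printf '%s\n'`, which writes one host per line. CONF_DIR is parameterised here as an illustration; in the walkthrough it is /home/gpadmin/conf.

```shell
#!/bin/sh
# Create the two batch-operation host files. In the walkthrough
# CONF_DIR is /home/gpadmin/conf; a local default is used here so the
# snippet can be tried anywhere.
CONF_DIR="${CONF_DIR:-./conf}"
mkdir -p "$CONF_DIR"
printf '%s\n' mdw sdw1 sdw2 sdw3 > "$CONF_DIR/hostlist"   # every host
printf '%s\n' sdw1 sdw2          > "$CONF_DIR/seg_hosts"  # segment hosts
```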

Set the environment variables and establish passwordless ssh between all nodes

source greenplum-db/greenplum_path.sh

[gpadmin@mdw ~]$ gpssh-exkeys -f /home/gpadmin/conf/hostlist # exchange keys

[STEP 1 of 5] create local ID and authorize on local host

... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts

... Send to sdw1

... Send to sdw2

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts

... Finished key exchange with sdw1

... Finished key exchange with sdw2

... Finished key exchange with sdw3

[INFO] completed successfully

Be sure to run the gpssh-exkeys command as gpadmin, because it generates the passwordless ssh login keys in /home/gpadmin/.ssh.

[gpadmin@mdw ~]$ gpssh -f /home/gpadmin/conf/hostlist

Note: command history unsupported on this machine...

=> pwd

[mdw] / home/gpadmin

[sdw2] / home/gpadmin

[sdw1] / home/gpadmin

[sdw3] / home/gpadmin

Distribute the installation package to every node

tar cf greenplum.tar greenplum-db/

Copy the file to each machine

gpscp -f /home/gpadmin/conf/hostlist greenplum.tar =:/home/gpadmin/

Unpack it on every node

tar xf greenplum.tar
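The package-copy-unpack sequence above can be sketched as one function. The runner prefix is an illustration, not part of the original walkthrough: pass "echo" to print the commands, or an empty string to execute them for real (gpscp and gpssh are only available once greenplum_path.sh has been sourced).

```shell
#!/bin/sh
# Package the install tree, copy it to every host, and unpack it there.
# $1 is a command prefix: "echo" dry-runs, "" executes for real.
distribute_greenplum() {
  run="$1"
  $run tar cf greenplum.tar greenplum-db/
  $run gpscp -f /home/gpadmin/conf/hostlist greenplum.tar =:/home/gpadmin/
  $run gpssh -f /home/gpadmin/conf/hostlist -e 'tar xf greenplum.tar'
}

distribute_greenplum echo   # dry-run; call with "" as gpadmin on mdw
```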

With that, the software is installed on all nodes.

Initialization: create the data directories

[gpadmin@mdw conf]$ gpssh -f hostlist
=> mkdir gpdata
[mdw]
[sdw2]
[sdw1]
[sdw3]
=> cd gpdata
[mdw]
[sdw2]
[sdw1]
[sdw3]
=> mkdir gpmaster gpdatap1 gpdatap2 gpdatam1 gpdatam2
[mdw]
[sdw2]
[sdw1]
[sdw3]
=> exit

Configure environment variables on each node

cat .bash_profile
source /home/gpadmin/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/home/gpadmin/gpdata/gpmaster/gpseg-1
export PGPORT=6500
export PGDATABASE=postgres

source .bash_profile

Create the initialization configuration file

[gpadmin@mdw conf]$ cat gpinitsystem_config
ARRAY_NAME="Greenplum"
MACHINE_LIST_FILE=/home/gpadmin/conf/seg_hosts
# Name prefix for the segments
SEG_PREFIX=gpseg
# Starting port number for the primary segments
PORT_BASE=33000
# Data directories for the primary segments
declare -a DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatap1 /home/gpadmin/gpdata/gpdatap2)
# Hostname of the machine the master runs on
MASTER_HOSTNAME=mdw
# Data directory for the master
MASTER_DIRECTORY=/home/gpadmin/gpdata/gpmaster
# Master port
MASTER_PORT=6500
# Shell used for remote execution
TRUSTED_SHELL=ssh
# Number of WAL segment files between checkpoints
CHECK_POINT_SEGMENTS=256
# Starting port number for the mirror segments
MIRROR_PORT_BASE=43000
# Starting port number for primary segment replication
REPLICATION_PORT_BASE=34000
# Starting port number for mirror segment replication
MIRROR_REPLICATION_PORT_BASE=44000
# Data directories for the mirror segments
declare -a MIRROR_DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatam1 /home/gpadmin/gpdata/gpdatam2)

Initialize the database

gpinitsystem -c gpinitsystem_config -h hostlist -s sdw3 -S

If no error is reported, initialization is complete. If an error is reported, adjust the configuration according to the log messages and reinitialize.

[gpadmin@mdw conf]$ psql
psql (8.2.15)
Type "help" for help.

postgres=# select version();
version
-----------------------------------------------------------------
PostgreSQL 8.2.15 (Greenplum Database 4.3.10.0 build commit: f413ff3b006655f14b6b9aa217495ec94da5c96c) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Oct 21 2016 19:36:26
(1 row)

Errors and log output

Some of the nodes came up and some did not; the host did not have enough resources.

2019-10-28 21:25:20.556 CST,p6788,th-1469757664,0,seg-1,"FATAL","XX000","could not create semaphores: No space left on device (pg_sema.c:132)","Failed system call was semget(43001004, 17, 03600).","This error does not mean that you have run out of disk space. It occurs when either the system limit for the maximum number of semaphore sets (SEMMNI), or the system wide maximum number of semaphores (SEMMNS), would be exceeded. You need to raise the respective kernel parameter. Alternatively, reduce PostgreSQL's consumption of semaphores by reducing its max_connections parameter (currently 750). The PostgreSQL documentation contains more information about configuring your system for PostgreSQL.","InternalIpcSemaphoreCreate","pg_sema.c",132,0xb0a80e postgres errstart + 0x4de

Adjust the configuration file

vim /home/gpadmin/gpdata/gpdatam1/gpseg6/postgresql.conf

Each machine has 4 segment instances whose configuration must be modified.

max_connections = 200 # original value was 750
shared_buffers = 500MB # original value was 275MB
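Editing every segment's postgresql.conf by hand is error-prone. A small sed helper (an illustration, not from the original walkthrough) applies both changes in one call and can be looped over every segment directory on each host.

```shell
#!/bin/sh
# Rewrite the two offending parameters in one segment's postgresql.conf.
tune_segment_conf() {
  sed -i \
    -e 's/^max_connections = .*/max_connections = 200/' \
    -e 's/^shared_buffers = .*/shared_buffers = 500MB/' \
    "$1"
}

# On each host, apply it to every primary and mirror segment, e.g.:
# for conf in /home/gpadmin/gpdata/gpdata*/gpseg*/postgresql.conf; do
#   tune_segment_conf "$conf"
# done
```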

gpstart -a # start the cluster
gpstop -u # reload the configuration files without a full restart

# Common operation commands

gpstate: view the database cluster status, for example: gpstate -a
gpstart: start the database cluster, for example: gpstart -a
gpstop: shut down the database cluster, for example: gpstop -a -M fast
gpssh: execute a shell command remotely, for example: gpssh -f hosts -e 'date'

Deployment reference blog posts

https://www.cnblogs.com/dap570/archive/2015/03/21/greenplum_4node_install.html

https://my.oschina.net/u/876354/blog/1606419
