Today I will share with you a detailed tutorial on installing PostgreSQL on CentOS. It is very practical; if you are not sure how to install it, read on.
Installation summary
Installation environment: CentOS 7.4, x86_64, installing from source. The installation process itself is unremarkable; the main work is configuring the server parameters. Since this is a server, we should be rigorous, and doing the steps by hand is still quite fast.
Download the source: wget https://ftp.postgresql.org/pub/source/v11.4/postgresql-11.4.tar.gz
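It is worth verifying the tarball before building. A minimal sketch, assuming the mirror publishes a .sha256 companion file next to the tarball:

# verify the download (assumes a .sha256 file exists on the mirror)
wget https://ftp.postgresql.org/pub/source/v11.4/postgresql-11.4.tar.gz.sha256
sha256sum -c postgresql-11.4.tar.gz.sha256   # should print: postgresql-11.4.tar.gz: OK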
1 install the software packages
yum install -y net-tools
yum install -y sysstat
yum install -y iotop libXp redhat-lsb gcc gdb
yum install -y xorg-x11-xauth
yum install -y vim lrzsz tree wget gcc gcc-c++ readline-devel hwloc smartmontools
yum install -y readline readline-devel openssl openssl-devel zlib zlib-devel numactl
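A quick check that the key build dependencies actually landed. Note that the ./configure flags used later (--with-python, --with-ossp-uuid, --with-libxml) also need their -devel packages, which step 1 does not install. A sketch, with CentOS 7 package names as assumptions:

# report any missing build dependency (package names are assumptions for CentOS 7)
for p in gcc gcc-c++ readline-devel zlib-devel openssl-devel \
         python-devel uuid-devel libxml2-devel; do
    rpm -q "$p" >/dev/null 2>&1 || echo "missing: $p"
done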
2 enable large pages:
DB:
Edit /etc/default/grub and append the following to the kernel command line (GRUB_CMDLINE_LINUX):
net.ifnames=0 biosdevname=0 default_hugepagesz=2M hugepagesz=2M hugepages=81920 transparent_hugepage=never
Then regenerate the grub configuration so it takes effect:
grub2-mkconfig -o /boot/grub2/grub.cfg
To modify the number of huge pages without restarting:
sysctl -w vm.nr_hugepages=81920
Review:
cat /proc/cmdline
cat /sys/kernel/mm/transparent_hugepage/enabled
[root@kbj-db-1 ~]# grep -i hugepage /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:   81920
HugePages_Free:    44703
HugePages_Rsvd:    43036
HugePages_Surp:        0
Hugepagesize:       2048 kB
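To see at a glance how many of the reserved huge pages are actually in use, a one-line sketch over /proc/meminfo:

# print huge-page usage computed from HugePages_Total and HugePages_Free
awk '/HugePages_Total/ {t=$2} /HugePages_Free/ {f=$2} END {printf "huge pages in use: %d of %d\n", t-f, t}' /proc/meminfo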
3 change the kernel parameters (/etc/sysctl.conf)
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.panic_on_oops = 1
net.core.somaxconn = 8192
net.ipv4.tcp_keepalive_time = 600          # default 7200
net.ipv4.ip_local_port_range = 10000 65000 # default 32768 61000
net.ipv4.tcp_max_syn_backlog = 8192        # default 1024
net.ipv4.tcp_max_tw_buckets = 5000         # default 65535
#vm.nr_hugepages = 81920                   # for a DB with 81920 x 2M huge pages enabled
kernel.sem = 4096 128000 64 512
vm.swappiness = 10

To size vm.nr_hugepages, PostgreSQL officially provides a script:
#!/bin/bash
pid=`head -1 $PGDATA/postmaster.pid`
echo "Pid:            $pid"
peak=`grep ^VmPeak /proc/$pid/status | awk '{ print $2 }'`
echo "VmPeak:            $peak kB"
hps=`grep ^Hugepagesize /proc/meminfo | awk '{ print $2 }'`
echo "Hugepagesize:   $hps kB"
hp=$((peak/hps))
echo Set Huge Pages: $hp

4 change user limits
vi /etc/security/limits.conf
postgres soft nproc 16384
postgres hard nproc 16384
postgres soft nofile 65536
postgres hard nofile 65536
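After editing the two files, load the kernel parameters and check that the limits apply to a fresh postgres login (a small sketch):

sysctl -p                          # apply /etc/sysctl.conf
su - postgres -c 'ulimit -u -n'    # expect 16384 (processes) and 65536 (open files)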
5 modify environment variables (HISTORY)
vi /etc/profile
export LANG=en_US.UTF8
export HISTTIMEFORMAT="%F %T "
export HISTSIZE=12000
ulimit -SHn 65536
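To confirm the settings after reloading the profile (a quick sketch; the timestamp format comes from HISTTIMEFORMAT above):

source /etc/profile
history | tail -3    # each entry now starts with a date and time, e.g. 2019-07-01 10:00:00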
6 change the database server disk I/O scheduler
The disk I/O scheduling algorithm can be adjusted to the actual situation; cfq or noop is recommended. Writing the scheduler via echo, as below, takes effect immediately but does not survive a reboot.
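One way to persist the choice across reboots is a udev rule. A minimal sketch, where the rule filename and the sdb device name are assumptions carried over from the example below:

cat > /etc/udev/rules.d/60-io-scheduler.rules <<'EOF'
# set the noop scheduler for sdb at boot (device name is an assumption)
ACTION=="add|change", KERNEL=="sdb", ATTR{queue/scheduler}="noop"
EOF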
echo noop > /sys/block/sdb/queue/scheduler
cat /sys/block/sdb/queue/scheduler
[noop] deadline cfq
A quick throughput check:
[root@kbj-db-1 ssd]# time dd if=/dev/zero of=/ssd/test.dmp bs=8192 count=12800000
^C
4873412+0 records in
4873412+0 records out
39922991104 bytes (40 GB) copied, 35.9298 s, 1.1 GB/s

7 configure users and directories
groupadd -g 106 ssl-cert
groupadd -g 107 postgres
useradd -g postgres -G ssl-cert -u 104 postgres
chown -R postgres:postgres /ssd
su - postgres
[postgres@kbj-db-1 ssd]$ mkdir /ssd/database/
[postgres@kbj-db-1 ssd]$ mkdir /ssd/database/pg114data
[postgres@kbj-db-1 ssd]$ mkdir /ssd/database/114arch
[postgres@kbj-db-1 ssd]$ mkdir /ssd/database/pg114home

8 set postgresql environment variables
export PGDATA=/ssd/database/pg114data
export PGARCH=/ssd/database/114arch
export PGHOME=/ssd/database/pg114home
export LD_LIBRARY_PATH=/ssd/database/pg114home/lib
export PATH=$PGHOME/bin:$PATH
PATH=$PATH:$HOME/.local/bin:$HOME/bin
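These exports only last for the current shell. To make them permanent for the postgres user, they can be appended to its profile; a small sketch, assuming the defaults from step 8:

# persist the step-8 environment for the postgres user
cat >> ~postgres/.bash_profile <<'EOF'
export PGDATA=/ssd/database/pg114data
export PGARCH=/ssd/database/114arch
export PGHOME=/ssd/database/pg114home
export LD_LIBRARY_PATH=$PGHOME/lib
export PATH=$PGHOME/bin:$PATH
EOF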
9 installation
The ./configure options are chosen according to the actual situation.
tar -xzvf postgresql-11.4.tar.gz
cd postgresql-11.4/
./configure --prefix=/ssd/database/pg114home --with-python --with-ossp-uuid --with-libxml --with-openssl --enable-dtrace --enable-debug
make
make install
cd /ssd/postgresql-11.4/contrib/
make
make install
# the contrib tree holds the extension modules, e.g. /ssd/postgresql-11.4/contrib/file_fdw
cd $PGHOME/bin/
./initdb -D $PGDATA
# connect to the database
psql -p5432 -Upostgres -d postgres
# install plug-ins
create extension pg_buffercache;
create extension pg_stat_statements;
CREATE EXTENSION file_fdw;
create extension pgrowlocks;
CREATE SERVER file_fdw_server FOREIGN DATA WRAPPER file_fdw;
select current_database(), * from pg_extension;
select * from pg_available_extensions where name like '%uuid%'; -- check an extension
-- check which extensions are available
select * from pg_available_extensions where name in ('fuzzystrmatch','pg_visibility','tablefunc','amcheck','intarray','tsm_system_time','pgrowlocks','tcn','dict_int','unaccent','btree_gin','dict_xsyn','intagg','insert_username','dblink','lo','uuid-ossp','adminpack','bloom','postgres_fdw','pageinspect','pg_freespacemap','pg_prewarm','pgcrypto','pg_buffercache','file_fdw','btree_gist','xml2','citext','pg_stat_statements','refint','pgstattuple','timetravel','hstore','moddatetime','isn','cube','autoinc','pg_trgm','ltree','plpgsql','seg','tsm_system_rows','earthdistance');

Explanation of the build-time size parameters:
--with-wal-segsize=SEGSIZE sets the size of the WAL segments, in MB. This is the size of each individual file in the WAL log. Adjusting it can be useful for controlling the granularity of shipped WAL logs. The default is 16 MB. The value must be a power of 2 between 1 and 1024 (MB). Note that changing this value requires an initdb.
--with-segsize=SEGSIZE sets the segment size, in GB. Large tables are broken into multiple operating system files, each of the segment size; this avoids problems with operating system limits on file size. The default segment size (1 GB) is safe on all supported platforms. If your operating system has "largefile" support (as most do today), you can use a larger segment size, which helps reduce the number of file descriptors consumed by very large tables. But be careful not to choose a value larger than your platform and file systems support; other tools you may want to use, such as tar, can also limit the usable file size. If not absolutely necessary, we recommend a power of 2. Note that changing this value requires an initdb.
--with-blocksize=BLOCKSIZE sets the block size, in kB. This is the unit of storage and I/O within tables. The default (8 kB) is suitable for most cases, but other values may be useful in special situations. The value must be a power of 2 between 1 and 32 (kB). Note that changing this value requires an initdb.
--with-wal-blocksize=BLOCKSIZE sets the WAL block size, in kB. This is the unit of storage and I/O within the WAL log. The default (8 kB) is suitable for most cases, but other values may be useful in special situations. The value must be a power of 2 between 1 and 64 (kB). Note that changing this value requires an initdb.
--with-python builds the PL/Python server-side procedural language.
Full reference: http://www.postgres.cn/v2/document
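Since each of these sizes is baked in at build time and requires a fresh initdb, it is worth verifying what a build actually uses. A hedged sketch with a hypothetical 64 MB WAL segment build and a throwaway data directory:

# rebuild with 64 MB WAL segments (run in a clean source tree), then verify
./configure --prefix=/ssd/database/pg114home --with-wal-segsize=64
make && make install
$PGHOME/bin/initdb -D /tmp/pgtest_walseg64       # throwaway data directory (assumption)
$PGHOME/bin/pg_controldata /tmp/pgtest_walseg64 | grep 'Bytes per WAL segment'
# expected: Bytes per WAL segment:   67108864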
10 change the log file default configuration
1 enable DDL logging:
log_statement = ddl
2 keep 91 days of logs, with a log_line_prefix that EFK can parse:
log_line_prefix = '%m %p %u %d %r %a '
log_rotation_age = 91d
log_rotation_size = 20MB
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
3 log long-running SQL and enable WAL archiving:
auto_explain.log_min_duration = 10000
archive_mode = on
archive_command = 'cp %p /home/postgres/arch/%f'
wal_level = replica
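Note that auto_explain (and full pg_stat_statements statistics) only works when the module is preloaded, which the steps above do not show. A sketch to preload, restart, and confirm a few of the settings, assuming the paths and port from the earlier steps:

# preload the modules (requires a restart), then verify
echo "shared_preload_libraries = 'pg_stat_statements,auto_explain'" >> $PGDATA/postgresql.conf
pg_ctl -D $PGDATA restart
psql -p5432 -Upostgres -d postgres -c 'show log_statement;'
psql -p5432 -Upostgres -d postgres -c 'show archive_mode;'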
These are the details of installing PostgreSQL on CentOS. Hopefully you have gained something from reading it.