2025-01-18 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report:
A friend told me that a large amount of table data on a production system had been changed by mistake, and asked whether it could be recovered.
Don't panic. For any database (MySQL, PostgreSQL, Oracle, and so on), as long as you have the logs and a backup, you can restore to any point before the failure.
Only the operational details differ between products.
Restoring a backup to the failure point on another machine
Build a PostgreSQL environment on a separate machine:
The compile options are critical; if they do not match production, the restored cluster will not start. It is recommended to keep the data directory separate from the software installation directory. Only the source tar package is needed.
1) Check the basic configuration of the production environment:
Run SHOW block_size; and SHOW wal_segment_size; (SHOW ALL; lists everything) to see the block size and WAL segment size of the production cluster.
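As a sanity check, the mapping from what SHOW reports to the configure flags the new build must use can be sketched as below. This is a minimal illustration; the helper name is mine, and I assume block_size is reported in bytes and wal_segment_size as a string like '16MB'.

```python
# Hypothetical helper: translate production settings (from SHOW block_size;
# and SHOW wal_segment_size;) into the ./configure flags the new build needs,
# so the restored cluster can actually start.
def configure_flags(block_size_bytes: int, wal_segment_size: str) -> list:
    # block_size is reported in bytes (e.g. 8192); --with-blocksize takes kB
    flags = ["--with-blocksize={}".format(block_size_bytes // 1024)]
    # wal_segment_size is reported like '16MB'; --with-wal-segsize takes MB
    mb = int(wal_segment_size.rstrip("MB"))
    flags.append("--with-wal-segsize={}".format(mb))
    return flags

print(configure_flags(8192, "16MB"))
# ['--with-blocksize=8', '--with-wal-segsize=16']
```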
2) Compile and install the PostgreSQL software on the new machine:
yum install -y perl-ExtUtils-Embed readline-devel zlib-devel pam-devel libxml2-devel libxslt-devel openldap-devel python-devel gcc-c++ openssl-devel cmake
./configure --prefix=/opt/postgres --with-pgport=5432 --with-python --with-libxml --with-wal-segsize=16 --with-blocksize=8
make && make install
Install the contrib extensions:
cd contrib
make && make install
3) Restore the database to a point in time:
Stop the database:
# pg_ctl stop -D /opt/postgres/data
Restore from the backup:
# rm -rf data
# tar xvf pgdata.tar
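Before starting the restored cluster, it is worth a quick pre-flight check: postgres refuses to start if PG_VERSION is missing or the data directory permissions are too open. A minimal sketch (the function name is mine):

```python
import os
import stat

def sanity_check_datadir(datadir):
    """Hypothetical pre-flight check on a restored data directory.
    PG_VERSION must exist and the directory must not be group/world
    accessible (mode 0700), or postgres will refuse to start."""
    problems = []
    if not os.path.isfile(os.path.join(datadir, "PG_VERSION")):
        problems.append("missing PG_VERSION (not a data directory?)")
    mode = stat.S_IMODE(os.stat(datadir).st_mode)
    if mode & 0o077:
        problems.append("permissions too open: %s (chmod 0700)" % oct(mode))
    return problems
```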
Use pg_waldump to find the point where the problem occurred, then set up recovery.conf to recover to a point in time just before it.
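Finding the target time usually means scanning pg_waldump -r Transaction output for the last good COMMIT before the bad change. A sketch of that scan is below; the exact line format and the helper name are assumptions based on typical pg_waldump output, not part of the original procedure.

```python
import re

# Approximate pattern for a pg_waldump Transaction COMMIT line, e.g.:
# rmgr: Transaction ... tx: 568, lsn: 0/01632B18, ... desc: COMMIT 2018-12-29 10:25:30.000000 CST
COMMIT_RE = re.compile(
    r"tx:\s*(\d+).*desc: COMMIT (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
)

def last_commit_before(waldump_output, bad_time):
    """Return (xid, commit_time) of the last commit strictly before
    bad_time, as a candidate for recovery_target_time."""
    best = None
    for line in waldump_output.splitlines():
        m = COMMIT_RE.search(line)
        # ISO-style timestamp strings compare chronologically
        if m and m.group(2) < bad_time:
            best = (m.group(1), m.group(2))
    return best
```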
Copy the recovery.conf sample file and edit it to restore to the specified point in time:
# cp $PG_HOME/share/recovery.conf.sample /home/postgres/data
# vi /opt/postgres/data/recovery.conf
-- Add the following lines, specifying the restore command and archive path; %f is the name of the WAL file to fetch and %p is the destination path.
restore_command = 'cp /opt/postgres/archive/%f %p'
recovery_target_time = '2018-12-29 10:24:12'
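For PostgreSQL versions before 12, which use recovery.conf, a fuller file might look like the fragment below. Adding recovery_target_action = 'pause' is my suggestion, not part of the original steps: it holds the server at the target so you can inspect the data before resuming replay or promoting.

```conf
restore_command = 'cp /opt/postgres/archive/%f %p'
recovery_target_time = '2018-12-29 10:24:12'
# pause at the target so the data can be verified before the server
# leaves recovery (resume with SELECT pg_wal_replay_resume();)
recovery_target_action = 'pause'
```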
Once the data is recovered, dump the affected tables and load them back into production.
99) It is recommended to do this job by hand.
In particular, determine the failure point carefully, recover manually, confirm the result from multiple angles, and only then push the data back to production with confidence.