2025-04-06 Update — From: SLTechnology News & Howtos (shulou.com), Database
This article walks through a set of practical DBA tasks around a small Oracle application: backing up the program, backing up its data, synchronizing data in real time, and fixing a few known problems. The methods are simple, fast, and practical.
1. Program backup
Scenario: if the host running the program fails, the program must be redeployed in a new environment. A program backup is what makes that redeployment quick.
The underlying database my program connects to is an Oracle RAC, so a copy of the program can simply be deployed on the other RAC node. Previously the program's files sat directly under /home/oracle, mixed in with many unrelated files, which was quite messy. So I decided to gather the program into one unified directory that can be packaged as a whole, making it easy to back up and to redeploy quickly after a failure.
Here I put everything under a single directory, /home/oracle/baby, package it with the current version number in the archive name, and finally copy the backup package to NAS for retention.
1.1 Put everything under a unified directory: /home/oracle/baby
[oracle@jystdrac2 baby]$ pwd
/home/oracle/baby
[oracle@jystdrac2 baby]$ ls -lrth
total 76K
-rwxr-xr-x 1 oracle oinstall  36 Dec 22 09:47 d1.sql
-rwxr-xr-x 1 oracle oinstall  71 Dec 22 09:47 i1.sql
-rwxr-xr-x 1 oracle oinstall  59 Dec 22 09:47 u1.sql
-rwxr-xr-x 1 oracle oinstall 199 Dec 22 09:47 v1.sql
-rwxr-xr-x 1 oracle oinstall 396 Dec 22 09:47 v3.sql
-rwxr-xr-x 1 oracle oinstall 302 Dec 22 09:54 baby_delete.sh
-rwxr-xr-x 1 oracle oinstall 296 Dec 22 09:55 baby_insert.sh
-rwxr-xr-x 1 oracle oinstall 335 Dec 22 09:55 baby_insert_diy.sh
-rwxr-xr-x 1 oracle oinstall 545 Dec 22 09:56 baby_help.sh
-rwxr-xr-x 1 oracle oinstall 305 Dec 22 09:57 baby_update.sh
-rwxr-xr-x 1 oracle oinstall 293 Dec 22 09:57 baby_view.sh
-rwxr-xr-x 1 oracle oinstall 252 Dec 22 09:58 baby_view_diy.sh
-rw-r--r-- 1 oracle oinstall 244 Dec 22 13:30 bash_profile
-rw-r--r-- 1 oracle oinstall 273 Dec 26 09:10 backup_exp_t_baby.sh
-rw-r--r-- 1 oracle oinstall 154 Dec 26 09:53 readme
[oracle@jystdrac2 baby]$ cd ..
1.2 Package and back up the program with the current version number
[oracle@jystdrac2 ~]$ tar -zcvf baby_v2.02.tar.gz baby/
baby/
baby/readme
baby/u1.sql
baby/v4.sql
baby/baby_view_diy.sh
baby/d1.sql
baby/v3.sql
baby/baby_update.sh
baby/v2.sql
baby/v_estimate.sql
baby/i1.sql
baby/bash_profile
baby/baby_insert_diy.sh
baby/baby_insert.sh
baby/i2.sql
baby/v1.sql
baby/baby_help.sh
baby/baby_view.sh
baby/baby_delete.sh
baby/backup_exp_t_baby.sh
[oracle@jystdrac2 ~]$ ls -lrth baby_v2.02.tar.gz
-rw-r--r-- 1 501 1000 1.9K Dec 26 11:46 baby_v2.02.tar.gz
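The package-and-copy flow above can be sketched as a small script. This is a hedged illustration, not the article's actual tooling: the demo paths under /tmp stand in for the real /home/oracle/baby directory and the NAS mount.

```shell
#!/bin/sh
# Hedged sketch of the versioned backup flow. SRC_DIR and NAS_DIR are
# stand-in demo paths, not the article's real program directory and NAS.
set -e
VERSION="v2.02"
BASE=/tmp/baby_backup_demo
SRC_DIR="$BASE/baby"
NAS_DIR="$BASE/nas"

mkdir -p "$SRC_DIR" "$NAS_DIR"
echo "demo" > "$SRC_DIR/readme"        # stand-in file so the sketch runs anywhere

# Package the whole directory with the version number in the archive name,
# then copy the package off-host (here: the stand-in NAS directory).
cd "$BASE"
tar -zcf "baby_${VERSION}.tar.gz" baby/
cp "baby_${VERSION}.tar.gz" "$NAS_DIR/"
ls -l "$NAS_DIR/baby_${VERSION}.tar.gz"
```

Bumping VERSION on each release keeps every packaged state retrievable, which is the point of naming the archive by version rather than by date alone.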
1.3 Copy the backup package to NAS for retention
[oracle@jystdrac2 ~]$ cp baby_v2.02.tar.gz /public/backup/

2. Data backup
Scenario: the program itself is now backed up, but after a failure, restoring the program alone is not enough; we also need the business data generated so far. So the business data needs a backup as well.
Regular logical backups with exp/expdp will do. Since the data volume here is very small, I simply use the simpler exp.
For example, use exp to back up the table t_baby at 12:00 every day:
Set up crontab scheduled tasks:
[oracle@jystdrac2 ~]$ crontab -l
0 12 * * * /bin/sh /home/oracle/baby/backup_exp_t_baby.sh
Exp backup script:
[oracle@jystdrac2 ~]$ cat /home/oracle/baby/backup_exp_t_baby.sh
backupdate=`date +%Y%m%d`
export ORACLE_SID=demo2
export ORACLE_BASE=/opt/app/oracle
export ORACLE_HOME=/opt/app/oracle/product/11.2.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/bin
exp test/test tables=t_baby file=/public/backup/t_baby_$backupdate.dmp log=/public/backup/t_baby_$backupdate.log
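Because the script date-stamps each dump, old copies accumulate day after day. A retention step is a natural companion; the sketch below is an assumption on my part (the demo directory and the 7-day window are invented, not from the article).

```shell
#!/bin/sh
# Hedged sketch: date-stamped dumps pile up, so trim anything older than
# 7 days. BACKUP_DIR and the retention window are illustrative assumptions.
BACKUP_DIR=/tmp/exp_backup_demo
mkdir -p "$BACKUP_DIR"

backupdate=$(date +%Y%m%d)                     # same stamp style as backup_exp_t_baby.sh
touch "$BACKUP_DIR/t_baby_${backupdate}.dmp"   # stand-in for a real exp dump

# Delete dump and log files older than 7 days so the backup volume does not fill up
find "$BACKUP_DIR" \( -name 't_baby_*.dmp' -o -name 't_baby_*.log' \) -mtime +7 -exec rm -f {} \;
ls "$BACKUP_DIR"
```

A line like this could be appended to the backup script itself or scheduled as its own cron entry.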
The files produced by the exp backup script look like this:
[oracle@jystdrac2 backup]$ ls -lrth t_baby*
-rw-r--r-- 1 501 1000 626 Dec 26 12:00 t_baby_20191226.log
-rw-r--r-- 1 501 1000 16K Dec 26 12:00 t_baby_20191226.dmp

3. Real-time data synchronization
Scenario: the scheduled logical backup above, on its own, cannot guarantee complete data recovery.
For example, suppose a backup was taken at 12:00 today and a failure at 18:00 destroys the data. The logical backup can only restore to 12:00, so everything written between 12:00 and 18:00 is lost.
What about physical RMAN backups? They have the same problem: log archiving is not real-time. If the failure is unrecoverable and the online redo logs are lost too, RMAN can only recover up to the most recent archived log, so there is still a risk of losing some data.
So what do we do? How can data be synchronized to another environment in real time? Two mainstream solutions come to mind:
1) Database-level real-time synchronization with Data Guard (DG)
2) Table-level synchronization with OGG
DG real-time synchronization is a physical approach; OGG table synchronization is a logical one.
In general, if only one of the two can be chosen, we strongly recommend that customers pick the physical real-time synchronization, because the logical approach runs into far more problems than the physical one.
In my scenario the data volume is so small that either option would work.
As for the part of building DG and OGG environment, I will not expand it in detail here. If you have any questions, please refer to the previous article:
Simulating production and building Standby RAC experimental environment (11.2.0.4 DG)
OGG Learning Note 02 - one-way replication configuration example
4. Solving known problems
After the program for calculating breast-feeding intervals had been in use for some time, a few problems surfaced that needed urgent fixes:
4.1 The system time is not accurate
After the system runs for a few days, the operating system time drifts a few minutes away from the real time. For now this is solved by periodically synchronizing with Alibaba Cloud's NTP server.
-- Use the ntpdate command to synchronize with the Aliyun time server (ntp2.aliyun.com)
[root@jystdrac1 ~]# date
Sun Dec 22 08:48:51 CST 2019
[root@jystdrac1 ~]# ntpdate ntp2.aliyun.com
22 Dec 08:52:31 ntpdate[24481]: step time server 203.107.6.88 offset 206.232030 sec
[root@jystdrac1 ~]# date
Sun Dec 22 08:52:35 CST 2019

Using crontab, synchronize with the Aliyun time server every hour and append the log to /tmp/ntpdate.log:

[root@jystdrac1 ~]# crontab -l
0 * * * * ntpdate ntp2.aliyun.com >> /tmp/ntpdate.log
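The size of the step can be sanity-checked against the two date outputs: the wall clock jumped 3 minutes 44 seconds (224 s), while ntpdate reported an offset of ~206 s; the difference is simply real time that elapsed between the commands. A quick sketch of that arithmetic (GNU date's -d flag is assumed):

```shell
#!/bin/sh
# Sanity-check the clock jump from the two `date` outputs above.
# Timestamps are hard-coded for illustration; GNU date's -d is assumed.
before=$(date -d "2019-12-22 08:48:51" +%s)
after=$(date -d "2019-12-22 08:52:35" +%s)
echo $((after - before)) > /tmp/ntp_jump_demo.txt   # 224 seconds observed jump
cat /tmp/ntp_jump_demo.txt
```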
Of course, you can also run NTP in slew (-x) mode so that RAC stability is not affected by sudden step adjustments.
4.2 Data consistency issues
This one can fairly be called a bug in the program itself.
Phenomenon: as long as the program connects to a single-instance database, or always runs against the same RAC node, there is no problem. But if inserts run alternately on the two RAC nodes, the sequence misbehaves and the calculated results are wrong.
This is the classic sequence problem in a RAC environment:
For example, a record inserted on node 1 gets ID 235, and a record inserted afterwards on node 2 gets ID 192.
[oracle@jystdrac2 ~]$ i
Insert a row using current time:
1 row created.
Commit complete.
View Today's Result:
        ID FEED_TIME      L   LAG(min)   LAG(h)
---------- -------------- - ---------- --------
       192 12-26 18:21    N       5689    94.82
       227 12-26 02:22    N        225     3.75
       228 12-26 04:48    N        146     2.43
       229 12-26 07:31    N        164     2.73
       230 12-26 10:02    N        151     2.51
       231 12-26 11:49    N        107     1.79
       232 12-26 14:10    N        141     2.34
       233 12-26 17:38    N        208     3.47
       234 12-26 18:18    N         41      .68
       235 12-26 18:19    N          0      .01
10 rows selected.
As you can see, the record inserted later on node 2 got a smaller ID, which corrupts the program's interval calculation. Clearly a problem.
The cause is equally obvious: the values instance 1 and instance 2 obtain from sequence S1 are not contiguous. Query it on each instance:
-- Instance 1:
test@DEMO> select s1.nextval from dual;
   NEXTVAL
----------
       239
-- Instance 2:
test@DEMO> select s1.nextval from dual;
   NEXTVAL
----------
       193
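Why NOORDER behaves this way can be sketched with a toy cache model. This is pure illustration, not Oracle internals: each instance pre-allocates its own block of CACHE (20) values from a shared high-water mark and hands out values only from that block, so values issued across instances are not globally ordered. All file names and starting numbers below are invented.

```shell
#!/bin/sh
# Toy model of NOORDER + CACHE 20 (an illustrative sketch, not real Oracle
# internals): each "instance" grabs a private block of 20 values from a
# shared high-water mark, then serves values only from its own block.
DIR=/tmp/seq_noorder_demo
mkdir -p "$DIR"
echo 180 > "$DIR/highwater"        # shared high-water mark (assumed start)

grab_cache() {                     # instance pre-allocates the next 20 values
    hw=$(cat "$DIR/highwater")
    echo $((hw + 20)) > "$DIR/highwater"
    echo $((hw + 1)) > "$1"        # the instance's next value to hand out
}

next_val() {                       # like s1.nextval on that instance
    v=$(cat "$1")
    echo $((v + 1)) > "$1"
    echo "$v"
}

grab_cache "$DIR/inst2"            # instance 2 caches 181..200 first
grab_cache "$DIR/inst1"            # instance 1 caches 201..220 afterwards
next_val "$DIR/inst1"              # prints 201
next_val "$DIR/inst2"              # prints 181: a later insert gets a smaller ID
```

The later call on "instance 2" returns a smaller value than the earlier call on "instance 1", which is exactly the symptom seen in the application's ID column.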
Query the sequence's creation statement:
test@DEMO> select dbms_metadata.get_ddl('SEQUENCE','S1') from dual;

DBMS_METADATA.GET_DDL('SEQUENCE','S1')
--------------------------------------------------------------------------------
CREATE SEQUENCE "TEST"."S1" MINVALUE 1 MAXVALUE 99999999999999999999999999999999999999 INCREMENT BY 1 START WITH 241 CACHE 20 NOORDER NOCYCLE
You can see the sequence defaults to NOORDER. If it is changed to ORDER, then even when nextval is fetched alternately on the two instances, the values are guaranteed to come out in order, and the situation above cannot recur.
So the solution is to rebuild sequence S1 and change it to ORDER.
DROP SEQUENCE S1;
CREATE SEQUENCE S1 MINVALUE 1 MAXVALUE 99999999999999999999999999999999999999 INCREMENT BY 1 START WITH 261 CACHE 20 ORDER NOCYCLE;
Verify again on both instances (select s1.nextval from dual;) to confirm the sequence is now ordered:
-- Instance 1:
test@DEMO> select s1.nextval from dual;
   NEXTVAL
----------
       261
-- Instance 2:
test@DEMO> select s1.nextval from dual;
   NEXTVAL
----------
       262
One caveat: changing a sequence to ORDER can cause serious performance problems under heavy business load, which is probably exactly why sequences created without explicit parameters default to NOORDER.
That covers program backup, data backup, real-time synchronization, and the known fixes. The best way to internalize these techniques is to try them out in practice.