
Analysis of OLR and socket files in Oracle cluster


This article mainly introduces OLR and socket files in an Oracle cluster. Many people have questions about these files in day-to-day operation, so the editor has consulted various materials and put together simple, practical procedures, hoping they help resolve your doubts about OLR and socket files in Oracle clusters. Please follow along and study.

| | OLR |

The OLR file records the resource definitions that the ohasd daemon needs in order to start the cluster initialization resources. When the cluster starts, ohasd obtains the location of the OLR file from the /etc/oracle/olr.loc file.

[root@node1 ~]# ls -ltr /etc/oracle
total 2248
drwxr-xr-x. 3 root oinstall    4096 Nov 21 01:38 scls_scr
drwxrwxr-x. 5 root oinstall    4096 Nov 21 01:38 oprocd
-rws--x---. 1 root oinstall 2279833 Nov 21 01:38 setasmgid
-rw-r--r--. 1 root root           0 Nov 21 01:38 olr.loc.orig
-rw-r--r--. 1 root oinstall      81 Nov 21 01:38 olr.loc
-rw-r--r--. 1 root root           0 Nov 21 01:38 ocr.loc.orig
-rw-r--r--. 1 root oinstall      40 Nov 21 01:38 ocr.loc
drwxrwx---. 2 root oinstall    4096 Nov 21 01:44 lastgasp
[root@node1 ~]# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/11.2.0/grid/cdata/node1.olr
crs_home=/u01/app/11.2.0/grid
[root@node1 ~]# ls -ltr /u01/app/11.2.0/grid/cdata/node1.olr
-rw-------. 1 root oinstall 272756736 Jan  8 21:40 /u01/app/11.2.0/grid/cdata/node1.olr
[root@node1 ~]#

Each node has its own OLR file, located by default at $GRID_HOME/cdata/<hostname>.olr. As mentioned above, the path of the OLR file is recorded in the /etc/oracle/olr.loc file, and it can also be obtained with the ocrcheck command. The ocrcheck command also verifies the logical integrity of the OCR/OLR:

[root@node1 ~]# ocrcheck -local
Status of Oracle Local Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2676
         Available space (kbytes) :     259444
         ID                       :  810831447
         Device/File Name         : /u01/app/11.2.0/grid/cdata/node1.olr
                                    Device/File integrity check succeeded

         Local registry integrity check succeeded

         Logical corruption check succeeded

[root@node1 ~]# ocrcheck -local -config
Oracle Local Registry configuration is :
         Device/File Name         : /u01/app/11.2.0/grid/cdata/node1.olr
[root@node1 ~]#

ocrcheck performs this verification through libocr11.so, which is generated at installation time; the OCR/OLR is checked according to its contents.

| | dump OLR file |

The OLR file is stored in binary format. We can use the ocrdump tool provided by Oracle to dump the OLR into a text file, which makes it easier to examine its contents.

[root@node1 ~]# ocrdump -local
[root@node1 ~]# ls -ltr
total 284
drwxr-xr-x. 2 root root   4096 Jan 10  2018 tmp
drwxr-xr-x. 2 root root   4096 Jan 10  2018 scripts
...
-rw-r--r--. 1 root root  10257 Nov 20 09:20 install.log.syslog
-rw-r--r--. 1 root root  52196 Nov 20 09:23 install.log
-rw-------. 1 root root   1717 Nov 20 09:23 anaconda-ks.cfg
-rw-------. 1 root root 179399 Jan  8 22:05 OCRDUMPFILE
[root@node1 ~]#

[grid@rac1 tmp]$ cat OCRDUMPFILE
2018 03:50:13
/u01/app/11.2.0/grid/bin/ocrdump.bin -local

[SYSTEM]
UNDEF :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}

[SYSTEM.ORA_CRS_HOME]
ORATEXT : /u01/app/11.2.0/grid
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}
## GI_HOME information

[SYSTEM.WALLET]
UNDEF :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_CREATE_SUB_KEY, OTHER_PERMISSION : PROCR_CREATE_SUB_KEY, USER_NAME : root, GROUP_NAME : root}
...

[SYSTEM.version.activeversion]
ORATEXT : 11.2.0.4.0
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}
## cluster version information

[SYSTEM.GPnP]
UNDEF :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_NONE, OTHER_PERMISSION : PROCR_NONE, USER_NAME : grid, GROUP_NAME : oinstall}
## GPnP definition for the cluster initialization resources

[SYSTEM.GPnP.profiles]
BYTESTREAM (16) :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_NONE, OTHER_PERMISSION : PROCR_NONE, USER_NAME : grid, GROUP_NAME : oinstall}

[SYSTEM.GPnP.profiles.peer]
UNDEF :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_NONE, OTHER_PERMISSION : PROCR_NONE, USER_NAME : grid, GROUP_NAME : oinstall}
...

[SYSTEM.network]
UNDEF :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}

[SYSTEM.network.haip]
UNDEF :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}

[SYSTEM.network.haip.group]
UNDEF :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}

[SYSTEM.network.haip.group.cluster_interconnect]
UNDEF :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}

[SYSTEM.network.haip.group.cluster_interconnect.interface]
UNDEF :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : grid, GROUP_NAME : oinstall}
## HAIP information for the cluster initialization resources

[SYSTEM.OCR]
UNDEF :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}
## OCR information

[SYSTEM.OCR.BACKUP]
UNDEF :
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}
...

Both the OLR and the OCR store their data in a tree structure. Check the dumped file for the details; they will not be interpreted entry by entry here.
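If you only need one branch of the tree, ocrdump can also print a single key to standard output instead of dumping the whole registry. A minimal sketch, assuming the SYSTEM.version.activeversion key shown in the dump above (run as root; substitute any key name that exists in your own dump):

[root@node1 ~]# ocrdump -local -stdout -keyname SYSTEM.version.activeversion
[root@node1 ~]# grep -A 2 "SYSTEM.version.activeversion" OCRDUMPFILE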

| | initialization resource layer cannot be started due to loss of OLR file |

When the OLR file is lost, an obvious error message appears in the alert<hostname>.log when the cluster starts, as shown below:

alertnode1.log:

2018-03-26 06:15:17.579: [ohasd(5219)]CRS-0704:Oracle High Availability Service aborted due to Oracle Local Registry error [PROCL-33: Oracle Local Registry is not configured Storage layer error [Error opening olr.loc file. No such file or directory] [2]]. Details at (:OHAS00106:) in /u01/app/11.2.0/grid/log/node1/ohasd/ohasd.log.
2018-03-26 06:15:17.733: [ohasd(5230)]CRS-0704:Oracle High Availability Service aborted due to Oracle Local Registry error [PROCL-33: Oracle Local Registry is not configured Storage layer error [Error opening olr.loc file. No such file or directory] [2]]. Details at (:OHAS00106:) in /u01/app/11.2.0/grid/log/node1/ohasd/ohasd.log.
2018-03-26 06:15:17.886: [ohasd(5241)]CRS-0704:Oracle High Availability Service aborted due to Oracle Local Registry error [PROCL-33: Oracle Local Registry is not configured Storage layer error [Error opening olr.loc file. No such file or directory] [2]]. Details at (:OHAS00106:) in /u01/app/11.2.0/grid/log/node1/ohasd/ohasd.log.
[client(5256)]CRS-10001:CRS-10132: No msg for has:crs-10132 [10][60]

The log states directly that the olr.loc file cannot be opened (Error opening olr.loc file. No such file or directory).

[root@node1 ~]# ps -ef | grep -E 'ohasd|agent|gpnp|gipc|mdns' | grep -v grep
root      1332     1  0 20:53 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
[root@node1 ~]#

Among the background processes, only the init.ohasd script is running; none of the resources in the initialization resource layer have been started. For a lost OLR file, we only need to restore it from a backed-up OLR file.
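For comparison, on a healthy node the state of the initialization resource layer can be listed with crsctl. A minimal sketch (the exact resource list, such as ora.cssd, ora.ctssd, ora.crsd and ora.evmd, varies with version and configuration):

[root@node1 ~]# crsctl stat res -t -init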

| | backup and recovery of OLR files |

The OLR file is backed up automatically after GI software installation or a GI upgrade; unlike the OCR, it is not backed up automatically on a regular basis. If resources at the initialization level change, it is recommended to back up the OLR file manually.

1. Backing up OLR manually

ocrconfig -local -manualbackup

2. View the backups of the OLR file

ocrconfig -local -showbackup
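As a quick usage sketch of the two commands above, run them as root; the backup location and file name below are illustrative and depend on your GRID_HOME and hostname:

[root@node1 ~]# ocrconfig -local -manualbackup
[root@node1 ~]# ocrconfig -local -showbackup
node1     2018/02/21 04:57:00     /u01/app/11.2.0/grid/cdata/node1/backup_20180221_045700.olr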

3. Restore the OLR file

ocrconfig -local -restore <olr_backup_file>

Make sure that ohasd.bin is not running when you restore; if ohasd.bin is still running, stop GI with crsctl stop crs.
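Putting the restore procedure together, a minimal sketch looks like the following (run as root; the backup file name is illustrative and must be one listed by ocrconfig -local -showbackup):

[root@node1 ~]# crsctl stop crs -f        <-- only if ohasd.bin is still running
[root@node1 ~]# ocrconfig -local -restore /u01/app/11.2.0/grid/cdata/node1/backup_20180221_045700.olr
[root@node1 ~]# ocrcheck -local           <-- verify the restored OLR
[root@node1 ~]# crsctl start crs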

If a PROTL-16: Internal Error occurs while restoring the OLR, the restore has failed. This may be because the olr.loc file is missing: during the restore, olr.loc is read and the OLR file is written back to the location it specifies.

[root@rac1 oracle]# ocrconfig -local -restore /u01/app/11.2.0/grid/cdata/rac1/backup_20180221_045700.olr
PROTL-16: Internal Error
[root@rac1 oracle]#

If it still fails, try creating a dummy OLR file with the correct ownership and permissions, and then retry the restore command:

# cd <GRID_HOME>/cdata
# touch <hostname>.olr
# chmod 600 <hostname>.olr
# chown <grid_owner>:<oinstall_group> <hostname>.olr

4. Verify the integrity of the OLR file

The integrity of the OLR file can also be verified with the CVU utility (cluvfy) provided by Oracle, but this verification does not check the logical integrity of the OLR contents. If you also need to verify logical integrity, use ocrcheck -local.

[grid@node1 ~]$ cluvfy comp olr

Verifying OLR integrity

Checking OLR integrity...
Checking OLR config file...
OLR config file check successful
Checking OLR file attributes...
OLR file check successful
WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed
Verification of OLR integrity was successful.
[grid@node1 ~]$

5. Automatic backup of the OLR file

Starting with version 12.2.0.1, Grid Infrastructure also supports automatic backup of the OLR file. The feature was delivered as the fix for BUG 24799186 (now superseded by BUG 26493466); it is included in 18.1, and GI RU 12.2.0.1.180116 also includes OLR automatic backup.
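To check whether your environment is at a level that includes this fix, and whether any local registry backups already exist, a minimal sketch using standard clusterware utilities:

[grid@node1 ~]$ crsctl query crs activeversion
[root@node1 ~]# ocrconfig -local -showbackup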

| | socket file |

A socket file is an endpoint for two-way communication between processes; it is part of the inter-process communication contract. When an Oracle cluster starts, it first reads the OLR file to bring up the initialization resource layer and then gradually starts the rest of the cluster; during this process it creates the socket files needed by the cluster processes in the /var/tmp/.oracle directory.
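The socket files can be seen on a running node. A minimal sketch (the exact file names depend on the hostname and on which processes are up; the leading 's' in the mode column of ls marks a socket):

[root@node1 ~]# ls -l /var/tmp/.oracle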

Socket files are indispensable while the cluster is running. Do not delete them while the cluster is up; if they are lost, unpredictable problems can follow.

In the following test, all files under /var/tmp/.oracle were deleted manually while the cluster was running. Checking the cluster status with crsctl then returns CRS-4535, CRS-4000 and CRS-4639. The first impression is that the cluster is down, but in fact both the cluster and the database are still running normally.

[root@node1 ~]# crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
[root@node1 ~]# crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services
[root@node1 ~]# ps -ef | grep -E 'ohasd|agent|mdns|gpnp|gipc|pmon' | grep -v grep
root      1332     1  0 Jan20 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      3829     1  0 Jan20 ?        00:01:19 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid      3951     1  0 Jan20 ?        00:01:10 /u01/app/11.2.0/grid/bin/oraagent.bin
grid      3962     1  0 Jan20 ?        00:00:00 /u01/app/11.2.0/grid/bin/mdnsd.bin
grid      3973     1  0 Jan20 ?        00:00:11 /u01/app/11.2.0/grid/bin/gpnpd.bin
grid      3984     1  0 Jan20 ?        00:01:43 /u01/app/11.2.0/grid/bin/gipcd.bin
root      3986     1  0 Jan20 ?        00:02:18 /u01/app/11.2.0/grid/bin/orarootagent.bin
root      4030     1  0 Jan20 ?        00:00:16 /u01/app/11.2.0/grid/bin/cssdagent
grid      4390     1  0 Jan20 ?        00:00:05 asm_pmon_+ASM1
grid      4559     1  0 Jan20 ?        00:02:03 /u01/app/11.2.0/grid/bin/oraagent.bin
root      4567     1  0 Jan20 ?        00:02:17 /u01/app/11.2.0/grid/bin/orarootagent.bin
oracle    4769     1  0 Jan20 ?        00:01:44 /u01/app/11.2.0/grid/bin/oraagent.bin
oracle    4832     1  0 Jan20 ?        00:00:07 ora_pmon_oraapp1
[root@node1 ~]#

For abnormal cluster behavior and other problems caused by the loss of socket files, the simplest fix is to restart the cluster; the required socket files are recreated when the cluster starts.
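A minimal sketch of that restart on the affected node (run as root; the -f option may be needed because crsctl cannot contact the stack through the deleted sockets):

[root@node1 ~]# crsctl stop crs -f
[root@node1 ~]# crsctl start crs
[root@node1 ~]# crsctl check crs
[root@node1 ~]# ls /var/tmp/.oracle       <-- the socket files should have been recreated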

At this point, the study of OLR and socket files in an Oracle cluster is complete. Hopefully it has resolved your doubts; combining theory with practice is the best way to learn, so go and try it. For more related knowledge, stay tuned for more practical articles.
