Manage OCR and VoteDisk

2025-01-18 Update From: SLTechnology News&Howtos


Oracle Clusterware consists of two key components: the Voting Disk and the OCR. The Voting Disk records node membership information, such as which nodes are members of the RAC database; it is also updated when nodes are added or removed. The Voting Disk must reside on shared storage, usually a raw device. To protect the Voting Disk, Oracle recommends configuring multiple copies, in an odd number such as 1, 3, or 5; each Voting Disk is about 20 MB in size.

OCR (Oracle Cluster Registry) records the configuration information of cluster resources, such as the database, ASM, instances, listeners, VIPs, and other CRS resources. The information managed by the CRS process comes from the OCR. OCR stores this configuration as a series of key-value pairs organized in a directory tree. Its size is about 100 MB.

Changes to the Voting Disk must be made as root.

Add a Voting Disk member: crsctl add css votedisk /dev/raw/raw3

Delete a Voting Disk member: crsctl delete css votedisk /dev/raw/raw3

OCR devices can be added, deleted, or replaced by the root user with ocrconfig -replace:

ocrconfig -replace ocr /dev/raw/raw1

Note: in 10g, adding and deleting a Voting Disk must be done with the cluster stack shut down; with the stack up, crsctl refuses the operation unless -force is used (as the transcripts below show).

The information stored in Voting Disk and OCR is critical, and if they are lost or damaged, Clusterware will not start and the entire RAC will not start. Therefore, a complete backup of Voting Disk and OCR is required.

The backup of Voting Disk can be done through the dd command.

View location:

# crsctl query css votedisk

Backup operation:

# dd if=/dev/raw/raw2 of=/home/oracle/voting_disk.bak

Restore operation:

# dd if=/home/oracle/voting_disk.bak of=/dev/raw/raw2
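The dd backup/restore pattern above can be exercised on any ordinary file. This sketch substitutes a scratch file under /tmp for the raw device /dev/raw/raw2; the paths and the 1 MB size are assumptions for illustration only.

```shell
# Simulate the votedisk dd backup/restore using a scratch file
# in place of the real raw device (hypothetical paths).
VDISK=/tmp/fake_votedisk
BACKUP=/tmp/voting_disk.bak

# Create a 1 MB stand-in for the voting disk.
dd if=/dev/zero of="$VDISK" bs=1024 count=1024 2>/dev/null
echo "membership-data" | dd of="$VDISK" conv=notrunc 2>/dev/null

# Backup: a byte-for-byte copy, exactly as with the real device.
dd if="$VDISK" of="$BACKUP" 2>/dev/null

# Simulate loss of the votedisk, then restore from the backup.
rm -f "$VDISK"
dd if="$BACKUP" of="$VDISK" 2>/dev/null

# Verify the restored copy matches the backup.
cmp "$VDISK" "$BACKUP" && echo "restore verified"
```

On a real raw device you would keep the device path on the `of=` side and skip the scratch-file setup; the copy-and-compare steps are the same.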

By default, RAC automatically backs up the OCR every 4 hours on one of the nodes, keeping the last 3 four-hourly backups, a backup from each of the last 2 days, and a backup from each of the last 2 weeks. These backups can be listed with ocrconfig -showbackup. The default backup path is $CRS_HOME/cdata/crs; it can be changed with ocrconfig -backuploc, but the interval between automatic backups cannot be modified. You can also export the OCR contents with ocrconfig -export and import them back with ocrconfig -import.

OCR can be restored with the ocrconfig -restore command:

./ocrconfig -restore /u01/oracle/product/10g/crs/cdata/crs/backup00.ocr

Add Voting Disk:

[root@rhel1 bin]# ./crsctl add css votedisk /u01/ocfs2fs/vdisk2

Cluster is not in a ready state for online disk addition

[root@rhel1 bin]# ./crsctl add css votedisk /u01/ocfs2fs/vdisk2 -force

Now formatting voting disk: /u01/ocfs2fs/vdisk2

Successful addition of votedisk /u01/ocfs2fs/vdisk2.

[root@rhel1 bin]# chown oracle:oinstall /u01/ocfs2fs/vdisk2

[root@rhel1 bin]# chmod 775 /u01/ocfs2fs/vdisk2

[root@rhel1 bin]# /etc/init.d/init.crs start

[root@rhel1 bin]# ./crsctl query css votedisk

0. 0 /u01/ocfs2fs/vdisk

1. 0 /u01/ocfs2fs/vdisk2

Note: there can be multiple votedisk members. Adding a voting disk must be done with all cluster services shut down. The number of votedisks should be odd, because CRS works normally only while more than half of them remain available. For example, with 4 votedisks the cluster still runs if 1 is broken but not if 2 are broken; with 5, it stops working once 3 are broken.
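The "more than half" rule reduces to simple arithmetic: with n votedisks, CRS tolerates at most (n-1)/2 failures (integer division). A minimal sketch, with a hypothetical helper name:

```shell
# For n votedisks, CRS needs more than half available,
# so it tolerates at most (n-1)/2 failures (integer division).
tolerated_failures() {
    n=$1
    echo $(( (n - 1) / 2 ))
}

tolerated_failures 3   # 1 failure tolerated
tolerated_failures 4   # still only 1 -- an even count buys nothing
tolerated_failures 5   # 2 failures tolerated
```

This is exactly why Oracle recommends an odd count: going from 3 to 4 disks adds storage without adding any fault tolerance.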

Add an OCR mirror:

[root@rhel1 bin]# touch /u01/ocfs2fs/ocr2

[root@rhel1 bin]# ./ocrconfig -replace ocrmirror /u01/ocfs2fs/ocr2

[root@rhel1 bin]# ./ocrcheck

Status of Oracle Cluster Registry is as follows:

Version: 2

Total space (kbytes): 262120

Used space (kbytes): 2756

Available space (kbytes): 259364

ID: 2062708016

Device/File Name: /u01/ocfs2fs/ocr

Device/File integrity check succeeded

Device/File Name: /u01/ocfs2fs/ocr2

Device/File integrity check succeeded

Cluster registry integrity check succeeded

Note: adding an OCR mirror can be done online. There can be at most 2 OCR devices: one primary OCR and one mirror OCR.

Delete the OCR mirror:

[root@rhel1 bin]# ./ocrconfig -replace ocrmirror

[root@rhel1 bin]# ./ocrcheck

Status of Oracle Cluster Registry is as follows:

Version: 2

Total space (kbytes): 262120

Used space (kbytes): 2756

Available space (kbytes): 259364

ID: 2062708016

Device/File Name: /u01/ocfs2fs/ocr

Device/File integrity check succeeded

Device/File not configured

Cluster registry integrity check succeeded

[root@rhel1 bin]# cat /etc/oracle/ocr.loc

#Device/file /u01/ocfs2fs/ocr2 being deleted

ocrconfig_loc=/u01/ocfs2fs/ocr
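Clusterware finds its OCR device by reading /etc/oracle/ocr.loc, as the transcript above shows. A small sketch of parsing such a file; the /tmp sample path is an assumption so the example is self-contained, and the file contents mirror the transcript.

```shell
# Read the OCR device path from an ocr.loc-style file,
# skipping comment lines (the sample path is hypothetical).
OCRLOC=/tmp/ocr.loc
cat > "$OCRLOC" <<'EOF'
#Device/file /u01/ocfs2fs/ocr2 being deleted
ocrconfig_loc=/u01/ocfs2fs/ocr
local_only=false
EOF

ocr_path=$(grep '^ocrconfig_loc=' "$OCRLOC" | cut -d= -f2)
echo "$ocr_path"   # /u01/ocfs2fs/ocr
```

Note that after the mirror is removed, only ocrconfig_loc remains; a second ocrmirror_loc line would appear here if a mirror were configured.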

Delete Voting Disk:

[root@rhel1 bin]# ./crsctl delete css votedisk /u01/ocfs2fs/vdisk2

Cluster is not in a ready state for online disk removal

[root@rhel1 bin]# ./crsctl stop crs

Stopping resources. This could take several minutes.

Successfully stopped CRS resources.

Stopping CSSD.

Shutting down CSS daemon.

Shutdown request successfully issued.

[root@rhel1 bin]# ./crsctl delete css votedisk /u01/ocfs2fs/vdisk2

Cluster is not in a ready state for online disk removal

[root@rhel1 bin]# ./crsctl delete css votedisk /u01/ocfs2fs/vdisk2 -force

Successful deletion of votedisk /u01/ocfs2fs/vdisk2.

[root@rhel1 bin]# ./crsctl query css votedisk

0. 0 /u01/ocfs2fs/vdisk

Located 1 votedisk(s).

Backup, restore Voting Disk:

[root@rhel1 bin]# ./crsctl check crs

Failure 1 contacting CSS daemon

Cannot communicate with CRS

Cannot communicate with EVM

[root@rhel1 bin]# dd if=/u01/ocfs2fs/vdisk of=/home/oracle/vdisk_bak    # backup; on ocfs2 the file can also simply be copied

20000+0 records in

20000+0 records out

10240000 bytes (10 MB) copied, 0.301106 seconds, 34.0 MB/s

[root@rhel1 bin]# rm -rf /u01/ocfs2fs/vdisk

[root@rhel1 bin]# dd if=/home/oracle/vdisk_bak of=/u01/ocfs2fs/vdisk    # restore; on ocfs2 the file can also simply be copied

20000+0 records in

20000+0 records out

10240000 bytes (10 MB) copied, 0.173678 seconds, 59.0 MB/s

[root@rhel1 bin]# chown oracle:oinstall /u01/ocfs2fs/vdisk

[root@rhel1 bin]# chmod 775 /u01/ocfs2fs/vdisk

[root@rhel1 bin]# /etc/init.d/init.crs start

Startup will be queued to init within 30 seconds.

[root@rhel1 bin]# ./crsctl check crs

CSS appears healthy

CRS appears healthy

EVM appears healthy

Backup, restore OCR:

Backup:

Before performing any major operation on the cluster, such as a Clusterware upgrade or an environment migration, back up the Votedisk and OCR. The Votedisk backup method was described above; the backup and recovery of the OCR are introduced here.

OCR has an automatic backup mechanism that includes the following rules:

1. An OCR backup is generated automatically every 4 hours, and the last 3 are kept.

2. The CRSD process also generates an OCR backup at the beginning of each day and retains the last 2.

3. The CRSD process also generates an OCR backup at the beginning of each week and retains the last 2.

Note that OCR automatic backups are retained on only one of the nodes, and not every node has an automatic backup.

You can view the contents of OCR automatic backups through OCRCONFIG:

[root@rhel1]# /oracle/app/crs/bin/ocrconfig -showbackup

rhel1 2010-10-26 05:20:57 /oracle/app/crs/cdata/crs

rhel1 2010-10-26 01:20:56 /oracle/app/crs/cdata/crs

rhel2 2010-10-24 22:34:18 /oracle/app/crs/cdata/crs

rhel1 2010-10-26 01:20:56 /oracle/app/crs/cdata/crs

rhel1 2010-10-22 23:54:15 /oracle/app/crs/cdata/crs

You can see that the OCR backups are spread across rhel1 and rhel2.

Looking inside the automatic backup directory, there are 7 backups in total: the last 3 four-hourly backups, the last 2 daily backups, and the last 2 weekly backups.

[root@rac1 ~]# ls -l /oracle/app/crs/cdata/crs

total 23560

-rw-r--r-- 1 root root 4812800 Oct 26 05:20 backup00.ocr

-rw-r--r-- 1 root root 4812800 Oct 26 01:20 backup01.ocr

-rw-r--r-- 1 root root 4812800 Oct 24 15:54 backup02.ocr

-rw-r--r-- 1 root root 4812800 Oct 27 02:10 day_.ocr

-rw-r--r-- 1 root root 4812800 Oct 26 01:20 day.ocr

-rw-r--r-- 1 root root 4812800 Oct 29 23:04 week_.ocr

-rw-r--r-- 1 root root 4812800 Oct 22 23:54 week.ocr
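The file names in the listing above encode the retention class. A small sketch that maps a backup file name to its class; the naming convention is taken from the listing, and the function name is hypothetical:

```shell
# Map an OCR automatic-backup file name to its retention class,
# following the naming convention shown in the listing above.
backup_class() {
    case "$1" in
        backup0[0-9].ocr) echo "4-hourly (last 3 kept)" ;;
        day*.ocr)         echo "daily (last 2 kept)" ;;
        week*.ocr)        echo "weekly (last 2 kept)" ;;
        *)                echo "unknown" ;;
    esac
}

backup_class backup00.ocr   # 4-hourly (last 3 kept)
backup_class day_.ocr       # daily (last 2 kept)
backup_class week.ocr       # weekly (last 2 kept)
```

The trailing-underscore variants (day_.ocr, week_.ocr) are simply the older of each retained pair, which is why 3 + 2 + 2 files add up to the 7 backups noted above.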

In addition to the automatic backups above, a manual backup should be taken before any major operation. A logical manual backup is made with the OCRCONFIG command:

[root@rhel1]# /oracle/app/crs/bin/ocrconfig -export ocr_logical_backup -s online

[root@rhel1 ~]# ls -l ocr_logical_backup

-rw-r--r-- 1 root root 103969 Nov 5 20:10 ocr_logical_backup

When taking a logical backup of the OCR while the cluster is up, add the -s online parameter to ensure consistency of the exported OCR.

Verify the consistency of the OCR with:

/oracle/app/crs/bin/cluvfy comp ocr -n all

In addition, if you are using the ocfs2 cluster file system, you can simply copy the OCR and votedisk files directly. On a raw device, the dd command can likewise be used to back up the OCR.

Restore:

1. Restore automatic backup files:

[root@rhel1 bin]# ./crsctl stop crs

Stopping resources. This could take several minutes.

Successfully stopped CRS resources.

Stopping CSSD.

Shutting down CSS daemon.

Shutdown request successfully issued.

[root@rhel1 bin]# cat /etc/oracle/ocr.loc

#Device/file /u01/ocfs2fs/ocr2 being deleted

ocrconfig_loc=/u01/ocfs2fs/ocr

local_only=false

[root@rhel1 bin]# mv /u01/ocfs2fs/ocr /u01/ocfs2fs/ocr_bak

[root@rhel1 crs_1]# cd cdata

[root@rhel1 cdata]# cd crs

[root@rhel1 crs]# ll

total 23144

-rwxrwxr-x 1 oracle dba 3514368 Mar 1 11:00 13968559

-rwxrwxr-x 1 oracle dba 3514368 Mar 1 07:00 33426182

-rw-r--r-- 1 root root 3182592 Mar 8 15:17 34809936

-rwxrwxr-x 1 oracle dba 3514368 Feb 27 09:38 backup00.ocr

-rwxrwxr-x 1 oracle dba 3514368 Feb 8 06:02 backup01.ocr

-rwxrwxr-x 1 oracle dba 2142208 Nov 8 19:25 backup02.ocr

-rwxrwxr-x 1 oracle dba 2142208 Nov 8 19:25 day.ocr

-rwxrwxr-x 1 oracle dba 2142208 Nov 8 19:25 week.ocr

[root@rhel1 crs]# cd ../../bin

[root@rhel1 bin]# ./ocrconfig -restore /u01/app/oracle/crs_1/cdata/crs/backup02.ocr

PROT-16: Internal Error

[root@rhel1 bin]# touch /u01/ocfs2fs/ocr

[root@rhel1 bin]# ./ocrconfig -restore /u01/app/oracle/crs_1/cdata/crs/backup02.ocr

[root@rhel1 bin]# ./crsctl start crs

Attempting to start CRS stack

The CRS stack will be started shortly

[root@rhel1 bin]# ./crsctl check crs

CSS appears healthy

CRS appears healthy

EVM appears healthy

2. Restore logical backups:

[root@rhel1]# /oracle/app/crs/bin/ocrconfig -import logical_backup.ocr

You should get into the habit of backing up the OCR and votedisk before any major operation on Clusterware, keeping both physical and logical backups.

Note: ocrconfig -export and -import normally require CRS to be stopped. To export while the cluster is online, add the -s online option to -export.
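Pulling the pieces together, a pre-maintenance routine combines a physical votedisk copy, a logical OCR export, and an integrity check. This sketch only prints the commands rather than executing them, since they require a live cluster and root access; CRS_HOME, the device paths, and the run helper are all assumptions for illustration.

```shell
# Sketch of a pre-maintenance backup routine for OCR and votedisk.
# CRS_HOME and the paths are hypothetical; "run" just echoes each
# command so the sketch works without a live cluster.
CRS_HOME=/oracle/app/crs
VDISK=/u01/ocfs2fs/vdisk
STAMP=$(date +%Y%m%d)

run() { echo "would run: $*"; }

# Physical votedisk backup (dd, as shown earlier).
run dd if="$VDISK" of="/home/oracle/vdisk_bak.$STAMP"
# Logical OCR export; -s online keeps the export consistent
# while the cluster stays up.
run "$CRS_HOME/bin/ocrconfig" -export "/home/oracle/ocr_export.$STAMP" -s online
# Verify OCR integrity afterwards.
run "$CRS_HOME/bin/cluvfy" comp ocr -n all
```

In a real run you would drop the run wrapper, execute as root, and keep the dated backup files off the shared storage they protect.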

Other:

./ocrconfig -showbackup    # view automatic backups

./ocrconfig -backuploc    # change the automatic backup directory

View the heartbeat timeout (misscount):

[oracle@rhel1 bin]$ ./crsctl get css misscount

60

List the internal modules of css, crs, and evm:

crsctl lsmodules [css | crs | evm]
