

What are the command sets of clusterware in oracle?


In this article, the editor shares the command sets of clusterware in oracle. Most people probably do not know much about them, so this article is offered for your reference; I hope you will gain a lot after reading it. Let's get to know them!

The commands of oracle clusterware can be divided into the following four layers:

Node layer: olsnodes

Network layer: oifcfg

Cluster layer: crsctl, ocrcheck, ocrdump, ocrconfig

Application layer: srvctl, onsctl, crs_stat

These commands are described below.

1. Node layer

olsnodes displays the list of cluster nodes. Execute it as the grid user; the available parameters are as follows:

$ olsnodes -h

Usage: olsnodes [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] | [-c] ] [-g] [-v]

-n print node number with the node name

-p print private interconnect name with the node name

-i print virtual ip name with the node name

<node> print information for the specified node

-l print information for the local node

-s print node status (active or inactive)

-g turn on event logging

-v run in verbose mode

-- these parameters can be mixed, as follows:

[grid@dbrac2 bin]$ which olsnodes

/oracle/app/11.2.0/grid/bin/olsnodes

[grid@dbrac2 bin]$

[grid@dbrac2 bin]$ olsnodes

dbrac1

dbrac2

[grid@dbrac2 bin]$ olsnodes -n

dbrac1 1

dbrac2 2

[grid@dbrac2 bin]$

[grid@dbrac2 bin]$ olsnodes -s

dbrac1 active

dbrac2 active

[grid@dbrac2 bin]$

2. Network layer

The network layer consists of each node's network components, typically 2 physical network cards and 3 ip addresses per node (the public ip, the vip, and the private ip). There is only one command at this layer: oifcfg.

The format of the oifcfg command is as follows:

[grid@dbrac2 bin]$ oifcfg

Name:

oifcfg - Oracle Interface Configuration Tool.

Usage: oifcfg iflist [-p [-n]]

oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...

oifcfg getif [-node <nodename> | -global] [-if <if_name>[/<subnet>] [-type <if_type>]]

oifcfg delif {{-node <nodename> | -global} [<if_name>[/<subnet>]] [-force] | -force}

oifcfg [-help]

<nodename> - name of the host, as known to a communications network

<if_name> - name by which the interface is configured in the system

<subnet> - subnet address of the interface

<if_type> - type of the interface { cluster_interconnect | public }

The oifcfg command is used to define and modify the attributes of the network interfaces required by the oracle cluster, including the subnet address, subnet mask, interface type, and so on.

To use this command correctly, you must first know how oracle defines a network interface. Each oracle network interface has three attributes, written as interface_name/subnet:interface_type: the name, the subnet address, and the interface type.

Note that none of these attributes is an ip address. There are two interface types, public and private: the former indicates the interface is used for external communication, i.e. for oracle net and the vip address, while the latter indicates the interface is used for the interconnect.

Interfaces can be configured in two ways: global and node-specific. The former means the configuration is the same on all nodes of the cluster, i.e. the configuration is symmetric; the latter means this node's configuration differs from that of the other nodes, i.e. it is asymmetric.
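For example, a minimal sketch of the two modes, reusing the eth2/10.10.10.0 interconnect that appears later in this section (the node name here is illustrative):

-- global: the same definition applies on every node in the cluster

[grid@dbrac2 bin]$ oifcfg setif -global eth2/10.10.10.0:cluster_interconnect

-- node-specific: the definition applies only to node dbrac1

[grid@dbrac2 bin]$ oifcfg setif -node dbrac1 eth2/10.10.10.0:cluster_interconnect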

iflist: display the list of network interfaces

getif: get the information of a single interface

setif: configure a single interface

delif: delete an interface configuration

-- display the list of network interfaces

[grid@dbrac1 ~]$ oifcfg iflist

eth0 192.168.56.0

eth2 10.10.10.0

eth2 169.254.0.0

-- display interface information

[grid@dbrac1 ~]$ oifcfg getif

eth0 192.168.56.0 global public

eth2 10.10.10.0 global cluster_interconnect

[grid@dbrac1 ~]$

-- view the interfaces of type public

[grid@dbrac2 bin]$ oifcfg getif -type public

eth0 192.168.56.0 global public

-- delete an interface configuration

[grid@dbrac2 bin]$ oifcfg delif -global

-- add interface configurations

[grid@dbrac2 bin]$ oifcfg setif -global eth0/192.168.1.119:public

[grid@dbrac2 bin]$ oifcfg setif -global eth2/10.85.10.119:cluster_interconnect

3. Cluster layer

The cluster layer refers to the core cluster managed by clusterware, which is responsible for maintaining the shared devices in the cluster and providing a complete cluster state view to the application layer, which adjusts itself according to this view. This layer has four commands: crsctl, ocrcheck, ocrdump, ocrconfig. The last three are for the ocr disk.

3.1 crsctl

The crsctl command can be used to check the crs process stack and the status of each crs daemon, to manage the votedisk, and to trace crs modules.

3.1.1 check crs status

[grid@dbrac2 bin]$ crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

[grid@dbrac2 bin]$

-- check individual status

[grid@dbrac2 bin]$ crsctl check cssd

CRS-272: This command remains for backward compatibility only

Cluster Synchronization Services is online

[grid@dbrac2 bin]$ crsctl check crsd

CRS-272: This command remains for backward compatibility only

Cluster Ready Services is online

[grid@dbrac2 bin]$ crsctl check evmd

CRS-272: This command remains for backward compatibility only

Event Manager is online

[grid@dbrac2 bin]$

3.1.2 configure whether the crs stack starts automatically

The crs process stack starts automatically with the operating system by default; sometimes this needs to be turned off for maintenance. Execute the following commands as the root user.

[root@dbrac2 ~]# crsctl disable crs

[root@dbrac2 ~]# crsctl enable crs

3.1.3 start and stop the crs stack

In oracle 10.1 you had to restart the operating system to restart clusterware, but starting with oracle 10.2 you can start and stop crs with a command.

-- start crs:

[grid@dbrac2 bin]$ crsctl start crs

Attempting to start crs stack

The crs stack will be started shortly

-- stop crs:

[grid@dbrac2 bin]$ crsctl stop crs

Stopping resources.

Successfully stopped crs resources

Stopping cssd.

Shutting down css daemon.

Shutdown request successfully issued.

3.1.4 View votedisk disk location

[grid@dbrac2 bin]$ crsctl query css votedisk

## STATE File Universal Id File Name Disk group

1. ONLINE 844bb4b723954f31bf8e6a0002e335aa (/dev/asm_ocrvote1) [OCRVOTE]

2. ONLINE 4a88237c23a84fe5bf8f235e84d60b5b (/dev/asm_ocrvote2) [OCRVOTE]

3. ONLINE 8398f24ab1e34faebf890fe7a7ef7919 (/dev/asm_ocrvote3) [OCRVOTE]

Located 3 voting disk(s).

[grid@dbrac2 bin]$

3.1.5 View and modify crs parameters

-- view a parameter: use get

[grid@dbrac2 bin]$ crsctl get css misscount

CRS-4678: Successful get misscount 30 for Cluster Synchronization Services.

[grid@dbrac2 bin]$

[grid@dbrac2 bin]$ crsctl get css disktimeout

CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services.

-- modify a parameter: use set, but use this function with caution

[grid@dbrac2 bin]$ crsctl set css misscount 60

3.1.6 trace crs modules (a diagnostic aid)

crs consists of three services, crs, css, and evm, and each service is composed of a series of modules. crsctl allows each module to be traced, with the trace recorded in a log.

[grid@dbrac2 bin]$ crsctl lsmodules css

[grid@dbrac2 bin]$ crsctl lsmodules evm

-- trace the cssd module; this must be executed as the root user:

[root@dbrac2 bin]# crsctl debug log css "cssd:1"

Configuration parameter trace is now set to 1.

Set crsd debug module: cssd level: 1

-- View the trace log

[root@rac1 cssd]# pwd

/u01/app/oracle/product/crs/log/rac1/cssd

[root@rac1 cssd]# more ocssd.log

...

3.1.7 maintain votedisk

During the graphical installation of clusterware, when configuring the votedisk you can fill in only one votedisk if you select the external redundancy policy. However, even with external redundancy, multiple votedisks can still be added afterwards, though they must be added with the crsctl command. Once multiple votedisks exist, they mirror each other, eliminating the votedisk as a single point of failure.

Note that votedisk uses a majority rule: if there is more than one votedisk, more than half of them must be available for clusterware to work properly. For example, with 4 votedisks configured, the cluster still works normally if 1 goes bad; if 2 go bad, more than half are no longer available, the cluster crashes immediately, and all nodes restart. Therefore, when adding votedisks, add 2 at a time rather than only 1, so that the total stays odd. This differs from the ocr, of which only one needs to be configured.

Adding and removing votedisks is a dangerous operation: you must stop the database, stop asm, and stop the crs stack before operating, and you must use the -force parameter.

1) View current configuration

[grid@dbrac2 ~]$ crsctl query css votedisk

## STATE File Universal Id File Name Disk group

1. ONLINE 844bb4b723954f31bf8e6a0002e335aa (/dev/asm_ocrvote1) [OCRVOTE]

2. ONLINE 4a88237c23a84fe5bf8f235e84d60b5b (/dev/asm_ocrvote2) [OCRVOTE]

3. ONLINE 8398f24ab1e34faebf890fe7a7ef7919 (/dev/asm_ocrvote3) [OCRVOTE]

Located 3 voting disk(s).

[grid@dbrac2 ~]$

2) stop the crs of all nodes:

[grid@dbrac2 bin]$ crsctl stop crs

3) add votedisk

[grid@dbrac2 bin]$ crsctl add css votedisk /dev/raw/rac1 -force

Note: even after crs is stopped, votedisks must be added and removed with the -force parameter, and -force is safe to use only while crs is stopped; otherwise it reports: cluster is not in a ready state for online disk addition.

4) confirm the situation after being added:

[grid@dbrac2 bin]$ crsctl query css votedisk

5) start crs

[grid@dbrac2 bin]$ crsctl start crs

-- you can change the voting disk configuration dynamically. To add a new voting disk, use:

# crsctl add css votedisk <path>

-- to delete a voting disk, use:

# crsctl delete css votedisk <path>

-- if Oracle Clusterware is shut down on all nodes, use the -force option:

# crsctl add css votedisk <path> -force

# crsctl delete css votedisk <path> -force

6) backup and restore voting disks:

You can use the dd command to back up a voting disk:

- after installing Oracle Clusterware

- after adding or removing nodes

- it can be executed online

$ crsctl query css votedisk

$ dd if=<voting_disk_path> of=<backup_path> bs=4k

You can restore a voting disk by using the dd command to restore the first voting disk and then multiplexing the disk as needed. If no voting disk backup is available, Oracle Clusterware should be reinstalled.
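For example, a minimal sketch using the voting disk device shown above (the backup path is an assumption):

$ dd if=/dev/asm_ocrvote1 of=/backup/votedisk1.bak bs=4k

-- and to restore from that backup:

$ dd if=/backup/votedisk1.bak of=/dev/asm_ocrvote1 bs=4k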

3.2 ocr Command Series

Oracle clusterware stores the configuration information of the entire cluster on shared storage, known as the ocr disk. In the whole cluster, only one node can read and write the ocr disk; this node is called the master node. Every node keeps a copy of the ocr in memory, and an ocr process serves reads from that memory. When the ocr content changes, the master node's ocr process is responsible for synchronizing the change to the ocr processes of the other nodes.

Because the contents of the ocr are so important, oracle backs it up every four hours and retains the last three backups, as well as the last backup from the previous day and the previous week. The backup is done by the crsd process of the master node, and the default backup location is the $CRS_HOME/cdata/<cluster_name> directory. After each backup, the backup file names are rotated to reflect the backup order, the most recent backup being called backup00.ocr. Besides these local backup files, the dba should keep a copy of them on another storage device to guard against storage failure.
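For example, a minimal sketch of copying the latest automatic backup to another device (the /backup/ocr target directory is an assumption):

[root@dbrac2 ~]# cp /oracle/app/11.2.0/grid/cdata/dbrac-cluster/backup00.ocr /backup/ocr/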

3.2.1 ocrdump

This command prints the contents of the ocr as ascii. It cannot be used as a backup from which to restore the ocr: the resulting file is for reading only, not for recovery.

Command format:

[grid@acctdb01 ~]$ ocrdump -help

Name:

ocrdump - Dump contents of Oracle Cluster/Local Registry to a file.

Synopsis:

ocrdump [-local] [<filename>|-stdout] [-backupfile <backupfilename>] [-keyname <keyname>] [-xml] [-noheader]

Description:

Default filename is OCRDUMPFILE. Examples are:

prompt> ocrdump

Writes cluster registry contents to OCRDUMPFILE in the current directory

prompt> ocrdump MYFILE

Writes cluster registry contents to MYFILE in the current directory

prompt> ocrdump -stdout -keyname SYSTEM

Writes the subtree of SYSTEM in the cluster registry to stdout

prompt> ocrdump -local -stdout -xml

Writes local registry contents to stdout in xml format

prompt> ocrdump -backupfile /oracle/CRSHOME/backup.ocr -stdout -xml

Writes registry contents in the backup file to stdout in xml format

Notes:

The header information will be retrieved based on best effort basis.

A log file will be created in $ORACLE_HOME/log/<hostname>/client/ocrdump_<pid>.log. Make sure you have file creation privileges in the above directory before running this tool.

Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry.

[grid@acctdb01 ~]$

Parameter description:

-stdout: print the contents to the screen

<filename>: write the contents to a file

-keyname: print only the contents of a given key and its subkeys

-xml: print the output in xml format

Example: print the contents of the system.css key to the screen in xml format

[grid@dbrac2 bin]$ ocrdump -stdout -keyname system.css -xml | more

...

While this command executes, a log file named ocrdump_<pid>.log is generated in the $CRS_HOME/log/<hostname>/client directory. If the command runs into a problem, you can look for the cause in this log.

3.2.2 ocrcheck (executed as root)

The ocrcheck command checks the consistency of the ocr contents; executing it generates an ocrcheck_<pid>.log log file in the $CRS_HOME/log/<nodename>/client directory. This command takes no arguments.

Syntax:

[grid@dbrac2 ~]$ ocrcheck -h

Name:

ocrcheck - Displays health of Oracle Cluster/Local Registry.

Synopsis:

ocrcheck [-config] [-local]

-config Displays the configured locations of the Oracle Cluster Registry. This can be used with the -local option to display the configured location of the Oracle Local Registry

-local The operation will be performed on the Oracle Local Registry.

Notes:

A log file will be created in $ORACLE_HOME/log/<hostname>/client/ocrcheck_<pid>.log. File creation privileges in the above directory are needed when running this tool.

-- root execution

[root@dbrac2 ~]# /oracle/app/11.2.0/grid/bin/ocrcheck

Status of Oracle Cluster Registry is as follows:

Version: 3

Total space (kbytes): 262120

Used space (kbytes): 3120

Available space (kbytes): 259000

ID: 6988404

Device/File Name: +OCRVOTE

Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@dbrac2 ~]#

3.2.3 ocrconfig

This command is used to maintain the ocr disk. During clusterware installation, if you choose external redundancy, you can enter only one ocr disk location. However, oracle allows you to configure a second ocr disk as a mirror to prevent a single point of failure of the ocr. Unlike votedisks, there can be at most two ocr disks: one primary ocr and one mirror ocr.

[grid@dbrac2 ~]$ ocrconfig -help

Name:

ocrconfig - Configuration tool for Oracle Cluster/Local Registry.

Synopsis:

ocrconfig [option]

option:

[-local] -export <filename> - Export OCR/OLR contents to a file

[-local] -import <filename> - Import OCR/OLR contents from a file

[-local] -upgrade [<user> [<group>]] - Upgrade OCR from previous version

-downgrade [-version <version string>] - Downgrade OCR to the specified version

[-local] -backuploc <dirname> - Configure OCR/OLR backup location

[-local] -showbackup [auto|manual] - Show OCR/OLR backup information

[-local] -manualbackup - Perform OCR/OLR backup

[-local] -restore <filename> - Restore OCR/OLR from physical backup

-replace <current filename> -replacement <new filename> - Replace an OCR device or file with a new one

-add <filename> - Add a new OCR device/file

-delete <filename> - Remove an OCR device/file

-overwrite - Overwrite OCR configuration on disk

-repair -add <filename> | -delete <filename> | -replace <current filename> -replacement <new filename> - Repair OCR configuration on the local node

-help - Print out this help information

Note:

* A log file will be created in $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure you have file creation privileges in the above directory before running this tool.

* Only -local -showbackup [manual] is supported.

* Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry.

[grid@dbrac2 ~]$

-- view the automatic backups

By default, the ocr is automatically backed up into the $CRS_HOME/cdata/<cluster_name> directory; the backup location can be changed to a new directory with the ocrconfig -backuploc command.
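For example, a minimal sketch of relocating the backups (the target directory is an assumption and should be accessible from every node):

[root@dbrac2 ~]# ocrconfig -backuploc /backup/ocr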

[grid@dbrac2 ~]$ ocrconfig -showbackup

dbrac2 2018-01-12 03:10:52 /oracle/app/11.2.0/grid/cdata/dbrac-cluster/backup00.ocr

dbrac2 2017-10-20 00:32:00 /oracle/app/11.2.0/grid/cdata/dbrac-cluster/backup01.ocr

dbrac2 2017-10-19 15:09:44 /oracle/app/11.2.0/grid/cdata/dbrac-cluster/backup02.ocr

dbrac2 2018-01-12 03:10:52 /oracle/app/11.2.0/grid/cdata/dbrac-cluster/day.ocr

dbrac2 2018-01-12 03:10:52 /oracle/app/11.2.0/grid/cdata/dbrac-cluster/week.ocr

PROT-25: Manual backups for the Oracle Cluster Registry are not available

[grid@dbrac2 ~]$

3.2.4 backup and restore using export and import (performed as root)

Oracle recommends that before making adjustments to the cluster, such as adding or deleting nodes, you back up the ocr; you can use export to back it up to a specified file. After operations such as replace or restore, oracle recommends a full check with the cluvfy comp ocr -n all command, which ships with the clusterware installation media.

1) first stop crs on all nodes

[root@dbrac1 ~]# /oracle/app/11.2.0/grid/bin/crsctl stop crs

[root@dbrac2 ~]# /oracle/app/11.2.0/grid/bin/crsctl stop crs

2) export the ocr contents as the root user

[root@dbrac2 ~]# /oracle/app/11.2.0/grid/bin/ocrconfig -export /home/grid/ocr.exp

-- View

[grid@dbrac2 ~]$ pwd

/home/grid

[grid@dbrac2 ~]$ ls -lrt ocr*

-rw- 1 root root 122854 Jan 12 03:42 ocr.exp

[grid@dbrac2 ~]$

3) restart crs

[root@dbrac1 ~]# /oracle/app/11.2.0/grid/bin/crsctl start crs

[root@dbrac2 ~]# /oracle/app/11.2.0/grid/bin/crsctl start crs

4) check the status of crs

[grid@dbrac2 ~]$ crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

[grid@dbrac2 ~] $

5) destroy OCR content

[root@rac1 bin]# dd if=/dev/zero of=/dev/raw/rac1 bs=1024 count=102400

102400+0 records in

102400+0 records out

6) check ocr consistency

[root@dbrac2 ~]# ocrcheck

PROT-601: Failed to initialize ocrcheck

7) use the cluvfy tool to check consistency

[root@dbrac2 ~]# runcluvfy.sh comp ocr -n all

Verifying ocr integrity

Unable to retrieve nodelist from oracle clusterware.

Verification cannot proceed.

8) use import to restore ocr content

[root@dbrac2 ~]# ocrconfig -import /home/grid/ocr.exp

9) check ocr again

[root@dbrac2 ~]# ocrcheck

10) check using the cluvfy tool

[root@rac1 cluvfy]# runcluvfy.sh comp ocr -n all

3.2.5 move the ocr file location

This example demonstrates moving the ocr from /dev/raw/rac1 to /dev/raw/raw3.

1) check whether there is an ocr backup

[grid@dbrac2 bin]$ ocrconfig -showbackup

If you do not have a backup, you can immediately perform an export as a backup:

[root@dbrac2 ~]# ocrconfig -export /u01/ocrbackup -s online

2) View the current ocr configuration

[root@dbrac2 ~]# ocrcheck

3) add a mirror ocr

[root@dbrac2 ~]# ocrconfig -replace ocrmirror /dev/raw/raw4

4) confirm that the addition succeeded

[grid@dbrac2 bin]$ ocrcheck

5) change the location of the primary ocr

[root@dbrac2 ~]# ocrconfig -replace ocr /dev/raw/raw3

Confirm that the modification succeeded:

[grid@dbrac2 bin]$ ocrcheck

6) after modification with the ocrconfig command, the /etc/oracle/ocr.loc file on all rac nodes is synchronized automatically. If it is not synchronized automatically, you can change it manually to the following.

[grid@dbrac2 bin]$ more /etc/oracle/ocr.loc

ocrconfig_loc=/dev/raw/raw3

ocrmirrorconfig_loc=/dev/raw/raw4

local_only=false

Summary:

Restore the OCR using a physical backup

1. Locate the physical backup:

$ ocrconfig -showbackup

2. Check its contents:

# ocrdump -backupfile <file_name>

3. Stop Oracle Clusterware on all nodes:

# crsctl stop crs

4. Restore the OCR physical backup:

# ocrconfig -restore /cdata/jfv_clus/day.ocr

5. Restart Oracle Clusterware on all nodes:

# crsctl start crs

6. Check OCR integrity:

$ cluvfy comp ocr -n all

Restore the OCR using a logical backup

1. Locate the logical backup created with the OCR export file.

2. Stop Oracle Clusterware on all nodes:

# crsctl stop crs

3. Restore the logical OCR backup:

# ocrconfig -import /shared/export/ocrback.dmp

4. Restart Oracle Clusterware on all nodes:

# crsctl start crs

5. Check OCR integrity:

$ cluvfy comp ocr -n all

Replace OCR

# ocrcheck

Status of Oracle Cluster Registry is as follows:

Version: 2

Total space (kbytes): 200692

Used space (kbytes): 3752

Available space (kbytes): 196940

ID: 495185602

Device/File Name: /oradata/OCR1

Device/File integrity check succeeded

Device/File Name: /oradata/OCR2

Device/File needs to be synchronized with the other device

# ocrconfig -replace ocrmirror /oradata/OCR2

Repair OCR configuration

1. Stop Oracle Clusterware on node 2:

# crsctl stop crs

2. Add an OCR mirror from node 1:

# ocrconfig -replace ocrmirror /OCRMirror

3. Repair the OCR mirror location on node 2:

# ocrconfig -repair ocrmirror /OCRMirror

4. Start Oracle Clusterware on node 2:

# crsctl start crs

Considerations for OCR

1. If you use raw devices to store OCR files, you need to ensure that the files already exist before performing an add or replace operation.

2. When using ocrconfig, you must be the root user to add, replace, or delete OCR files.

3. When you add or replace an OCR file, its mirror needs to be online.

4. If you delete the OCR primary file, the OCR mirror file becomes the primary file.

5. The last remaining OCR file cannot be deleted.

4. Application layer

The application layer refers to the rac database. This layer consists of several resources, each of which is a complete service made up of a process or a group of processes. Management and maintenance at this layer revolve around these resources. There are three commands: srvctl, onsctl, and crs_stat.

4.1.1 crs_stat

The crs_stat command is used to view the running state of all resources maintained by crs. Without any parameters it displays summary information for all resources; each resource is displayed with several attributes: resource name, type, target, running state, and so on.

[grid@dbrac2 bin]$ crs_stat

...

You can also specify a resource name to view the status of that resource, and use the -v and -p options to view details; the -p option shows more detail than -v.

1) View the status of a specified resource

[grid@dbrac2 ~]$ crs_stat ora.dbrac2.vip

NAME=ora.dbrac2.vip

TYPE=ora.cluster_vip_net1.type

TARGET=ONLINE

STATE=ONLINE on dbrac2

[grid@dbrac2 ~] $

2) use the -v option to view details; this outputs 4 more items: the number of restarts allowed, the number of restarts performed, the failure threshold, and the failure count.

[grid@dbrac2 ~]$ crs_stat -v ora.dbrac2.vip

NAME=ora.dbrac2.vip

TYPE=ora.cluster_vip_net1.type

RESTART_ATTEMPTS=0

RESTART_COUNT=0

FAILURE_THRESHOLD=0

FAILURE_COUNT=0

TARGET=ONLINE

STATE=ONLINE on dbrac2

[grid@dbrac2 ~] $

3) use the -p option to view more details

[grid@dbrac2 ~]$ crs_stat -p ora.dbrac2.vip

NAME=ora.dbrac2.vip

TYPE=ora.cluster_vip_net1.type

ACTION_SCRIPT=

ACTIVE_PLACEMENT=1

AUTO_START=restore

CHECK_INTERVAL=1

DESCRIPTION=Oracle VIP resource

FAILOVER_DELAY=0

FAILURE_INTERVAL=0

FAILURE_THRESHOLD=0

HOSTING_MEMBERS=dbrac2

PLACEMENT=favored

RESTART_ATTEMPTS=0

SCRIPT_TIMEOUT=60

START_TIMEOUT=120

STOP_TIMEOUT=0

UPTIME_THRESHOLD=1h

[grid@dbrac2 ~] $

These fields are common to all resources, but some fields can be null, depending on the resource type.

4) using the -ls option, you can view each resource's permission definitions, in the same format as linux.

[grid@dbrac2 ~]$ crs_stat -ls ora.dbrac2.vip

Name Owner Primary PrivGrp Permission

------------------------------------------------------------

ora.dbrac2.vip root root rwxr-xr--

[grid@dbrac2 ~] $

4.1.2 onsctl

This command is used to manage and configure ons (oracle notification service). ons is the foundation on which oracle clusterware implements the fan (fast application notification) event push model. In the traditional model, the client must poll the server periodically to learn its state, essentially a pull model. Oracle 10g introduced a new push mechanism, fan: when something happens on the server, the server actively notifies the client of the change, so the client learns of server-side changes as soon as possible. This mechanism relies on ons, and the ons service needs to be configured before the onsctl command is used.

4.1.2.1 ons configuration content

In a rac environment, note that you use the ons under $CRS_HOME rather than the ons under $ORACLE_HOME. The configuration file is $CRS_HOME/opmn/conf/ons.config.

[grid@acctdb01 conf]$ pwd

/oracle/app/11.2.0/grid/opmn/conf

[grid@acctdb01 conf]$ ls

ons.config ons.config.acctdb01 ons.config.acctdb01.bak ons.config.bak.acctdb01

[grid@acctdb01 conf]$ more ons.config

usesharedinstall=true

allowgroup=true

localport=6100 # line added by Agent

remoteport=6200 # line added by Agent

nodes=acctdb01:6200,acctdb02:6200 # line added by Agent

[grid@acctdb01 conf]$

Parameter description:

localport: the local listening port, bound to the loopback address 127.0.0.1 and used to communicate with clients running locally.

remoteport: the remote listening port, i.e. all local ip addresses other than 127.0.0.1, used to communicate with remote clients.

loglevel: oracle allows the running of the ons process to be traced and logged to a local file; this parameter defines the log level recorded by the ons process, from 1 to 9, with a default of 3.

logfile: used together with loglevel, this parameter defines the location of the ons process log file. The default is $CRS_HOME/opmn/logs/opmn.log.

nodes and useocr: together, these two parameters determine which remote nodes' ons daemons the local ons daemon communicates with.

The nodes parameter value has the format hostname/ip:port[,hostname/ip:port...]

For example: useocr=off

nodes=rac1:6200,rac2:6200

The useocr parameter takes on/off. If useocr is on, the node information is stored in the ocr; if off, the information is taken from the nodes configuration. For a single-instance environment, set useocr to off.
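Putting these parameters together, a minimal sketch of an ons.config that does not use the ocr might look like this (the ports, log path, and node list are assumptions for illustration):

localport=6100

remoteport=6200

loglevel=3

logfile=/oracle/app/11.2.0/grid/opmn/logs/opmn.log

useocr=off

nodes=rac1:6200,rac2:6200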

4.1.2.2 configuring ons

The configuration can be changed by editing the ons configuration file directly. If the ocr is used, the configuration can be done through the racgons command, but it must be executed by the root user; if executed by the oracle user, no error is raised, but no configuration is changed either.

To add a configuration, use the following command:

racgons add_config rac1:6200 rac2:6200

To delete the configuration, use the following command:

racgons remove_config rac1:6200 rac2:6200

4.1.2.3 onsctl command

The onsctl command is used to start, stop, and debug ons and to reload its configuration file; its format is shown below.

Note that a running ons process does not necessarily mean ons is working properly; use the ping command to confirm.

[grid@acctdb01 ~]$ onsctl

ERROR!

Usage: onsctl [verbose] <command> [<options>]

The verbose option enables debug tracing and logging (for the server start).

Permitted <command>/<options> combinations are:

command options

------- -------

start - Start ons

shutdown - Shutdown ons

reload - Trigger ons to reread its configuration file

debug [<attr>=<val> ...] - Display ons server debug information

set [<attr>=<val> ...] - Set ons log parameters

query [<attr>=<val>] - Query ons log parameters

ping [<max-retry>] - Ping local ons

help - Print brief usage description (this)

usage [<command>] - Print detailed usage description

[grid@acctdb01 ~]$

1) View the process status at the os level.

[grid@dbrac2 bin]$ ps -aef | grep ons

2) confirm the status of the ons service

[grid@dbrac2 bin]$ onsctl ping

3) start the ons service

[grid@dbrac2 bin]$ onsctl start

4) using the debug option, you can view detailed information, the most significant of which is the display of all connections.

[grid@dbrac1 ~]$ onsctl debug

HTTP/1.1 200 OK

Content-Length: 2435

Content-Type: text/html

Response:

= = dbrac1:6200 9880 19:32:52 on 18-03-06 =

Home: /oracle/app/11.2.0/grid

= ONS =

IP ADDRESS PORT TIME SEQUENCE FLAGS

192.168.56.2 6200 5a9e7c23 00000002 00000008

Listener:

TYPE BIND ADDRESS PORT SOCKET

-

Local 0.0.0.1 6100 5

Local 127.0.0.1 6100 6

Remote any 6200 7

Remote any 6200-

Servers: (1)

INSTANCE NAME TIME SEQUENCE FLAGS DEFER

DbInstance_dbrac2_6200 5a9e739d 00000006 00000002 0

192.168.56.4 6200

Connection Topology: (2)

IP PORT VERS TIME

192.168.56.4 6200 4 5a9e739d

* * 192.168.56.2 6200

192.168.56.2 6200 4 5a9e7c23 =

* * 192.168.56.4 6200

Server connections:

ID CONNECTION ADDRESS PORT FLAGS SENDQ REF WSAQ

0 192.168.56.4 6200 010405 00000 001

Client connections:

ID CONNECTION ADDRESS PORT FLAGS SENDQ REF SUB W

1 internal 0 01008a 00000 001 002

4 0.0.0.1 6100 01001a 00000 001 000

Request 0.0.0.1 6100 03201a 00000 001 000

Worker Ticket: 18/18, Last: 18-03-06 19:32:47

THREAD FLAGS

--

Aa4f9700 00000012

Aa4b8700 00000012

Aa477700 00000012

Resources:

Notifications:

Received: Total 6 (Internal 2), in Receive Q: 0

Processed: Total 6, in Process Q: 0

Pool Counts:

Message: 1, Link: 1, Ack: 1, Match: 1

[grid@dbrac1 ~]$

4.1.3 srvctl

This command is the most commonly used and most complex command in rac maintenance. It can manage the following resources: database, instance, asm, service, listener, and node applications, where node applications include gsd, ons, and vip. Besides srvctl, some resources also have their own management tools: for example, ons can be managed with the onsctl command, and the listener can be managed through lsnrctl.

[grid@dbrac1 ~]$ srvctl -help

Usage: srvctl <command> <object> [<options>]

Commands: enable | disable | start | stop | relocate | status | add | remove | modify | getenv | setenv | unsetenv | config | convert | upgrade

Objects: database | instance | service | nodeapps | vip | network | asm | diskgroup | listener | srvpool | scan | scan_listener | oc4j | home | filesystem | gns | cvu

For detailed help on each command and object and its options use:

srvctl <command> -h or

srvctl <command> <object> -h

[grid@dbrac1 ~]$

4.1.3.1 use config to view configuration

1) View database configuration

Without any parameters, it displays all databases registered in the ocr

[grid@dbrac1 ~]$ srvctl config database

dbrac

[grid@dbrac1 ~]$

-- use the -d option to view the configuration of a specific database

[grid@dbrac1 ~]$ srvctl config database -d dbrac

Database unique name: dbrac

Database name: dbrac

Oracle home: /oracle/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +DATA/dbrac/spfiledbrac.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: dbrac

Database instances: dbrac1,dbrac2

Disk Groups: DATA,FRA

Mount point paths:

Services:

Type: RAC

Database is administrator managed

[grid@dbrac1 ~] $

Note: the output shows that database dbrac consists of two instances, dbrac1 and dbrac2, and that the $ORACLE_HOME of both instances is /oracle/app/oracle/product/11.2.0/dbhome_1.

-- use the -a option to view configuration details

[grid@dbrac1 ~]$ srvctl config database -d dbrac -a

Database unique name: dbrac

Database name: dbrac

Oracle home: /oracle/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +DATA/dbrac/spfiledbrac.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: dbrac

Database instances: dbrac1,dbrac2

Disk Groups: DATA,FRA

Mount point paths:

Services:

Type: RAC

Database is enabled

Database is administrator managed

[grid@dbrac1 ~] $

2) View the configuration of the node applications

[grid@dbrac1 ~]$ srvctl config nodeapps -h

Displays the configuration information for the node applications.

Usage: srvctl config nodeapps [-a] [-g] [-s]

-a Display VIP configuration

-g Display GSD configuration

-s Display ONS daemon configuration

-h Print usage

[grid@dbrac1 ~] $

Without any parameters, it returns the network, vip, gsd, and ons configuration

[grid@dbrac1 ~]$ srvctl config nodeapps

Network exists: 1/192.168.56.0/255.255.255.0/eth0, type static

VIP exists: /dbrac1-vip/192.168.56.3/192.168.56.0/255.255.255.0/eth0, hosting node dbrac1

VIP exists: /dbrac2-vip/192.168.56.5/192.168.56.0/255.255.255.0/eth0, hosting node dbrac2

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

[grid@dbrac1 ~] $

-- use the -a option to view the vip configuration

[grid@dbrac1 ~]$ srvctl config nodeapps -a

Network exists: 1/192.168.56.0/255.255.255.0/eth0, type static

VIP exists: /dbrac1-vip/192.168.56.3/192.168.56.0/255.255.255.0/eth0, hosting node dbrac1

VIP exists: /dbrac2-vip/192.168.56.5/192.168.56.0/255.255.255.0/eth0, hosting node dbrac2

[grid@dbrac1 ~] $

-- use the -g option to view the gsd:

[grid@dbrac1 ~]$ srvctl config nodeapps -g

GSD exists

[grid@dbrac1 ~] $

-- use the -s option to view the ons:

[grid@dbrac1 ~]$ srvctl config nodeapps -s

ONS exists: Local port 6100, remote port 6200, EM port 2016

[grid@dbrac1 ~] $

3) check the listener.

[grid@dbrac1 ~]$ srvctl config listener -l LISTENER

Name: LISTENER

Network: 1, Owner: grid

Home:

End points: TCP:1521

[grid@dbrac1 ~] $

[grid@dbrac1 ~]$ srvctl config listener -l LISTENER -a

Name: LISTENER

Network: 1, Owner: grid

Home:

/oracle/app/11.2.0/grid on node(s) dbrac2,dbrac1

End points: TCP:1521

[grid@dbrac1 ~] $


4) View asm

[grid@dbrac1 ~]$ srvctl config asm -h

Displays the configuration for ASM.

Usage: srvctl config asm [-a]

-a Print detailed configuration information

-h Print usage

[grid@dbrac1 ~] $

[grid@dbrac1 ~]$ srvctl config asm

ASM home: /oracle/app/11.2.0/grid

ASM listener: LISTENER

[grid@dbrac1 ~] $

[grid@dbrac1 ~]$ srvctl config asm -a

ASM home: /oracle/app/11.2.0/grid

ASM listener: LISTENER

ASM is enabled.

[grid@dbrac1 ~] $

5) View service

-- View all service configurations in the database

[grid@dbrac1 ~]$ srvctl config service -h

Displays the configuration for the service.

Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-v]

-d <db_unique_name> Unique name for the database

-s <service_name> Service name

-v Verbose output

-h Print usage

[grid@dbrac1 ~] $

4.1.3.2 add objects using add

In general, application layer resources are registered in the ocr by the graphical tools: vip and ons are created in the final stage of the clusterware installation, database and asm are automatically registered in the ocr while dbca runs, and the listener is registered in the ocr through the netca tool. But sometimes resources need to be registered in the ocr manually; that is what the add command is for.

1) add database

[grid@dbrac2 bin]$ srvctl add database -d dbrac -o $ORACLE_HOME

2) add instances

[grid@dbrac2 bin]$ srvctl add instance -d dbrac -n dbrac1 -i dbrac1

[grid@dbrac2 bin]$ srvctl add instance -d dbrac -n dbrac2 -i dbrac2

3) to add a service, 4 parameters are required:

-s: service name

-r: preferred instance name

-a: available (alternate) instance name

-P: taf policy. Possible values are none (the default), basic, and preconnect.

[grid@dbrac1 ~]$ srvctl add service -d dbrac -s dmmservice -r dbrac1 -a dbrac2 -P basic

4) confirm that the addition succeeded

-- view the service configuration:

[oracle@dbrac1 ~]$ srvctl config service -d dbrac -s dmmservice -v

Service name: dmmservice

Service is enabled

Server pool: dbrac_dmmservice

Cardinality: 1

Disconnect: false

Service role: PRIMARY

Management policy: AUTOMATIC

DTP transaction: false

AQ HA notifications: false

Failover type: NONE

Failover method: NONE

TAF failover retries: 0

TAF failover delay: 0

Connection Load Balancing Goal: LONG

Runtime Load Balancing Goal: NONE

TAF policy specification: BASIC

Edition:

Preferred instances: dbrac1

Available instances: dbrac2

[oracle@dbrac1 ~] $

[grid@dbrac1 ~]$ crsctl status resource ora.dbrac.dmmservice.svc

NAME=ora.dbrac.dmmservice.svc

TYPE=ora.service.type

TARGET=OFFLINE

STATE=OFFLINE on dbrac1

[grid@dbrac1 ~] $

-- start the service:

[oracle@dbrac1 ~]$ srvctl start service -d dbrac -s dmmservice

-- check the service status again:

[grid@dbrac1 ~]$ crsctl status resource ora.dbrac.dmmservice.svc

NAME=ora.dbrac.dmmservice.svc

TYPE=ora.service.type

TARGET=ONLINE

STATE=ONLINE on dbrac1

[grid@dbrac1 ~] $

-

[oracle@dbrac1 ~]$ srvctl add service -d dbrac -s abcde -r dbrac2 -a dbrac1 -P BASIC -y automatic -e SELECT -z 5 -w 180

[oracle@dbrac1 ~]$ srvctl add service -d dbrac -s aaaa -r dbrac1,dbrac2 -P BASIC -e SELECT -z 10 -w 60

[oracle@dbrac1 ~]$ srvctl config service -d dbrac -s abcde -v

Service name: abcde

Service is enabled

Server pool: dbrac_abcde

Cardinality: 1

Disconnect: false

Service role: PRIMARY

Management policy: AUTOMATIC

DTP transaction: false

AQ HA notifications: false

Failover type: SELECT

Failover method: NONE

TAF failover retries: 5

TAF failover delay: 180

Connection Load Balancing Goal: LONG

Runtime Load Balancing Goal: NONE

TAF policy specification: BASIC

Edition:

Preferred instances: dbrac2

Available instances: dbrac1

[oracle@dbrac1 ~] $

[oracle@dbrac1 ~]$ srvctl config service -d dbrac -s aaaa -v

Service name: aaaa

Service is enabled

Server pool: dbrac_aaaa

Cardinality: 2

Disconnect: false

Service role: PRIMARY

Management policy: AUTOMATIC

DTP transaction: false

AQ HA notifications: false

Failover type: SELECT

Failover method: NONE

TAF failover retries: 10

TAF failover delay: 60

Connection Load Balancing Goal: LONG

Runtime Load Balancing Goal: NONE

TAF policy specification: BASIC

Edition:

Preferred instances: dbrac1,dbrac2

Available instances:

[oracle@dbrac1 ~] $

4.1.3.3 enable and disable objects with enable/disable

By default, databases, instances, services, and asm all start automatically when crs starts; sometimes this behavior needs to be turned off for maintenance.

1) configure the database to start automatically when crs starts

-- enable self-startup of the database:

[grid@dbrac2 bin]$ srvctl enable database -d dbrac

-- View configuration

[oracle@dbrac1 ~]$ srvctl config database -d dbrac -a

Database unique name: dbrac

Database name: dbrac

Oracle home: /oracle/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +DATA/dbrac/spfiledbrac.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: dbrac

Database instances: dbrac1,dbrac2

Disk Groups: DATA,FRA

Mount point paths:

Services: aaaa,abcde,dmmservice

Type: RAC

Database is enabled

Database is administrator managed

[oracle@dbrac1 ~] $

-- prohibit the database from starting automatically after crs startup. You need to start it manually.

[grid@dbrac2 bin]$ srvctl disable database -d dbrac

2) disable or enable the automatic startup of an instance

[grid@dbrac2 bin]$ srvctl disable instance -d dbrac -i dbrac1

[grid@dbrac2 bin]$ srvctl enable instance -d dbrac -i dbrac2

-- View information

[grid@dbrac2 bin]$ srvctl config database -d dbrac -a

3) prohibit a service from running on an instance

[grid@dbrac2 bin]$ srvctl enable service -d dbrac -s aaaa -i dbrac1

[grid@dbrac2 bin]$ srvctl disable service -d dbrac -s aaaa -i dbrac2

-- View

[oracle@dbrac1 ~]$ srvctl config database -d dbrac -a

4.1.3.4 use remove to delete objects

The remove command deletes an object's definition from the ocr; the object itself, such as the database's data files, is not deleted, and the object can be re-added to the ocr at any time with the add command.

1) delete a service. Before deleting, the command gives a confirmation prompt.

[oracle@dbrac1 ~]$ srvctl remove service -d dbrac -s aaaa

2) delete an instance. You will also be prompted before it is deleted.

[grid@dbrac2 bin]$ srvctl remove instance -d dbrac -i dbrac1

3) delete the database

[grid@dbrac2 bin]$ srvctl remove database -d dbrac

4.1.3.5 start, stop, and view objects

Although you can still start and shut down the database with sqlplus in a rac environment, it is recommended to use the srvctl command for this work, which ensures that the running information kept in crs is updated as well. Use the start/stop commands to start and stop an object, and the status command to check the object's state.

1) start the database. By default it starts to the open state.

[grid@dbrac2 bin]$ srvctl start database -d dbrac

2) specify the startup state

[grid@dbrac2 bin]$ srvctl start instance -d dbrac -i dbrac1 -o mount

[grid@dbrac2 bin]$ srvctl start instance -d dbrac -i dbrac2 -o nomount

3) stop an object, specifying the shutdown mode

[grid@dbrac2 bin]$ srvctl stop instance -d dbrac -i dbrac1 -o immediate

[grid@dbrac2 bin]$ srvctl stop instance -d dbrac -i dbrac2 -o abort

4) start a service on a specified instance:

[grid@dbrac2 bin]$ srvctl start service -d dbrac -s aaaa -i dbrac1

-- view the service status

[grid@dbrac2 bin]$ srvctl status service -d dbrac -s aaaa -v

5) stop a service on a specified instance

[grid@dbrac2 bin]$ srvctl stop service -d dbrac -s aaaa -i dbrac1

-- view the service status

[grid@dbrac2 bin]$ srvctl status service -d dbrac -v

4.1.3.6 tracing srvctl

Tracing srvctl in oracle 10g is very simple: just set the os environment variable SRVM_TRACE=true, and all function calls made by the command are printed to the screen to help with diagnosis.

[grid@dbrac2 bin]$ export SRVM_TRACE=true

[grid@dbrac2 bin]$ srvctl config database -d dbrac
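To turn tracing off again, simply unset the variable:

[grid@dbrac2 bin]$ unset SRVM_TRACE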

4.1.4 recovery

If both the ocr disk and the votedisk are destroyed and there is no backup, the simplest recovery is to reinitialize the ocr and votedisk. The specific steps are as follows:

1) stop the clusterware stack on all nodes

# crsctl stop crs

2) execute the $CRS_HOME/install/rootdelete.sh script on each node as the root user

3) execute the $CRS_HOME/install/rootinstall.sh script as the root user on any one node

4) on the same node as the previous step, execute the $CRS_HOME/root.sh script as root

5) execute the $CRS_HOME/root.sh script as root on the other nodes

6) reconfigure the listener with the netca command and confirm that it is registered with clusterware

# crs_stat -t -v

At this point only the listener, ons, gsd, and vip are registered in the ocr; asm and the databases still need to be registered in the ocr.

-- add asm to the ocr

# srvctl add asm -n rac1 -i +asm1 -o /u01/app/product/database

# srvctl add asm -n rac2 -i +asm2 -o /u01/app/product/database

-- start asm

# srvctl start asm -n rac1

# srvctl start asm -n rac2

If an ora-27550 error is reported at startup, it is because rac cannot determine which network card to use as the private interconnect. The solution is to add the following parameters to the pfiles of the two asm instances:

+asm1.cluster_interconnects='10.85.10.119'

+asm2.cluster_interconnects='10.85.10.121'

-- manually add the database object to the ocr

# srvctl add database -d raw -o /u01/app/product/database

-- add the 2 instance objects

# srvctl add instance -d raw -i rac1 -n rac1

# srvctl add instance -d raw -i rac2 -n rac2

-- modify the dependency between each instance and its asm instance

# srvctl modify instance -d raw -i rac1 -s +asm1

# srvctl modify instance -d raw -i rac2 -s +asm2

-- start the database

# srvctl start database -d raw

If an ora-27550 error occurs again, it is likewise because rac cannot determine which network card to use as the private interconnect; modify the spfile parameter and restart to resolve it.

SQL> alter system set cluster_interconnects='10.85.10.119' scope=spfile sid='rac1';

SQL> alter system set cluster_interconnects='10.85.10.121' scope=spfile sid='rac2';

The above is the whole content of this article on the command sets of clusterware in oracle. Thank you for reading! I hope the content shared has been helpful to you; if you want to learn more, welcome to follow the industry information channel!
