RAC daily operation and maintenance: how to check whether the server's iptables is shut down

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains how, as part of daily RAC operation and maintenance, to check whether the server's iptables service is shut down, along with the other routine checks that go with it. The content is simple and clear and easy to learn; please follow along with the editor and study it.

Check whether the server's iptables is turned off (on each node)

[root@its032 ~]# chkconfig --list | grep tables

ip6tables       0:off 1:off 2:off 3:off 4:off 5:off 6:off

iptables        0:off 1:off 2:off 3:off 4:off 5:off 6:off
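The check above can be scripted so that an "off everywhere" verdict does not rely on eyeballing the runlevel columns. The sketch below (not from the original article) parses `chkconfig --list` text; the embedded sample is the output shown above, and on a real node you would pipe `chkconfig --list` in instead:

```shell
#!/bin/sh
# Minimal sketch: decide from `chkconfig --list` output whether a service
# is disabled in the multi-user runlevels (2-5).
is_disabled() {
    # $1 = service name; stdin = chkconfig --list output.
    # Succeeds only if no runlevel 2-5 is "on" for that service.
    ! grep "^$1[[:space:]]" | grep -Eq '[2-5]:on'
}

# Sample text taken from the output shown above.
sample='iptables        0:off 1:off 2:off 3:off 4:off 5:off 6:off
ip6tables       0:off 1:off 2:off 3:off 4:off 5:off 6:off'

for svc in iptables ip6tables; do
    if printf '%s\n' "$sample" | is_disabled "$svc"; then
        echo "$svc: disabled"
    else
        echo "$svc: still enabled in some runlevel"
    fi
done
```

Run the same loop on every node; any "still enabled" line means iptables could come back after a reboot.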

View ASM shared disk information (on each node)

[root@its032 ~]# oracleasm listdisks

ARCH1

DATA1

DATA2

DATA3

OCRVOTE

List all configured databases

[root@node1 ~]# srvctl config database

NOVADB

Status of all instances and services

[root@node1]# srvctl status database -d NOVADB

Instance NOVADB1 is running on node node1

Instance NOVADB2 is running on node node2

Status of a single instance

[root@node1]# srvctl status instance -d NOVADB -i NOVADB1

Instance NOVADB1 is running on node node1

Status of the global naming service in the database

$ srvctl status service -d orcl -s orcltest

Service orcltest is running on instance(s) orcl2, orcl1

The status of the node application on a specific node

[root@node1]# srvctl status nodeapps -n node1

VIP is running on node: node1

GSD is running on node: node1

Listener is running on node: node1

ONS daemon is running on node: node1

Status of ASM instance

[root@node1]# srvctl status asm -n node1

ASM instance +ASM1 is running on node node1.

[root@node1]# srvctl status asm -n node2

ASM instance +ASM2 is running on node node2.
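Running the per-node checks above one by one gets repetitive on a two-node cluster. A small loop covers both nodes in one pass; the sketch below uses a stub `srvctl` function only so it can run outside the cluster, and on a real RAC node you would delete the stub so the actual binary is invoked (database and node names follow the examples above):

```shell
#!/bin/sh
# Illustrative stub so this sketch runs off-cluster; delete it on a real
# node and the actual srvctl binary is used instead.
srvctl() {
    echo "srvctl $*"
}

DB=NOVADB
NODES="node1 node2"

# Cluster-wide status first, then the per-node resources.
srvctl status database -d "$DB"
for n in $NODES; do
    srvctl status asm -n "$n"
    srvctl status nodeapps -n "$n"
done
```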

Displays the configuration of the RAC database

[root@node1]# srvctl config database -d NOVADB

node1 NOVADB1 /opt/ora10g/product/10.2.0/db_1

node2 NOVADB2 /opt/ora10g/product/10.2.0/db_1

Displays all services for the specified cluster database

[root@node1]# srvctl config service -d NOVADB

NOVADB PREF: NOVADB1 NOVADB2 AVAIL:

Displays the configuration of the node applications (VIP, GSD, ONS, listener)

[root@node1]# srvctl config nodeapps -n node1 -a -g -s -l

VIP exists.: /node1-vip/192.168.150.224/255.255.255.0/eth0

GSD exists.

ONS daemon exists.

Listener exists.

Using the srvctl config command, you can see the configuration information of the existing database:

[root@node1]# srvctl config database -d NOVADB -a

node1 NOVADB1 /opt/ora10g/product/10.2.0/db_1

node2 NOVADB2 /opt/ora10g/product/10.2.0/db_1

DB_NAME: NOVADB

ORACLE_HOME: /opt/ora10g/product/10.2.0/db_1

SPFILE: +RAC_DISK/NOVADB/spfileNOVADB.ora

DOMAIN: null

DB_ROLE: null

START_OPTIONS: null

POLICY: AUTOMATIC

ENABLE FLAG: DB ENABLED

Displays the configuration of the ASM instance

[root@node1]# srvctl config asm -n node1

+ASM1 /opt/ora10g/product/10.2.0/db_1

[root@node1]# srvctl config asm -n node2

+ASM2 /opt/ora10g/product/10.2.0/db_1

All running instances in the cluster

SQL> SELECT inst_id, instance_number inst_no, instance_name inst_name, parallel,
            status, database_status db_status, active_state state, host_name host
       FROM gv$instance
      ORDER BY inst_id;

INST_ID INST_NO INST_NAME PAR STATUS DB_STATUS STATE

1 1 NOVADB1 YES OPEN ACTIVE NORMAL

2 2 NOVADB2 YES OPEN ACTIVE NORMAL

SQL>

All data files located in the disk group

SQL> select name from v$datafile
     union
     select member from v$logfile
     union
     select name from v$controlfile
     union
     select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
+RAC_DISK/novadb/controlfile/current.260.685491565
+RAC_DISK/novadb/datafile/nova_test.268.686337643
+RAC_DISK/novadb/datafile/sysaux.257.685491407
+RAC_DISK/novadb/datafile/system.256.685491401
+RAC_DISK/novadb/datafile/undotbs1.258.685491411
+RAC_DISK/novadb/datafile/undotbs2.264.685491733
+RAC_DISK/novadb/datafile/users.259.685491413
+RAC_DISK/novadb/onlinelog/group_1.261.685491571
+RAC_DISK/novadb/onlinelog/group_2.262.685491575
+RAC_DISK/novadb/onlinelog/group_3.265.685491915
+RAC_DISK/novadb/onlinelog/group_4.266.685491921

NAME
--------------------------------------------------------------------------------
+RAC_DISK/novadb/tempfile/temp.263.685491617

12 rows selected.

SQL>

All ASM disks belonging to the "RAC_DISK" disk group

SQL> SELECT path FROM v$asm_disk
     WHERE group_number IN (select group_number from v$asm_diskgroup
                            where name = 'RAC_DISK');

PATH
--------------------------------------------------------------------------------
/dev/raw/raw3
/dev/raw/raw4
ORCL:NOVA3

Start / stop RAC cluster

Make sure you are logged in as the oracle UNIX user. We will run all commands from node1:

# su - oracle

[oracle@node1 ~]$ hostname

node1

Stop the Oracle RAC 10g environment

The first step is to stop the Oracle instance. Once the instance (and its related services) is down, shut down the ASM instance. Finally, shut down the node applications (virtual IP, GSD, TNS listener, and ONS).

[oracle@node1 ~]$ export ORACLE_SID=NOVADB1

[oracle@node1 ~]$ emctl stop dbconsole

[oracle@node1]$ srvctl stop instance -d NOVADB -i NOVADB1

[oracle@node1]$ srvctl stop asm -n node1

[oracle@node1]$ srvctl stop nodeapps -n node1
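Because the shutdown order matters (instance, then ASM, then node applications), it is worth wrapping in a script rather than typing the commands ad hoc. The sketch below is an assumption-laden illustration, not from the article: with `DRY_RUN=1` (the default here) it only echoes the commands, so the ordering can be checked off-cluster; names follow the examples above.

```shell
#!/bin/sh
# Sketch of a per-node shutdown wrapper enforcing the order described
# above: instance first, then ASM, then the node applications.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"          # dry run: show the command only
    else
        "$@" || { echo "failed: $*" >&2; exit 1; }   # stop on first failure
    fi
}

stop_node() {
    # $1 = node name, $2 = database name, $3 = instance name
    run srvctl stop instance -d "$2" -i "$3"
    run srvctl stop asm -n "$1"
    run srvctl stop nodeapps -n "$1"
}

stop_node node1 NOVADB NOVADB1
```

Set `DRY_RUN=0` on a real node to execute; the startup sequence below is simply the same three resources in reverse order.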

Start the Oracle RAC 10g environment

The first step is to start the node applications (virtual IP, GSD, TNS listener, and ONS). Once the node applications have started successfully, start the ASM instance. Finally, start the Oracle instance (and related services) and the Enterprise Manager database console.

[oracle@node1 ~]$ export ORACLE_SID=NOVADB1

[oracle@node1]$ srvctl start nodeapps -n node1

[oracle@node1]$ srvctl start asm -n node1

[oracle@node1]$ srvctl start instance -d NOVADB -i NOVADB1

[oracle@node1 ~]$ emctl start dbconsole

Start / stop all instances using SRVCTL

Start or stop all instances and their enabled services with a single command. This is worth knowing as the quickest way to shut down every instance at once:

[oracle@node1]$ srvctl start database -d NOVADB

[oracle@node1]$ srvctl stop database -d NOVADB

Start and stop the listener

[oracle@node1 ~]$ lsnrctl start listener_hostb

[oracle@node1 ~]$ lsnrctl stop listener_hostb

The following has not been tested yet; verification is still pending (work has been too busy).

Backup

Voting disk

dd if=voting_disk_name of=backup_file_name

dd if=/dev/rdsk/c4t600C0FF000000000098ADE240330A000d0s4 of=votingdisk.bak

To zero out (wipe) the voting disk:

# dd if=/dev/zero of=/dev/rdsk/c4t600C0FF000000000098ADE240330A000d0s4 bs=512 count=261120

Test

# dd if=/dev/rdsk/c4t600C0FF000000000098ADE240330A000d0s4 of=/data/backup/rac/vd_backup0420.bak

261120+0 records in
261120+0 records out

# cd /data/backup/rac

# ls

ocr0420.bak ocrdisk vd_backup0420.bak votingdisk.bak votingdisk0420.bak

# dd if=/data/backup/rac/vd_backup0420.bak of=/dev/rdsk/c4t600C0FF000000000098ADE240330A000d0s4

261120+0 records in
261120+0 records out

Back up the OCR disk

View backup

$ ocrconfig -showbackup

Backup

/data/oracle/crs/bin/ocrconfig -export /data/backup/rac/ocrdisk.bak
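Manual exports are easy to forget; a timestamped export is a natural candidate for cron. The sketch below is an assumption (not from the article): the `ocrconfig` path and backup directory follow the examples above, the `KEEP` retention count is invented, and it falls back to a dry-run echo when `ocrconfig` is not present so it can run off-cluster.

```shell
#!/bin/sh
# Sketch: dated OCR export with simple retention, suitable for cron.
OCRCONFIG=${OCRCONFIG:-/data/oracle/crs/bin/ocrconfig}
BACKUP_DIR=${BACKUP_DIR:-/data/backup/rac}
KEEP=7    # assumed retention policy: keep the newest 7 exports

# Dry-run fallback when the binary is unavailable (e.g. off-cluster).
[ -x "$OCRCONFIG" ] || OCRCONFIG="echo ocrconfig"

stamp=$(date +%Y%m%d)
$OCRCONFIG -export "$BACKUP_DIR/ocrdisk.$stamp.bak"

# Drop all but the newest $KEEP exports.
ls -1t "$BACKUP_DIR"/ocrdisk.*.bak 2>/dev/null | tail -n +$((KEEP + 1)) | xargs -r rm -f
```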

A restore requires stopping all nodes first: stop the Oracle Clusterware software on all of the nodes, then run:

/data/oracle/crs/bin/ocrconfig -import file_name

Restore of automatic backup

# /data/oracle/crs/bin/ocrconfig -showbackup

# /data/oracle/crs/bin/ocrconfig -restore /data/oracle/crs/cdata/db168crs/backup00.ocr

hosta$ cluvfy comp ocr -n all    # verification

OCR check

# ocrcheck

The OCR configuration path is stored in the /var/opt/oracle/ocrconfig_loc file; edit that file if you need to change the path of the OCR disk.

OCR disk space check

# /data/oracle/crs/bin/ocrcheck

Status of Oracle Cluster Registry is as follows:

Version: 2

Total space (kbytes): 399752

Used space (kbytes): 3784

Available space (kbytes): 395968

ID: 148562961

Device/File Name: /dev/rdsk/c4t600C0FF000000000098ADE240330A000d0s5

Device/File integrity check succeeded

Device/File not configured

Cluster registry integrity check succeeded

Thank you for reading. The above covers how to check, in daily RAC operation and maintenance, whether the server's iptables is shut down, along with the related routine checks. After studying this article, I believe you have a deeper understanding of the topic; the specific usage still needs to be verified in practice. The editor will keep pushing more articles on related topics for you. Welcome to follow!
