
How to install an Oracle 11g RAC cluster


This article introduces how to install an Oracle 11g RAC cluster. Many people run into trouble with this in real-world work, so let me walk you through how to handle these situations. I hope you read it carefully and get something out of it!

The Oracle 11g RAC cluster installation uses the following software versions:

Database version: Oracle 11g

Grid version: grid 11g

System version: CentOS 6.5

The preparatory work is as follows.

I. First, confirm the network configuration of the hosts that will become cluster nodes: mainly check that each node has two network cards and that the device names of the two active cards are the same on both nodes.

If the network card device names differ between the two nodes, the grid installation itself will not report an error, but the Oracle database software installation will fail and CRS will not work properly.

Therefore, if you find that the NIC names are inconsistent before installing grid, change them as follows.

For example, suppose the second NIC is named differently on the two nodes: eth2 on rac1 but eth3 on rac2.

Now change the NIC name on rac2 from eth3 to eth2:

1. Stop the second network card of the node rac2: ifdown eth3

2. Rename the network card configuration file ifcfg-eth3 to ifcfg-eth2, then edit it with vim and change DEVICE=eth3 to DEVICE=eth2.

3. Change NAME="eth3" to NAME="eth2" in /etc/udev/rules.d/70-persistent-net.rules.

For example:

[root@rac1 network-scripts]# cat /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x8086:0x100f (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:34:5b:13", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x8086:0x100f (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:52:b8:54", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"

4. After rac2 is rebooted, the device name becomes eth2, the same as the NIC name on rac1.
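A minimal command sketch of steps 1 through 4 on rac2 (assuming the standard CentOS 6 file locations; the udev rule you edit must be the one whose MAC address belongs to this card):

# run as root on rac2
ifdown eth3
mv /etc/sysconfig/network-scripts/ifcfg-eth3 /etc/sysconfig/network-scripts/ifcfg-eth2
sed -i 's/^DEVICE=eth3/DEVICE=eth2/' /etc/sysconfig/network-scripts/ifcfg-eth2
vim /etc/udev/rules.d/70-persistent-net.rules   # change NAME="eth3" to NAME="eth2" in the matching rule
reboot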

II. Modify the hostnames of the two node machines to rac1 and rac2, and set the IP addresses in /etc/hosts:

[root@rac1 ~]# cat /etc/hosts
# public-ip (public network)
172.16.140.146 rac1
172.16.140.247 rac2
# priv-ip (private network)
186.18.6.222 rac1-priv
186.18.6.186 rac2-priv
# vip
172.16.140.99 rac1-vip
172.16.140.98 rac2-vip
# scan-ip
172.16.140.97 rac-scan

PS: For an Oracle RAC cluster installation, the network configuration includes public IPs, VIPs, private IPs, and a SCAN IP. Each node needs its own public IP, VIP, and private IP, while only one SCAN IP is required for the whole cluster.

The public IPs and VIPs must be on a normally reachable network segment. The private IPs are used only for the interconnect between the cluster nodes; as the name suggests they are for the cluster's own use, so there is no particular requirement on the network segment as long as it does not conflict with anything else.

The SCAN IP must also be on a normally reachable network segment. After the cluster is installed it is automatically configured on, and shows up on, the network card of the master node. The exact role of this IP will be introduced later.

In a nutshell, a two-node Oracle RAC cluster needs five IP addresses on normally reachable segments from the network engineer (two public IPs, two VIPs, and one SCAN IP), plus the private interconnect IPs. The public and private IPs are visible on the systems before installation; the VIPs and the SCAN IP are configured on the network interfaces automatically once the cluster installation completes.
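Before starting the installation, the public and private addresses can be sanity-checked from each node with a quick sketch like the one below (the VIPs and the SCAN IP will not answer yet, since Clusterware only brings them up after installation):

# from rac1
ping -c 2 rac2        # public network
ping -c 2 rac2-priv   # private interconnect
# from rac2
ping -c 2 rac1
ping -c 2 rac1-priv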

III. Create users and groups

(1) Add the following users and groups on the node hosts (they must be created on every node):

[root@rac1 ~]# groupadd -g 1000 oinstall
[root@rac1 ~]# groupadd -g 1200 asmadmin
[root@rac1 ~]# groupadd -g 1201 asmdba
[root@rac1 ~]# groupadd -g 1202 asmoper
[root@rac1 ~]# groupadd -g 1300 dba
[root@rac1 ~]# groupadd -g 1301 oper
[root@rac1 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba -d /home/grid -s /bin/bash grid
[root@rac1 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash oracle

(2) add user grid to the dba group:

[root@rac1 app]# gpasswd -a grid dba
Adding user grid to group dba
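A quick way to confirm the memberships created above is to check both users' IDs (a small sketch; the expected groups follow from the groupadd/useradd commands):

[root@rac1 ~]# id grid
[root@rac1 ~]# id oracle
# grid should list oinstall, asmadmin, asmdba, asmoper, dba; oracle should list oinstall, dba, oper, asmdba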

(3) modify the passwords of users grid and oracle (all nodes):

[root@rac1 ~]# passwd oracle
[root@rac1 ~]# passwd grid

(4) check the nobody user's information:

[root@rac1 ~]# id nobody
uid=99(nobody) gid=99(nobody) groups=99(nobody)

PS: this user normally already exists and does not need to be created. If it does not exist, create it manually:

[root@rac1 ~]# /usr/sbin/useradd nobody

(5) disable the firewall and SELinux (all nodes)

[root@rac1 ~]# service iptables status
[root@rac1 ~]# service iptables stop
[root@rac1 ~]# chkconfig iptables off
[root@rac1 ~]# chkconfig iptables --list

(6) edit the /etc/selinux/config file and set SELINUX to disabled.

[root@rac1 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
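The change in the config file only takes effect after a reboot. A small sketch for checking and relaxing the running system immediately:

getenforce      # prints the current mode: Enforcing, Permissive or Disabled
setenforce 0    # switch the running system to permissive without a reboot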

IV. Cluster time synchronization

PS: the environments I work with are on an intranet with no Internet access, so time synchronization here uses the cluster's own mechanism.

In 11gR2, when RAC is installed, time synchronization can be achieved in two ways:

NTP - the Linux system's time synchronization service

CTSS - the cluster's own time synchronization service

If the installer finds that NTP is not active, the Cluster Time Synchronization Service is installed in active mode and synchronizes the time across all nodes.

If NTP is found to be configured, the Cluster Time Synchronization Service starts in observer mode, and Oracle Clusterware does not actively synchronize time within the cluster.

The Oracle Cluster Time Synchronization Service (ctssd) is designed to serve Oracle RAC databases that cannot access an NTP server.

CTSS is used here, and Oracle also recommends using time synchronization within the cluster:

- Configure CTSS

Using the Cluster Time Synchronization Service to provide synchronization within the cluster requires removing the Network Time Protocol (NTP) and its configuration.

To deactivate the NTP service, stop the current ntpd service, disable it in the init sequence, and remove the ntp.conf file.

To complete these steps on Linux, run the following command on all Oracle RAC nodes as root:

/sbin/service ntpd stop
Shutting down ntpd: [OK]

Stopping the service may fail if ntpd is not running; that does not matter.

chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.original
chkconfig ntpd --list
ntpd  0:off 1:off 2:off 3:off 4:off 5:off 6:off

Also delete the following file:

rm /var/run/ntpd.pid
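Once Grid Infrastructure is installed later on, you can verify which mode CTSS actually came up in. A small check sketch, run as the grid user (assuming the grid environment variables configured later in this article, so crsctl and cluvfy are on the PATH):

crsctl check ctss                 # reports whether CTSS is online and in active or observer mode
cluvfy comp clocksync -n all      # cluster verification of clock synchronization across both nodes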

V. Shared disk configuration

(1) The cluster uses ASM storage. If the nodes have bare shared devices attached, check the devices with fdisk -l:

PS: the following output is the disk layout of the actual machine I installed on. Disk paths like these are very convenient to work with, but they mostly appear on virtualized cloud servers. On a physical machine the device paths are different, because the storage uses multipath optimization, and the udev configuration file cannot be generated with the method below. I will write a separate article on how to set up udev for multipath shared storage on physical machines.

[root@rac1 ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004d5d5

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26         548     4194304   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             548        6528    48028672   83  Linux

Disk /dev/sdb: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdc: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdd: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sde: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdg: 6308 MB, 6308233216 bytes
195 heads, 62 sectors/track, 1019 cylinders
Units = cylinders of 12090 * 512 = 6190080 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdf: 6308 MB, 6308233216 bytes
195 heads, 62 sectors/track, 1019 cylinders
Units = cylinders of 12090 * 512 = 6190080 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdh: 6308 MB, 6308233216 bytes
195 heads, 62 sectors/track, 1019 cylinders
Units = cylinders of 12090 * 512 = 6190080 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdi: 6308 MB, 6308233216 bytes
195 heads, 62 sectors/track, 1019 cylinders
Units = cylinders of 12090 * 512 = 6190080 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdj: 6308 MB, 6308233216 bytes
195 heads, 62 sectors/track, 1019 cylinders
Units = cylinders of 12090 * 512 = 6190080 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

PS:

A few words about disk layout: before the shared disks are presented, talk to the system engineer and agree on the disk sizes in advance, mainly because of the grid voting disks. During installation I chose to install grid onto three disks of roughly 30 GB each (the voting disk group uses normal redundancy, so three disks are enough). For the remaining disks, think about future disk I/O and avoiding hot spots, and try not to present the storage as one single large disk.

In addition, 12c RAC requires a larger voting disk group than 11g. If you plan to upgrade to 12c RAC later, increase each of the three disks used for the voting disk group to 50-100 GB.

(2) Check whether the SCSI IDs of the attached disks are the same on all nodes. The command is:

[root@rac1 ~]# ll /dev/disk/by-id
[root@rac2 ~]# ll /dev/disk/by-id
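To compare one specific device directly across the nodes, its SCSI ID can also be queried with the same scsi_id options used in the script below; the two outputs must be identical for every shared disk:

[root@rac1 ~]# /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
[root@rac2 ~]# /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb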

Here we simply use udev to map them to devices that grid can recognize.

(3) Take the device letters seen in fdisk -l (excluding the disk the system is installed on, usually sda) and run the following script:

[root@rac1 ~]# for i in b c d e f g h i j
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45525936676145692d66374e542d476c666e", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45526a6d47665a522d6f7a39642d65674f47", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45525a574f6573662d6a4c59642d63375933", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c4552324f6d38394d2d525835432d56415337", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45526d7636645a542d577636452d62375874", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45527269467344372d644635642d32527033", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c4552735232776e502d674542432d75787338", NAME="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c455250456471356e2d534170302d65656262", NAME="asm-diski", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c4552386f6a4e56632d4f6661442d32765a54", NAME="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"

(4) Go to the udev rules directory:

[root@rac1 ~]# cd /etc/udev/rules.d/
[root@rac1 rules.d]# vim 99-oracle.rules

Copy all of the script output into 99-oracle.rules, making sure the copy matches the command output exactly. This file usually does not exist yet and can be created manually.

Execute the command:

[root@rac1 ~]# start_udev
Starting udev: [OK]

After the command completes successfully, check:

[root@rac1 ~]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 8,  16 Jan  5 10:47 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8,  32 Jan  5 10:47 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8,  48 Jan  5 10:47 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8,  64 Jan  5 10:47 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8,  80 Jan  5 10:47 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 8,  96 Jan  5 10:47 /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 112 Jan  5 10:47 /dev/asm-diskh
brw-rw---- 1 grid asmadmin 8, 128 Jan  5 10:47 /dev/asm-diski
brw-rw---- 1 grid asmadmin 8, 144 Jan  5 10:47 /dev/asm-diskj

PS:

The reason the cluster cannot use the shared disks as attached is that the owner and group of the devices are root, so the grid user has no permission to access them. For grid and oracle to recognize these disks, the permissions must be changed; however, the device nodes are remapped when the host reboots, so if you simply change the permissions by hand they revert to root ownership after a restart. The udev approach writes the disk mapping, with the corrected ownership and permissions, directly into the rules.

Another way to handle this is to put the permission-change commands into the last file executed during boot, so the system re-applies them automatically every time the host restarts and the cluster keeps access to the shared disks. That is, the shared disks are handled as bare (raw) devices,

using Linux's raw command.

For example:

[root@rac1 ~]# cat /etc/rc.local
# Oracle Cluster OCRDG
chown grid:asmadmin /dev/mapper/mpathb
chown grid:asmadmin /dev/mapper/mpathc
chown grid:asmadmin /dev/mapper/mpathd
chown grid:asmadmin /dev/mapper/mpathe
chown grid:asmadmin /dev/mapper/mpathf
chmod 660 /dev/mapper/mpathb
chmod 660 /dev/mapper/mpathc
chmod 660 /dev/mapper/mpathd
chmod 660 /dev/mapper/mpathe
chmod 660 /dev/mapper/mpathf
raw /dev/raw/raw1 /dev/mapper/mpathb
raw /dev/raw/raw2 /dev/mapper/mpathc
raw /dev/raw/raw3 /dev/mapper/mpathd
raw /dev/raw/raw4 /dev/mapper/mpathe
raw /dev/raw/raw5 /dev/mapper/mpathf
sleep 2
chown grid:asmadmin /dev/raw/raw1
chown grid:asmadmin /dev/raw/raw2
chown grid:asmadmin /dev/raw/raw3
chown grid:asmadmin /dev/raw/raw4
chown grid:asmadmin /dev/raw/raw5
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
chmod 660 /dev/raw/raw3
chmod 660 /dev/raw/raw4
chmod 660 /dev/raw/raw5

This is the old way of handling shared disks and only works in 11g and earlier; 12c supports only udev and no longer supports raw (bare) devices.

(5) After performing the steps above on node 1, copy the generated 99-oracle.rules to node 2, run start_udev on node 2, and confirm with ll /dev/asm* that node 2 shows the same result as node 1.
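A minimal sketch of that hand-off, assuming root SSH access between the nodes:

[root@rac1 ~]# scp /etc/udev/rules.d/99-oracle.rules rac2:/etc/udev/rules.d/
[root@rac2 ~]# start_udev
[root@rac2 ~]# ll /dev/asm*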

VI. Create the directory structure (all nodes)

1. Execute on node 1 and node 2, respectively:

[root@rac1 ~]# mkdir -p /oracle/app/grid/11.2.0.4
[root@rac1 ~]# mkdir -p /oracle/grid
[root@rac1 ~]# chown -R grid:oinstall /oracle
[root@rac1 ~]# mkdir -p /oracle/app/oracle/11.2.0.4/db_1
[root@rac1 ~]# chown -R oracle:oinstall /oracle/app/oracle
[root@rac1 ~]# chmod -R 775 /oracle
[root@rac2 ~]# mkdir -p /oracle/app/grid/11.2.0.4
[root@rac2 ~]# mkdir -p /oracle/grid
[root@rac2 ~]# chown -R grid:oinstall /oracle
[root@rac2 ~]# mkdir -p /oracle/app/oracle/11.2.0.4/db_1
[root@rac2 ~]# chown -R oracle:oinstall /oracle/app/oracle
[root@rac2 ~]# chmod -R 775 /oracle
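A quick check that ownership and permissions came out as intended (a sketch; run on both nodes):

ls -ld /oracle /oracle/app/grid /oracle/app/oracle
# /oracle and /oracle/app/grid should be grid:oinstall, /oracle/app/oracle should be oracle:oinstall, all mode 775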

2. Configure environment variables

Grid user

Modify the grid user's .bash_profile. Note that the content differs between the nodes:

Node 1:

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vim .bash_profile
export ORACLE_SID=+ASM1
export ORACLE_BASE=/oracle/grid
export ORACLE_HOME=/oracle/app/grid/11.2.0.4
export PATH=$ORACLE_HOME/bin:$PATH

Node 2:

[root@rac2 ~]# su - grid
[grid@rac2 ~]$ vim .bash_profile
export ORACLE_SID=+ASM2
export ORACLE_BASE=/oracle/grid
export ORACLE_HOME=/oracle/app/grid/11.2.0.4
export PATH=$ORACLE_HOME/bin:$PATH

(The current value of PATH can be checked by running echo $PATH at the command line as the grid user.)
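After editing, the profile can be reloaded and verified in the same session, for example on node 1:

[grid@rac1 ~]$ source ~/.bash_profile
[grid@rac1 ~]$ echo $ORACLE_SID $ORACLE_HOME
+ASM1 /oracle/app/grid/11.2.0.4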

Oracle user

Node 1:

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vim .bash_profile
export ORACLE_SID=student1
export ORACLE_BASE=/oracle/app/oracle
export ORACLE_HOME=$ORACLE_BASE/11.2.0.4/db_1
export PATH=$ORACLE_HOME/bin:$PATH

Node 2:

[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ vim .bash_profile
export ORACLE_SID=student2
export ORACLE_BASE=/oracle/app/oracle
export ORACLE_HOME=$ORACLE_BASE/11.2.0.4/db_1
export PATH=$ORACLE_HOME/bin:$PATH

VII. Set resource limits for the installation users (both nodes)

To improve software performance on Linux, the following resource limits must be set for the Oracle software owner users (grid and oracle):

Shell limit (hard) and the corresponding limits.conf entry:

Maximum number of open file descriptors: nofile 65536
Maximum number of processes available to a single user: nproc 16384
Maximum size of the process stack segment (KB): stack 10240

As root, add the following entries to the /etc/security/limits.conf file on each Oracle RAC node, or run the command below and paste it in full at the command line:

cat >> /etc/security/limits.conf
(the corresponding settings also need to go into /etc/pam.d/login and /etc/profile)
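For reference, a typical set of entries for these three files, shown as an illustrative sketch rather than the exact entries from this environment (the hard limits match the table above; the soft limits are the commonly used Oracle 11gR2 preinstallation values, so adjust to your own standards):

# append the limits for both installation owners
cat >> /etc/security/limits.conf <<'EOF'
grid   soft  nproc   2047
grid   hard  nproc   16384
grid   soft  nofile  1024
grid   hard  nofile  65536
grid   soft  stack   10240
grid   hard  stack   10240
oracle soft  nproc   2047
oracle hard  nproc   16384
oracle soft  nofile  1024
oracle hard  nofile  65536
oracle soft  stack   10240
oracle hard  stack   10240
EOF

# make login sessions honour limits.conf
cat >> /etc/pam.d/login <<'EOF'
session required pam_limits.so
EOF

# raise the ulimits for the grid and oracle shells
cat >> /etc/profile <<'EOF'
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF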
