
Oracle Database 11.2.0.4 RAC Installation


RAC Building Manual

I. Environment introduction

II. Configure the hostname and hosts files

III. Turn off the firewall, NTP service, and SELinux

IV. Create the necessary users, groups, and directories, and grant permissions

V. Node configuration check

VI. Parameter file modification

VII. Install the required dependency packages

VIII. Add a soft link

IX. Configure grid and oracle user environment variables

X. Configure grid user SSH equivalence

XI. Configure raw disks

XII. Grid installation

XIII. ASM disk group creation

XIV. Install the Oracle software

XV. Create an instance with DBCA

I. Environment introduction

Scan IP          172.20.0.174
rac1 public IP   172.20.0.25
rac2 public IP   172.20.0.26
rac1 VIP         172.20.0.186
rac2 VIP         172.20.0.189
rac1 private IP  192.168.2.112
rac2 private IP  192.168.2.107

II. Configure the hostname and hosts files (both nodes)

On rac1 (172.20.0.25):

[root@localhost ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac1

On rac2 (172.20.0.26):

[root@localhost ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac2

On both nodes:

[root@localhost ~]# vi /etc/hosts
172.20.0.25 rac1
172.20.0.186 rac1-vip
192.168.2.112 rac1-priv
172.20.0.26 rac2
172.20.0.189 rac2-vip
192.168.2.107 rac2-priv
172.20.0.174 scan-ip
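A quick sanity check (a suggested addition, not in the original) is to confirm the public and private names resolve and answer from each node:

for h in rac1 rac2 rac1-priv rac2-priv; do ping -c 1 $h; done

Note that the VIP and SCAN addresses should not respond yet; Grid Infrastructure brings them online later, and the installer expects them to be unused at this point.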

III. Turn off the firewall, NTP service, and SELinux

These must be disabled on both rac1 and rac2.

[root@rac1 ~]# vi /etc/sysconfig/selinux
SELINUX=disabled

A reboot is needed for this to take effect.

[root@rac1 ~]# getenforce
Disabled

[root@rac1 ~]# /etc/init.d/iptables stop
[root@rac1 ~]# chkconfig iptables off
[root@rac1 ~]# chkconfig ntpd off
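With ntpd disabled, Oracle's Cluster Time Synchronization Service (CTSS) takes over time synchronization, but the prerequisite check may still flag a leftover configuration file. Moving it aside is a common extra step (an assumption based on standard 11gR2 practice, not in the original):

[root@rac1 ~]# service ntpd stop
[root@rac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak    # with no ntp.conf present, CTSS runs in active mode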

IV. Create the necessary users, groups, and directories, and grant permissions

These must be created on both rac1 and rac2; only the rac1 operations are shown here.

[root@rac1 ~]# /usr/sbin/groupadd -g 1000 oinstall
[root@rac1 ~]# /usr/sbin/groupadd -g 1020 asmadmin
[root@rac1 ~]# /usr/sbin/groupadd -g 1021 asmdba
[root@rac1 ~]# /usr/sbin/groupadd -g 1022 asmoper
[root@rac1 ~]# /usr/sbin/groupadd -g 1031 dba
[root@rac1 ~]# /usr/sbin/groupadd -g 1032 oper
[root@rac1 ~]# useradd -d /opt/grid -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
[root@rac1 ~]# useradd -d /opt/oracle -u 1101 -g oinstall -G dba,asmdba,oper oracle

[root@rac1 ~]# passwd oracle
Changing password for user oracle.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.

[root@rac1 ~]# passwd grid
Changing password for user grid.
New password:
BAD PASSWORD: it is too short
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.

[root@rac1 ~]# mkdir -p /u01/app/11.2.0/grid
[root@rac1 ~]# mkdir -p /u01/app/grid
[root@rac1 ~]# mkdir /u01/app/oracle
[root@rac1 ~]# chown -R grid:oinstall /u01
[root@rac1 ~]# chown oracle:oinstall /u01/app/oracle
[root@rac1 ~]# chmod -R 775 /u01/
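To verify the group memberships before going on (a suggested check):

[root@rac1 ~]# id grid
[root@rac1 ~]# id oracle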

V. Node configuration check

Check the memory and swap sizes. Memory should be at least 2.5 GB.

[root@rac1 ~]# grep MemTotal /proc/meminfo
MemTotal: 8061904 kB
[root@rac1 ~]# grep SwapTotal /proc/meminfo
SwapTotal: 3145720 kB
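A commonly cited sizing guideline from Oracle's 11.2 installation documentation (paraphrased here, so verify against the docs): swap should be about 1.5x RAM below 2 GB of RAM, equal to RAM between 2 and 16 GB, and 16 GB above that. Both values are visible at a glance with:

[root@rac1 ~]# free -m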

VI. Parameter file modification

(1) Kernel parameter settings:

[root@rac1 ~]# vi /etc/sysctl.conf

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736 (existing; recommended value: physical memory * 1/2 (in MB) * 1024 * 1024)
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296 (existing)
# oracle settings
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

Run /sbin/sysctl -p to make the changes take effect:

[root@rac1 ~]# /sbin/sysctl -p
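As a worked example of the shmmax formula above (a hypothetical one-liner, not part of the original), half of physical memory in bytes can be computed straight from /proc/meminfo:

[root@rac1 ~]# awk '/MemTotal/ {print int($2 / 2 * 1024)}' /proc/meminfo    # MemTotal is in kB; halve it, convert to bytes

For the 8061904 kB machine above this prints 4127694848, about 3.8 GB.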

(2) Configure shell limits for the oracle and grid users

[root@rac1 ~]# vi /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536

(3) Configure login

[root@rac1 ~]# vi /etc/pam.d/login
session required pam_limits.so
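Oracle's installation guides commonly pair this with a ulimit block in /etc/profile so the limits also apply to login shells; a sketch of that standard supplement (not in the original):

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi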

VII. Install the required dependency packages

yum -y install binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++
yum -y install glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel
yum -y install libgcc libstdc++ libstdc++-devel make numactl-devel sysstat unixODBC unixODBC-devel
yum -y install libcap

VIII. Add a soft link

[root@rac1 ~]# cd /lib64
[root@rac1 lib64]# ln -s libcap.so.2 libcap.so.1
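To confirm the link resolves (a suggested check):

[root@rac1 lib64]# ls -l /lib64/libcap.so.1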

IX. Configure grid and oracle user environment variables

ORACLE_SID must be adjusted per node.

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1    # use +ASM2 on rac2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022

Note that ORACLE_UNQNAME is the database (unique) name; creating a database across multiple nodes creates multiple instances, and ORACLE_SID is the per-node instance name.

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1    # use orcl2 on rac2
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

$ source .bash_profile    # reload to make the settings take effect
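If you prefer to keep the two .bash_profile files identical, the SID can be derived from the hostname instead (a sketch, assuming the node names used above):

case $(hostname -s) in
    rac1) export ORACLE_SID=orcl1 ;;    # +ASM1 in the grid profile
    rac2) export ORACLE_SID=orcl2 ;;    # +ASM2 in the grid profile
esac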

X. Configure grid user SSH equivalence (mutual trust) manually

The configuration process is as follows. First, generate keys on each node:

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ mkdir ~/.ssh
[grid@rac1 ~]$ chmod 700 ~/.ssh
[grid@rac1 ~]$ ssh-keygen -t rsa
[grid@rac1 ~]$ ssh-keygen -t dsa

[root@rac2 ~]# su - grid
[grid@rac2 ~]$ mkdir ~/.ssh
[grid@rac2 ~]$ chmod 700 ~/.ssh
[grid@rac2 ~]$ ssh-keygen -t rsa
[grid@rac2 ~]$ ssh-keygen -t dsa

Configure mutual trust on node 1:

[grid@rac1 ~]$ touch ~/.ssh/authorized_keys
[grid@rac1 ~]$ cd ~/.ssh
[grid@rac1 .ssh]$ ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
[grid@rac1 .ssh]$ ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
[grid@rac1 .ssh]$ ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
[grid@rac1 .ssh]$ ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys

Copy the authentication file holding the public keys from rac1 to rac2:

[grid@rac1 .ssh]$ pwd
/home/grid/.ssh
[grid@rac1 .ssh]$ scp authorized_keys rac2:`pwd`
grid@rac2's password:
authorized_keys 100% 1644 1.6KB/s 00:00

Set permissions on the authentication file. Execute on each node:

$ chmod 600 ~/.ssh/authorized_keys

Enable user equivalence.

Run as the grid user on the node where OUI will run (rac1 here):

[grid@rac1 .ssh]$ exec /usr/bin/ssh-agent $SHELL
[grid@rac1 .ssh]$ ssh-add
Identity added: /home/grid/.ssh/id_rsa (/home/grid/.ssh/id_rsa)
Identity added: /home/grid/.ssh/id_dsa (/home/grid/.ssh/id_dsa)

Verify that the ssh configuration is correct. Execute on all nodes as the grid user:

ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date

If the date prints without a password prompt, ssh equivalence is configured correctly. These commands must be run on both nodes, and each one needs a "yes" the first time it is executed. If they are not run, clusterware installation will fail with the following error even though ssh equivalence is configured:

The specified nodes are not clusterable

This is because, even after ssh is configured, the first connection to each host still asks for a "yes" before access becomes truly password-free.

Note that no passphrase is set when the keys are generated, the authentication file's permission is 600, and the two nodes must ssh to each other once.
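A compact way to exercise every combination from each node (a suggested loop, equivalent to the commands above):

for h in rac1 rac2 rac1-priv rac2-priv; do ssh $h date; done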

XI. Configure raw disks

Managing storage with ASM requires raw devices; a shared disk was configured on both hosts beforehand. There are two ways to set up raw devices: (1) oracleasm (ASMLib); (2) an /etc/udev/rules.d/60-raw.rules configuration file (binding character devices via udev). The second method is used here.

fdisk /dev/sdb
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
…….

Here the disk is split into three partitions: OCR (voting disk), DATA (data files), and FRA (fast recovery area).

Finally, the w command saves the changes.

partx -a /dev/sdb
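Both nodes should now see the same three partitions (a suggested check):

[root@rac1 ~]# grep sdb /proc/partitions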

Create the raw device mapping on both nodes (both must edit this file):

[root@rac1 rules.d]# vi /etc/udev/rules.d/60-raw.rules

# Enter raw device bindings here.
#
# An example would be:
# ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
# ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdb3", RUN+="/bin/raw /dev/raw/raw3 %N"
KERNEL=="raw[1-3]", OWNER="grid", GROUP="asmadmin", MODE="660"

Restart udev and view the raw devices:

[root@rac1 ~]# start_udev
Starting udev: [OK]
[root@rac1 ~]# ll /dev/raw/
total 0
crw-rw---- 1 grid asmadmin 162, 1 Apr 13 13:51 raw1
crw-rw---- 1 grid asmadmin 162, 2 Apr 13 13:51 raw2
crw-rw---- 1 grid asmadmin 162, 3 Apr 13 13:51 raw3
crw-rw---- 1 root disk 162, 0 Apr 13 13:51 rawctl
[root@rac1 rules.d]# raw -qa
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 18
/dev/raw/raw3: bound to major 8, minor 19

Check the major and minor device numbers of the raw devices:

[root@rac1 rules.d]# ls -l /dev/raw/
total 0
crw-rw---- 1 grid asmadmin 162, 1 Aug 5 12:44 raw1
crw-rw---- 1 grid asmadmin 162, 2 Aug 5 12:44 raw2
crw-rw---- 1 grid asmadmin 162, 3 Aug 5 12:44 raw3

Force a refresh on node 2:

[root@rac2 rules.d]# partprobe
[root@rac2 ~]# start_udev
Starting udev: [OK]
[root@rac2 rules.d]# raw -qa
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 18
/dev/raw/raw3: bound to major 8, minor 19

Check the major and minor device numbers of the raw devices:

[root@rac2 rules.d]# ls -l /dev/raw/
total 0
crw-rw---- 1 grid asmadmin 162, 1 Aug 5 12:44 raw1
crw-rw---- 1 grid asmadmin 162, 2 Aug 5 12:44 raw2
crw-rw---- 1 grid asmadmin 162, 3 Aug 5 12:44 raw3
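On major 8, minors 17-19 are sdb1-sdb3, which can be cross-checked against the block devices (a suggested verification):

[root@rac1 ~]# ls -l /dev/sdb*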

XII. Grid installation

On rac1, extract the grid installation package under the grid user's home directory, change into it, and run the runcluvfy.sh command to start the pre-installation check:

[grid@rac1 ~]$ cd grid/
[grid@rac1 grid]$ ls
install readme.html response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html
[grid@rac1 grid]$ pwd
/opt/grid/grid
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

Check the CVU report, fix any errors, and once all checks pass, begin the grid installation.

Log in to the graphical interface:

[root@rac1 ~]# xhost +
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd grid
[grid@rac1 grid]$ ./runInstaller

SSH equivalence was already configured manually, so the installer's SSH setup can be skipped; go straight to the next step.

Execute the two scripts on both nodes. Note the order: each script must be run on rac1 first, then on rac2. Click OK after they finish.

On rac1:

[root@rac1 ~]# cd /u01/app/oraInventory/
[root@rac1 oraInventory]# ./orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac1 oraInventory]# cd /u01/app/11.2.0/grid/
[root@rac1 grid]# ./root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin...
Copying oraenv to /usr/local/bin...
Copying coraenv to /usr/local/bin...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert

Adding Clusterware entries to upstart

CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.

Disk Group OCR created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 9ee58ddd21094f61bf43065b4875e9a4.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE 9ee58ddd21094f61bf43065b4875e9a4 (/dev/raw/raw1) [OCR]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.OCR.dg' on 'rac1'
CRS-2676: Start of 'ora.OCR.dg' on 'rac1' succeeded

Configure Oracle Grid Infrastructure for a Cluster... Succeeded

On rac2:

[root@rac2 CVU_11.2.0.4.0_grid]# cd /u01/app/oraInventory/
ContentsXML/ logs/ oraInst.loc orainstRoot.sh
[root@rac2 oraInventory]# ./orainstRoot.sh

Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac2 oraInventory]# cd ../11.2.0/grid/
[root@rac2 grid]# ./r
racg/ rdbms/ relnotes/ root.sh rootupgrade.sh
[root@rac2 grid]# ./root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin...
Copying oraenv to /usr/local/bin...
Copying coraenv to /usr/local/bin...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

Adding Clusterware entries to upstart

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

Configure Oracle Grid Infrastructure for a Cluster... Succeeded

Click OK to continue the installation.
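At this point the cluster stack should be running on both nodes; a common sanity check before continuing (a suggested addition, not in the original):

[grid@rac1 ~]$ crsctl check cluster -all
[grid@rac1 ~]$ crsctl stat res -t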

XIII. ASM disk group creation

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ asmca
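asmca is a graphical tool; following the partition plan above, create the DATA and FRA disk groups on raw2 and raw3 there. The result can then be confirmed from the command line (a suggested check):

[grid@rac1 ~]$ asmcmd lsdg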

XIV. Install the Oracle software

[root@rac1 ~]# xhost +
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd database
[oracle@rac1 database]$ ./runInstaller

On rac1:

[root@rac1 ~]# cd /u01/app/oracle/product/11.2.0/db_1/
[root@rac1 db_1]# ./root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

[root@rac1 db_1]#

On rac2:

[root@rac2 grid]# cd /u01/app/oracle/product/11.2.0/db_1/
[root@rac2 db_1]# ./root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

[root@rac2 db_1]#

XV. Create an instance with DBCA

(The database installation package was already extracted under the oracle user in the previous step.)

Log in to the graphical interface

[root@rac1 ~]# xhost +
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ dbca

The installation is complete.
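Once dbca completes, instance status across both nodes can be verified (a suggested check, assuming the orcl database name configured above):

[oracle@rac1 ~]$ srvctl status database -d orcl
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL> select inst_id, instance_name, status from gv$instance;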
