Oracle 12c R2 RAC Installation Test


Oracle 12c R2 has been out for some time, and I had always wanted to build a RAC with it. However, installing Oracle 12c R2 RAC places heavy demands on system resources, and every attempt on my virtual machines failed (mainly because memory was too small). A few days ago, when work was slow, I opened a few virtual machines to run the RAC experiment and finally succeeded. (My own notebook has 16 GB of memory and an i7 CPU.)

1. Environment preparation:

Operating system: Linux 7.2 (64-bit)

Software: linuxx64_12201_database

linuxx64_12201_grid_home

ASM disk groups: using raw devices

Oracle 12c R2 RAC installation media:

Software package              Purpose
Linux 7.2 (64-bit)            Operating system
linuxx64_12201_database       Database software
linuxx64_12201_grid_home      Database cluster (grid) software
ASM disks                     Shared storage

IP allocation:

Hostname    Host IP           PRIV        VIP
rac1        192.168.2.100     10.0.0.1    192.168.2.101
rac2        192.168.2.200     10.0.0.2    192.168.2.201

SCAN IP: 192.168.2.210
iSCSI server IP: 192.168.2.88

Only the public and private IPs need to be configured on the two nodes' interfaces; the virtual IPs and the SCAN IP are simply declared in the hosts file.

Installation directories:

Oracle software    /opt/oracle/product/12/db
Grid software      /opt/12/grid
CRS software       +DATT/testa/

Note: the shared storage disks in this experiment are served over iSCSI by a separate server. This time I ran three virtual machines in total (6 GB of memory for each of the two database hosts and 500 MB for the iSCSI server).

The iSCSI server's shared storage configuration steps are omitted; the commands required on the clients are as follows:

The client discovers the disks shared by the target server:

[root@rac2 Server]# iscsiadm -m discovery -t sendtargets -p 192.168.2.88

Log in to (attach) the target server's shared disks:

[root@rac2 ~]# iscsiadm -m node --loginall=all
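To confirm that the iSCSI sessions were established and the shared disks arrived, a quick check along these lines can help (output will vary with your disk layout):

[root@rac2 ~]# iscsiadm -m session    # list active iSCSI sessions
[root@rac2 ~]# lsblk                  # newly attached LUNs appear as extra /dev/sd* disks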

2. Start deploying RAC

2.1 Modify the /etc/hosts name resolution file (on both nodes)

[root@rac1 ~]# vim /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost

::1 localhost6.localdomain6 localhost6

192.168.2.100 rac121

192.168.2.101 rac121-vip

10.0.0.1 rac121-priv

192.168.2.200 rac122

192.168.2.201 rac122-vip

10.0.0.2 rac122-priv

192.168.2.210 scan-rac

[root@rac2 ~]# vim /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost

::1 localhost6.localdomain6 localhost6

192.168.2.100 rac121

192.168.2.101 rac121-vip

10.0.0.1 rac121-priv

192.168.2.200 rac122

192.168.2.201 rac122-vip

10.0.0.2 rac122-priv

192.168.2.210 scan-rac

2.2 Install the required software packages (executed on both nodes)

[root@121 Packages]# yum -y install binutils* glibc* libstdc* libaio* libX* make* sysstat* compat-* unix*

2.3 Modify Linux kernel parameters (executed on both nodes)

[root@121 ~]# vi /etc/sysctl.conf    (configure appropriately for your server's memory)

Add the following; the exact values depend on your own memory and can be adjusted:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

[root@121 ~]# sysctl -p    # make the settings take effect
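Since kernel.shmmax and kernel.shmall depend on physical memory, a small helper can suggest candidate values. The sketch below uses the common half-of-RAM rule of thumb, which is a convention rather than an Oracle-mandated formula:

#!/bin/bash
# suggest-shm.sh - derive candidate shmmax/shmall values from physical memory (a sketch)
PAGE_SIZE=$(getconf PAGE_SIZE)                                 # bytes per memory page
MEM_BYTES=$(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)  # total RAM in bytes
SHMMAX=$((MEM_BYTES / 2))                                      # max single shared-memory segment: half of RAM
SHMALL=$((MEM_BYTES / PAGE_SIZE))                              # total shared memory allowed, counted in pages
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"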

2.4 Modify system parameters (executed on both nodes)

[root@121 ~]# vim /etc/pam.d/login

session required pam_limits.so

[root@122 ~]# vim /etc/pam.d/login

session required pam_limits.so

2.5 Create the oracle and grid users (executed on both nodes)

[root@121 ~]# groupadd -g 400 oinstall
[root@121 ~]# groupadd -g 401 dba
[root@121 ~]# groupadd -g 402 asmadmin
[root@121 ~]# groupadd -g 403 asmdba
[root@121 ~]# groupadd -g 404 asmoper
[root@121 ~]# groupadd -g 405 oper
[root@121 ~]# useradd -u 400 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
[root@121 ~]# useradd -u 401 -g oinstall -G dba,asmdba,asmadmin,oper oracle
[root@121 ~]# passwd oracle
[root@121 ~]# passwd grid
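To verify the accounts, id should list every supplementary group assigned above:

[root@121 ~]# id grid      # expect oinstall plus asmadmin, asmdba, asmoper, dba
[root@121 ~]# id oracle    # expect oinstall plus dba, asmdba, asmadmin, oper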

2.6 Set resource limits for the oracle and grid users (on both nodes)

[root@121 ~]# vim /etc/security/limits.conf

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 32768
grid soft nofile 1024
grid hard nofile 65536
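A quick way to confirm the limits take effect is to query them from a fresh login shell:

[root@121 ~]# su - oracle -c 'ulimit -Sn'    # soft nofile, should print 1024
[root@121 ~]# su - oracle -c 'ulimit -Hn'    # hard nofile, should print 65536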

2.7 Create the directories required by the oracle and grid users (executed on both nodes)

[root@rac121 ~]# mkdir -p /opt/grid
[root@rac121 ~]# mkdir -p /opt/12/grid
[root@rac121 ~]# mkdir -p /opt/oracle/product/12/db
[root@rac121 ~]# mkdir -p /opt/oracle/oradata
[root@rac121 ~]# chown -R grid.oinstall /opt/grid
[root@rac121 ~]# chown -R grid.oinstall /opt/12
[root@rac121 ~]# chown -R oracle.oinstall /opt/oracle/
[root@rac121 ~]# chmod -R 775 /opt/

2.8 Set environment variables for the oracle and grid users (on both nodes; these lines typically go into each user's ~/.bash_profile):

First node, oracle user:

export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=/opt/oracle/product/12/db
export ORACLE_SID=testdb
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

First node, grid user:

export ORACLE_BASE=/opt/grid
export ORACLE_HOME=/opt/12/grid
export ORACLE_SID=+ASM1
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch

Second node, oracle user:

export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=/opt/oracle/product/12/db
export ORACLE_SID=testdb
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

Second node, grid user:

export ORACLE_BASE=/opt/grid
export ORACLE_HOME=/opt/12/grid
export ORACLE_SID=+ASM2
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
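Assuming the variables were appended to each user's ~/.bash_profile, reload the profile so they take effect in the current session:

[oracle@rac121 ~]$ source ~/.bash_profile
[grid@rac121 ~]$ source ~/.bash_profile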

2.9 Configure SSH mutual trust between the rac1 and rac2 nodes (performed on both nodes)

Configure SSH trust for the two nodes:

[root@rac121 ~]# su - oracle
[oracle@rac121 ~]$ mkdir ~/.ssh
[oracle@rac121 ~]$ chmod 700 ~/.ssh/
[oracle@rac121 ~]$ cd ~/.ssh/
[oracle@rac121 .ssh]$ ssh-keygen -t rsa
[oracle@rac121 .ssh]$ ssh-keygen -t dsa
[oracle@rac121 .ssh]$ cat id_rsa.pub >> authorized_keys
[oracle@rac121 .ssh]$ cat id_dsa.pub >> authorized_keys

[root@rac121 ~]# su - grid
[grid@rac121 ~]$ mkdir ~/.ssh
[grid@rac121 ~]$ chmod 700 ~/.ssh/
[grid@rac121 ~]$ cd ~/.ssh/
[grid@rac121 .ssh]$ ssh-keygen -t rsa
[grid@rac121 .ssh]$ ssh-keygen -t dsa
[grid@rac121 .ssh]$ cat id_rsa.pub >> authorized_keys
[grid@rac121 .ssh]$ cat id_dsa.pub >> authorized_keys

(Node 2 performs the same steps. Then merge the public keys of both nodes into one authorized_keys file and copy it to both nodes, as sketched below.)
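A sketch of that merge step, run as the oracle user from node 1 (password prompts appear until the trust is in place; repeat the same for the grid user):

[oracle@rac121 .ssh]$ ssh rac122 cat ~/.ssh/id_rsa.pub >> authorized_keys
[oracle@rac121 .ssh]$ ssh rac122 cat ~/.ssh/id_dsa.pub >> authorized_keys
[oracle@rac121 .ssh]$ scp authorized_keys rac122:~/.ssh/    # both nodes now hold all four public keys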

After completion, test whether the trust works.

Switch to the oracle and grid users on both nodes and run the following checks (verification succeeds when no password is prompted):

[root@rac121 ~]# su - oracle
[oracle@rac121 ~]$ ssh rac121 date
Thu Nov 27 04:56:46 EST 2014
[oracle@rac121 ~]$ ssh rac122 date
Thu Nov 27 04:56:48 EST 2014
[oracle@rac121 ~]$ ssh rac121-priv date
Thu Nov 27 04:56:54 EST 2014
[oracle@rac121 ~]$ ssh rac122-priv date
Thu Nov 27 04:56:57 EST 2014

3. Attach the shared disks (executed on the two nodes in turn)

The client discovers the disks shared by the target server:

[root@121 Server]# iscsiadm -m discovery -t sendtargets -p 192.168.2.88

Log in to the target server's shared disks:

[root@122 ~]# iscsiadm -m node --loginall=all

3.1 Set up the available ASM disks

[root@121 ~]# vim hao.sh    # edit the script

for i in c d e f g h i j
do
echo "KERNEL==\"sd?\", SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", SYMLINK+=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done

[root@121 ~]# sh hao.sh    # execute the script

Running the script produces the following:

KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14945540000000000ef376caea6d46a84b299aa2af675ec33", SYMLINK+="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1494554000000000046b356d577df32a8ebb1bc37aa63263b", SYMLINK+="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="149455400000000000dbef13af1d00493893edc4ce2ba0109", SYMLINK+="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14945540000000000fa5fccd4e81b51abc3795d66e58fb835", SYMLINK+="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14945540000000000545ef7c7a91cd370b7287e7498981e57", SYMLINK+="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1494554000000000024c7e5803c9b66544cbc6e847bc36dcd", SYMLINK+="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14945540000000000a633b36c46b565abe1bf9735cc854e85", SYMLINK+="asm-diski", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1494554000000000078e745363d1683b432ae66cb39a2171d", SYMLINK+="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"

Then add the lines above to the udev rules file:

[root@rac2 ~]# vim /etc/udev/rules.d/70-persistent-ipoib.rules

Run the system command to rescan the disks:

[root@rac2 ~]# partprobe
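On Linux 7, partprobe alone does not always make udev re-evaluate the new rules; if the symlinks do not appear, explicitly reloading udev is a common extra step:

[root@rac2 ~]# udevadm control --reload-rules
[root@rac2 ~]# udevadm trigger --type=devices --action=change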

Then check whether the ASM device symlinks were created successfully:

[root@rac121 ~]# ls -l /dev/asm*    # the rules above create /dev/asm-disk* symlinks

4. Install the grid software:

Note: starting with 12.2, the grid software must be unzipped directly into the Grid home directory; in our environment that is:

export ORACLE_HOME=/opt/12/grid

So place the software in the /opt/12/grid directory and extract it there:

[grid@rac121 ~]$ cd /opt/12/grid/
[grid@rac121 grid]$ unzip linuxx64_12201_grid_home

Start installing the grid software, as follows:
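For reference, in 12.2 the grid installer is started from the unzipped Grid home as the grid user (an X display is assumed for the GUI):

[grid@rac121 grid]$ ./gridSetup.sh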

Partway through, the installer prompts you to execute the root scripts on the two nodes.

Each script is executed on node 1 and then on node 2 in turn.

A successful run ends with output at the end of the second script indicating that execution succeeded.

After the scripts have been executed, the remaining installer steps can proceed.

When everything completes, an error may be reported at the final verification step; after checking the log, this error turns out to be harmless and can be ignored.

Check the grid status; from the output below you can see that the installation has no problems:

[grid@rac121 ~]$ crs_stat -t
Name           Type            Target    State     Host
ora....SM.lsnr ora....er.type  ONLINE    ONLINE    rac121
ora.DATA.dg    ora....up.type  ONLINE    ONLINE    rac121
ora....ER.lsnr ora....er.type  ONLINE    ONLINE    rac121
ora....AF.lsnr ora....er.type  OFFLINE   OFFLINE
ora....N1.lsnr ora....er.type  ONLINE    ONLINE    rac121
ora.MGMTLSNR   ora....nr.type  ONLINE    ONLINE    rac121
ora.asm        ora.asm.type    ONLINE    ONLINE    rac121
ora.chad       ora.chad.type   ONLINE    ONLINE    rac121
ora.cvu        ora.cvu.type    ONLINE    ONLINE    rac121
ora.mgmtdb     ora....db.type  ONLINE    ONLINE    rac121
ora....network ora....rk.type  ONLINE    ONLINE    rac121
ora.ons        ora.ons.type    ONLINE    ONLINE    rac121
ora.proxy_advm ora....vm.type  OFFLINE   OFFLINE
ora.qosmserver ora....er.type  ONLINE    ONLINE    rac121
ora....21.lsnr application     ONLINE    ONLINE    rac121
ora.rac121.ons application     ONLINE    ONLINE    rac121
ora.rac121.vip ora....t1.type  ONLINE    ONLINE    rac121
ora....22.lsnr application     ONLINE    ONLINE    rac122
ora.rac122.ons application     ONLINE    ONLINE    rac122
ora.rac122.vip ora....t1.type  ONLINE    ONLINE    rac122
ora.scan1.vip  ora....ip.type  ONLINE    ONLINE    rac121
[grid@rac121 ~]$
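Note that crs_stat is deprecated in 12c; the same resource listing is available through crsctl, the supported interface:

[grid@rac121 ~]$ crsctl stat res -t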

Create the ASM disk groups as the grid user:

[grid@rac121 ~]$ /opt/12/grid/bin/asmca    # launch the ASM Configuration Assistant

Once asmca shows that the required ASM disk group has been created, move on to installing the Oracle software.
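If you prefer the command line to the asmca GUI, a disk group can also be created from SQL*Plus as SYSASM. A minimal sketch, where the group name and member disks are illustrative placeholders:

[grid@rac121 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP TESTDG EXTERNAL REDUNDANCY
  2  DISK '/dev/asm-diskc', '/dev/asm-diskd';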

5. Install the Oracle database software as the oracle user:

Unpack the software and start the installation:
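A typical sequence, assuming the zip file sits in the oracle user's home directory:

[oracle@rac121 ~]$ unzip linuxx64_12201_database.zip
[oracle@rac121 ~]$ cd database
[oracle@rac121 database]$ ./runInstaller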

When prompted by the installer, execute the root.sh script on both nodes; the prompt for a successful installation then appears.

5.2 Use DBCA to create the database:
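For reference, a RAC database can also be created non-interactively with dbca in silent mode. The sketch below is only illustrative; the database name, node list, disk group, and passwords are placeholders to adapt:

[oracle@rac121 ~]$ dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname testdb -sid testdb \
  -databaseConfigType RAC \
  -nodelist rac121,rac122 \
  -storageType ASM -diskGroupName DATA \
  -sysPassword "MySys_pwd1" -systemPassword "MySystem_pwd1"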

Now all the installation steps are complete; just wait for DBCA to finish creating the database.
