
Installing Grid Infrastructure for an Oracle 11g RAC Environment on Oracle Linux 5.8

2025-04-02 Update From: SLTechnology News&Howtos > Database


Shulou (Shulou.com) 06/01 Report --

Installation environment:

Operating system: Oracle Linux 5.8, 64-bit

Cluster software: linux.x64_11gR2_grid.zip

Database: linux.x64_11gR2_database_1of2.zip, linux.x64_11gR2_database_2of2.zip

CPU: 1

Memory: at least 1.5 GB (typically 2 GB)

Local disk: 20 GB (root partition 9 GB, swap 2 GB, /home partition 9 GB)

ASM packages: oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm

oracleasmlib-2.0.4-1.el5.x86_64.rpm

oracleasm-support-2.1.7-1.el5.x86_64.rpm

IP address plan: 192.168.3.100 rac1 eth0

192.168.3.101 rac2 eth0

192.168.3.200 rac1-vip eth0:1

192.168.3.201 rac2-vip eth0:1

10.0.0.1 rac1-priv eth2

10.0.0.2 rac2-priv eth2

192.168.3.155 scan-cluster eth0

IP description:

Each server needs two network interfaces. The public IPs, the VIPs, and the SCAN IP must be on the same network segment; the private interconnect addresses use their own segment. The public and private addresses are written into the NIC configuration files, while the VIPs are brought up by the clusterware itself.

1. Install the operating system (graphical installation)

Disable both iptables and SELinux when installing the system.

2. Configure the virtual machines' network interfaces

Rac1 node:

[root@rac1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

BOOTPROTO=static

HWADDR=00:0C:29:BE:51:CE

ONBOOT=yes

DHCP_HOSTNAME=rac1

IPADDR=192.168.3.100

NETMASK=255.255.255.0

[root@rac1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2

BOOTPROTO=static

HWADDR=00:0C:29:BE:51:D8

ONBOOT=yes

HOTPLUG=no

DHCP_HOSTNAME=rac1

IPADDR=10.0.0.1

[root@rac1 ~]# /etc/init.d/network restart

Shutting down interface eth0: [OK]

Shutting down interface eth2: [OK]

Shutting down loopback interface: [OK]

Bringing up loopback interface: [OK]

Bringing up interface eth0: [OK]

Bringing up interface eth2: [OK]

[root@rac1 ~] #

Rac2 node:

[root@rac2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

BOOTPROTO=static

HWADDR=00:0C:29:7B:C0:13

ONBOOT=yes

DHCP_HOSTNAME=rac2

IPADDR=192.168.3.101

NETMASK=255.255.255.0

[root@rac2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2

BOOTPROTO=static

HWADDR=00:0C:29:7B:C0:1D

ONBOOT=yes

DHCP_HOSTNAME=rac2

IPADDR=10.0.0.2

[root@rac2 ~] #

3. Configure the hosts file, hostname and IP address resolution (both nodes are the same)

[root@rac1 ~]# vim /etc/hosts

192.168.3.100 rac1.localdomain rac1

192.168.3.101 rac2.localdomain rac2

192.168.3.200 rac1-vip.localdomain rac1-vip

192.168.3.201 rac2-vip.localdomain rac2-vip

10.0.0.1 rac1-priv.localdomain rac1-priv

10.0.0.2 rac2-priv.localdomain rac2-priv

# 192.168.3.155 scan-cluster.localdomain scan-cluster -- note: the SCAN name is resolved by DNS, so it must not be defined in the hosts file; otherwise an error will occur.

[root@rac1 ~] #
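Before copying /etc/hosts to both nodes, a quick sanity check helps catch copy-paste mistakes. This is a sketch (the `check_hosts` helper is not part of the guide); it flags duplicate IPs or duplicate short names:

```shell
# Hypothetical helper: scan a hosts file for duplicate IPs or duplicate
# short names (the last field on each line). Comment lines are skipped.
check_hosts() {
  awk '$1 ~ /^#/ || NF < 2 { next }
       seen_ip[$1]++       { print "duplicate IP:   " $1 }
       seen_name[$NF]++    { print "duplicate name: " $NF }
       END                 { print "done" }' "$1"
}
# Example: check_hosts /etc/hosts
```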

4. CD image mount, install software dependency package (the two nodes are the same)

[root@rac1 ~]# mount /dev/cdrom /mnt/

mount: block device /dev/cdrom is write-protected, mounting read-only

[root@rac1 ~]# vim /etc/yum.repos.d/centos.repo

[centos]

name=centos

baseurl=file:///mnt/Server

gpgcheck=1

enabled=1

gpgkey=file:///mnt/RPM-GPG-KEY-oracle

[root@rac1 ~] # yum repolist

[root@rac1 ~]# yum -y install compat-libstdc++-33 elfutils-libelf-devel gcc gcc-c++ glibc-devel glibc-headers libaio-devel libstdc++-devel sysstat unixODBC unixODBC-devel bind bind-chroot bind-libs

5. Configure DNS to resolve the IP address of rac1,rac2,scan-cluster

[root@rac1 ~]# cd /var/named/chroot/etc/

[root@rac1 etc]# cp -p named.caching-nameserver.conf named.conf

[root@rac1 etc]# vim named.conf

listen-on port 53 { any; };

allow-query { any; };

allow-query-cache { any; };

match-clients { any; };

match-destinations { any; };

[root@rac1 etc]# vim named.rfc1912.zones -- add the reverse-resolution zone

zone "3.168.192.in-addr.arpa" IN {

type master;

file "3.168.192.in-addr.arpa";

allow-update { none; };

};

[root@rac1 etc]# cd /var/named/chroot/var/named/

[root@rac1 named]# cp -p localdomain.zone 3.168.192.in-addr.arpa

[root@rac1 named]# vim 3.168.192.in-addr.arpa

[root@rac1 named]# vim localdomain.zone

[root@rac1 named]# /etc/init.d/named restart
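The guide edits 3.168.192.in-addr.arpa and localdomain.zone without showing their contents. One plausible layout for the reverse zone, matching the address plan above (the serial, TTLs, and NS/hostmaster names are placeholders to adapt; localdomain.zone needs the matching A records, including one for scan-cluster pointing at 192.168.3.155):

```
$TTL 86400
@   IN SOA  localdomain. root.localdomain. (
        2016091401 ; serial
        28800      ; refresh
        14400      ; retry
        3600000    ; expire
        86400 )    ; minimum
    IN NS   localdomain.
100 IN PTR  rac1.localdomain.
101 IN PTR  rac2.localdomain.
200 IN PTR  rac1-vip.localdomain.
201 IN PTR  rac2-vip.localdomain.
155 IN PTR  scan-cluster.localdomain.
```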

6. Modify kernel parameters (the same on both nodes)

[root@rac1 ~]# vim /etc/sysctl.conf

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmmax = 4294967295

kernel.shmall = 2097152

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 4194304

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

[root@rac1 ~]# sysctl -p -- make the configuration take effect
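A small sketch for verifying that the values above actually took effect after `sysctl -p` (the `check_sysctl` helper is an assumption, not part of the guide; it reads /proc/sys directly, which is equivalent to `sysctl -n`):

```shell
# Hypothetical verifier: for each "key = value" line in a sysctl-style file,
# compare against the running kernel value in /proc/sys. Whitespace in
# multi-value settings (e.g. kernel.sem) is normalized before comparing.
check_sysctl() {
  while IFS='=' read -r key want; do
    key=$(printf '%s' "$key" | tr -d ' ')
    [ -z "$key" ] && continue
    case "$key" in \#*) continue ;; esac
    want=$(printf '%s' "$want" | tr -s ' \t' ' ' | sed 's/^ //; s/ $//')
    have=$(cat "/proc/sys/$(printf '%s' "$key" | tr . /)" 2>/dev/null | tr -s ' \t' ' ')
    [ "$have" = "$want" ] && echo "OK   $key" || echo "DIFF $key: have '$have' want '$want'"
  done < "$1"
}
# Example: check_sysctl /etc/sysctl.conf
```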

7. Create the users and groups and set passwords (the same on both nodes)

[root@rac1 ~]# groupadd -g 1000 oinstall

[root@rac1 ~]# groupadd -g 1001 dba

[root@rac1 ~]# groupadd -g 1002 oper

[root@rac1 ~]# groupadd -g 1003 asmadmin

[root@rac1 ~]# groupadd -g 1004 asmdba

[root@rac1 ~]# groupadd -g 1005 asmoper

[root@rac1 ~]# useradd -u 1000 -g oinstall -G dba,oper,asmdba -d /home/oracle oracle

[root@rac1 ~]# passwd oracle

Changing password for user oracle.

New UNIX password:

BAD PASSWORD: it is based on a dictionary word

Retype new UNIX password:

Passwd: all authentication tokens updated successfully.

[root@rac1 ~]# useradd -u 1001 -g oinstall -G dba,asmadmin,asmoper,asmdba -d /home/grid grid

[root@rac1 ~]# passwd grid

Changing password for user grid.

New UNIX password:

BAD PASSWORD: it is based on a dictionary word

Retype new UNIX password:

Passwd: all authentication tokens updated successfully.

[root@rac1 ~] #
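After creating the accounts on both nodes, it is worth confirming the group memberships match on each node, since the installers check them. A sketch (the `verify_user` helper name is made up):

```shell
# Hypothetical check: confirm a user's primary group is what we expect.
# `id -gn` prints the primary group name; `id -Gn` would list all groups.
verify_user() {
  user=$1; want=$2
  have=$(id -gn "$user" 2>/dev/null) || { echo "$user: no such user"; return 1; }
  [ "$have" = "$want" ] && echo "$user OK (primary group $have)" \
                        || echo "$user WRONG primary group: $have (expected $want)"
}
# Example: verify_user oracle oinstall; verify_user grid oinstall
```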

8. Create the directories where the oracle and grid software will be installed (the same on both nodes)

[root@rac1 ~]# mkdir -p /home/grid/app

[root@rac1 ~]# mkdir -p /home/grid/11.2.0/grid -- the ORACLE_HOME directory must not be under the ORACLE_BASE directory, otherwise an error will be reported.

[root@rac1 ~]# chown -R grid:oinstall /home/grid/app

[root@rac1 ~]# mkdir -p /home/oracle/app

[root@rac1 ~]# mkdir -p /home/oracle/app/oracle/product/11.2.0/db_1

[root@rac1 ~]# chown -R oracle:oinstall /home/oracle/

[root@rac1 ~] #

9. Raise the users' limits on open files and processes (the same on both nodes)

[root@rac1 ~]# vim /etc/security/limits.conf

grid soft nofile 1024

grid hard nofile 65536

grid soft nproc 2047

grid hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

oracle soft nproc 2047

oracle hard nproc 16384

[root@rac1 ~] #
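Once limits.conf is in place (and the pam_limits change from step 12 is active), the effective limits can be checked per user. A sketch, run as root on each node (the `show_limits` helper is illustrative, not part of the guide):

```shell
# Hypothetical helper: print the effective open-file (ulimit -n) and
# process (ulimit -u) limits for each named user via a login shell.
show_limits() {
  for u in "$@"; do
    echo "$u: nofile=$(su - "$u" -c 'ulimit -n') nproc=$(su - "$u" -c 'ulimit -u')"
  done
}
# Example (as root): show_limits grid oracle
```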

10. Log in as the oracle user, add environment variables, and set up ssh mutual trust

Rac1 node:

[root@rac1 ~]# su - oracle

[oracle@rac1 ~]$ vim .bash_profile

export ORACLE_BASE=/home/oracle/app

export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/db_1

export ORACLE_SID=racdb01 -- on the rac2 node this is racdb02

[oracle@rac1 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_rsa):

Created directory '/home/oracle/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_rsa.

Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

f4:4a:af:8d:36:26:ce:5e:b9:75:75:08:09:56:a3:77 oracle@rac1

[oracle@rac1 ~]$ cd .ssh/

[oracle@rac1 .ssh]$ cat id_rsa.pub > authorized_keys

[oracle@rac1 .ssh] $scp authorized_keys oracle@10.0.0.2:/home/oracle/

The authenticity of host '10.0.0.2 (10.0.0.2)' can't be established.

RSA key fingerprint is b7:fa:04:54:02:f7:84:c3:c1:75:9b:35:8c:de:17:82.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '10.0.0.2' (RSA) to the list of known hosts.

oracle@10.0.0.2's password:

authorized_keys 100% 393 0.4KB/s 00:00

[oracle@rac1 .ssh] $

Rac2 node:

[root@rac2 ~]# su - oracle

[oracle@rac2 ~]$ vim .bash_profile

export ORACLE_BASE=/home/oracle/app

export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/db_1

export ORACLE_SID=racdb02

[oracle@rac2 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_rsa):

Created directory '/home/oracle/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_rsa.

Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

83:5c:10:bf:a5:19:c3:56:07:da:48:52:86:39:d0:e7 oracle@rac2

[oracle@rac2 ~]$ mv authorized_keys .ssh/

[oracle@rac2 ~]$ cd .ssh/

[oracle@rac2 .ssh]$ cat id_rsa.pub >> authorized_keys

[oracle@rac2 .ssh] $chmod 600 authorized_keys

[oracle@rac2 .ssh] $ll

Total 12

-rw------- 1 oracle oinstall 786 Sep 14 10:26 authorized_keys

-rw------- 1 oracle oinstall 1679 Sep 14 10:26 id_rsa

-rw-r--r-- 1 oracle oinstall 393 Sep 14 10:26 id_rsa.pub

[oracle@rac2 .ssh] $

Rac1 node:

[oracle@rac1 .ssh]$ scp oracle@10.0.0.2:/home/oracle/.ssh/authorized_keys .

authorized_keys 100% 786 0.8KB/s 00:00

[oracle@rac1 .ssh] $chmod 600 authorized_keys

[oracle@rac1 .ssh] $ll

Total 16

-rw------- 1 oracle oinstall 786 Sep 14 10:27 authorized_keys

-rw------- 1 oracle oinstall 1675 Sep 14 10:23 id_rsa

-rw-r--r-- 1 oracle oinstall 393 Sep 14 10:23 id_rsa.pub

-rw-r--r-- 1 oracle oinstall 390 Sep 14 10:24 known_hosts

[oracle@rac1 .ssh]$ ssh rac2-priv date -- verify now (rac1, rac2, rac1-priv, and rac2-priv must all succeed, otherwise the installer will report an error; run the same checks one by one from the rac2 node)

The authenticity of host 'rac2-priv (10.0.0.2)' can't be established.

RSA key fingerprint is b7:fa:04:54:02:f7:84:c3:c1:75:9b:35:8c:de:17:82.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'rac2-priv' (RSA) to the list of known hosts.

Wed Sep 14 10:31:16 CST 2016

[oracle@rac1 .ssh] $ssh rac2-priv date

Wed Sep 14 10:31:17 CST 2016

[oracle@rac1 .ssh] $
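The pairwise `ssh ... date` checks above can be wrapped in a loop so nothing is forgotten; every name (public and private, from both nodes, for both the oracle and grid users) must answer without a password prompt. A sketch (the helper is not from the guide; BatchMode makes any remaining password prompt fail fast instead of hanging):

```shell
# Hypothetical full-mesh check: run `ssh <host> date` non-interactively
# for every hostname given; report which hosts still require a password.
check_ssh_mesh() {
  rc=0
  for h in "$@"; do
    if out=$(ssh -o BatchMode=yes "$h" date 2>/dev/null); then
      echo "OK   $h: $out"
    else
      echo "FAIL $h (passwordless ssh not working)"
      rc=1
    fi
  done
  return $rc
}
# Example: check_ssh_mesh rac1 rac2 rac1-priv rac2-priv
```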

11. Log in as the grid user, add environment variables, and set up ssh mutual trust

Rac1 node:

[root@rac1 ~]# su - grid

[grid@rac1 ~]$ vim .bash_profile

export ORACLE_BASE=/home/grid/app

export ORACLE_HOME=/home/grid/11.2.0/grid

export ORACLE_SID=+ASM1

[grid@rac1 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_rsa):

Created directory '/home/grid/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_rsa.

Your public key has been saved in /home/grid/.ssh/id_rsa.pub.

The key fingerprint is:

f8:89:6c:fc:59:84:92:d8:88:a1:82:c3:ca:1c:a7:35 grid@rac1

[grid@rac1 ~]$ cd .ssh/

[grid@rac1 .ssh]$ cat id_rsa.pub > authorized_keys

[grid@rac1 .ssh]$ scp authorized_keys grid@10.0.0.2:/home/grid/

authorized_keys 100% 391 0.4KB/s 00:00

[grid@rac1 .ssh] $

Rac2 node:

[root@rac2 ~]# su - grid

[grid@rac2 ~]$ vim .bash_profile

export ORACLE_BASE=/home/grid/app

export ORACLE_HOME=/home/grid/11.2.0/grid

export ORACLE_SID=+ASM2

[grid@rac2 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_rsa):

Created directory '/home/grid/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_rsa.

Your public key has been saved in /home/grid/.ssh/id_rsa.pub.

The key fingerprint is:

04:f4:a0:05:13:bd:3c:fc:39:d1:04:40:a1:b1:a6:b4 grid@rac2

[grid@rac2 ~]$ mv authorized_keys .ssh/

[grid@rac2 ~]$ cd .ssh/

[grid@rac2 .ssh]$ cat id_rsa.pub >> authorized_keys

[grid@rac2 .ssh] $chmod 600 authorized_keys

[grid@rac2 .ssh] $ll

Total 12

-rw------- 1 grid oinstall 782 Sep 14 10:40 authorized_keys

-rw------- 1 grid oinstall 1675 Sep 14 10:40 id_rsa

-rw-r--r-- 1 grid oinstall 391 Sep 14 10:40 id_rsa.pub

[grid@rac2 .ssh] $

Rac1 node:

[grid@rac1 .ssh]$ scp grid@10.0.0.2:/home/grid/.ssh/authorized_keys .

authorized_keys 100% 782 0.8KB/s 00:00

[grid@rac1 .ssh] $chmod 600 authorized_keys

[grid@rac1 .ssh] $ll

Total 20

-rw------- 1 grid oinstall 782 Sep 14 10:38 authorized_keys

-rw------- 1 grid oinstall 1675 Sep 14 10:38 id_rsa

-rw-r--r-- 1 grid oinstall 391 Sep 14 10:38 id_rsa.pub

-rw-r--r-- 1 grid oinstall 390 Sep 14 10:38 known_hosts

[grid@rac1 .ssh]$ ssh rac1 date -- check that passwordless ssh works (rac1, rac2, rac1-priv, and rac2-priv must all be tested, otherwise the installer will report an error; test from the rac2 node as well)

The authenticity of host 'rac1 (127.0.0.1)' can't be established.

RSA key fingerprint is b4:90:ba:90:1e:0e:9e:ce:5a:23:70:17:76:e0:6a:9d.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'rac1' (RSA) to the list of known hosts.

Wed Sep 14 10:43:14 CST 2016

[grid@rac1 .ssh] $ssh rac1 date

Wed Sep 14 10:43:15 CST 2016

[grid@rac1 .ssh] $

12. Modify the login PAM configuration and stop the ntpd service (the same on both nodes)

[root@rac1 ~]# vim /etc/pam.d/login

session required /lib64/security/pam_limits.so

session required pam_limits.so

[root@rac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.back

[root@rac1 ~]# /etc/init.d/ntpd stop

Shutting down ntpd: [FAILED]

[root@rac1 ~] #

13. Shut down the rac1 node, add local disks, and configure them as shared storage for both virtual machines

OCR disk: 1 GB /dev/sdb

Voting disk: 1 GB /dev/sdc

Flash recovery disk: 10 GB /dev/sdd

DATA disk: 10 GB /dev/sde

Rac1 node (the node where the disks are added):

(1). Add the disks to the rac1 node

(2). Add these parameters to the .vmx configuration files of both rac1 and rac2:

disk.locking = "FALSE"

scsi1.sharedBus = "virtual"

scsi1.shared = "TRUE"

(3). On rac2, add the existing disks that were created on rac1

14. Install the oracleasm packages and create the ASM disks (the packages must be installed on both nodes)

[root@rac1 ~]# yum install oracleasm*

[root@rac1 ~]# rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm

[root@rac1 ~]# /etc/init.d/oracleasm configure -i

Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library

driver. The following questions will determine whether the driver is

loaded on boot and what permissions it will have. The current values

will be shown in brackets ('[]'). Hitting <ENTER> without typing an

answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver: [OK]

Scanning the system for Oracle ASMLib disks: [OK]

[root@rac1 ~]# /usr/sbin/oracleasm init

[root@rac1 ~]# /usr/sbin/oracleasm status

Checking if ASM is loaded: yes

Checking if /dev/oracleasm is mounted: yes

[root@rac1 ~]# /usr/sbin/oracleasm configure

ORACLEASM_ENABLED=true

ORACLEASM_UID=grid

ORACLEASM_GID=asmadmin

ORACLEASM_SCANBOOT=true

ORACLEASM_SCANORDER=""

ORACLEASM_SCANEXCLUDE=""
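The createdisk commands below use /dev/sdb1 ... /dev/sde1, so each shared disk needs a single partition first, which the guide does not show. One way to script it (this is an assumption using parted; on OL5, interactive fdisk works just as well):

```shell
# Hypothetical helper: put one whole-disk primary partition on each device.
# Destructive! Only run against the four empty shared disks.
make_one_partition() {
  for d in "$@"; do
    parted -s "$d" mklabel msdos mkpart primary 0% 100% || return 1
    echo "partitioned $d"
  done
  partprobe   # re-read partition tables so /dev/sdX1 appears
}
# Example: make_one_partition /dev/sdb /dev/sdc /dev/sdd /dev/sde
```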

[root@rac1 ~]# oracleasm createdisk VOL01 /dev/sdb1

Writing disk header: done

Instantiating disk: done

[root@rac1 ~]# oracleasm createdisk VOL02 /dev/sdc1

Writing disk header: done

Instantiating disk: done

[root@rac1 ~]# oracleasm createdisk VOL03 /dev/sdd1

Writing disk header: done

Instantiating disk: done

[root@rac1 ~]# oracleasm createdisk VOL04 /dev/sde1

Writing disk header: done

Instantiating disk: done

[root@rac1 ~] # oracleasm listdisks

VOL01

VOL02

VOL03

VOL04

[root@rac1 ~]# oracleasm querydisk /dev/sd*

Device "/dev/sda" is not marked as an ASM disk

Device "/dev/sda1" is not marked as an ASM disk

Device "/dev/sda2" is not marked as an ASM disk

Device "/dev/sda3" is not marked as an ASM disk

Device "/dev/sdb" is not marked as an ASM disk

Device "/dev/sdb1" is marked an ASM disk with the label "VOL01"

Device "/dev/sdc" is not marked as an ASM disk

Device "/dev/sdc1" is marked an ASM disk with the label "VOL02"

Device "/dev/sdd" is not marked as an ASM disk

Device "/dev/sdd1" is marked an ASM disk with the label "VOL03"

Device "/dev/sde" is not marked as an ASM disk

Device "/dev/sde1" is marked an ASM disk with the label "VOL04"

[root@rac1 ~] #

Rac2 node:

On the rac2 node you only need to scan the disks; there is no need to create them again.

[root@rac2 ~]# /usr/sbin/oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Scanning system for ASM disks...

Instantiating disk "VOL04"

Instantiating disk "VOL03"

Instantiating disk "VOL01"

Instantiating disk "VOL02"

[root@rac2 ~]# /usr/sbin/oracleasm listdisks

VOL01

VOL02

VOL03

VOL04

[root@rac2 ~] #

15. Grid cluster software installation (run the prerequisite check before installing)

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

# The check must end with: "Pre-check for cluster services setup was successful." Otherwise, do not continue with the installation.
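cluvfy's verbose output is long; saving it and filtering for failures makes the fix-and-recheck loop quicker. A sketch (the helper name is made up; the `failed|error` pattern matches cluvfy's usual wording):

```shell
# Hypothetical helper: surface only failing lines from a saved cluvfy log.
check_cluvfy_log() {
  grep -iE 'failed|error' "$1" || echo "no failures reported"
}
# Example: ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose | tee cluvfy.log
#          check_cluvfy_log cluvfy.log
```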

In the installer, two fields are easy to get wrong: Cluster Name should be scan-cluster, and SCAN Name should be scan-cluster.localdomain.

Script execution order:

Execute the orainstRoot.sh script on the rac1 node first

Then execute the orainstRoot.sh script on the rac2 node

Next, execute the root.sh script on the rac1 node

Finally, execute the root.sh script on the rac2 node

Complete
