How to Install Oracle 11g 11.2.0.4 RAC on AIX 6.1

2025-04-10 Update From: SLTechnology News&Howtos


This article walks through installing Oracle 11g 11.2.0.4 RAC on AIX 6.1. I hope you will get something out of it after reading. Let's discuss it together.

Multiple users and tools are involved in the RAC installation process. The command prompts used in this article are:

# -- the UNIX shell prompt of the root user

$ -- the UNIX shell prompt of the oracle or grid user

Installing RAC is a time-consuming and error-prone process, and the most important stage is the preparation before installation: the more thorough the preparation, the smoother the installation. This stage involves a long chain of hardware and software configuration steps, such as storage, network, kernel parameters and permissions, none of which can be treated carelessly. During a project the database engineer usually comes in last: the system engineer first installs the operating system and the required filesets and connects the network, and the storage engineer carves up the disk enclosures and installs the multipath software, before the Oracle installation can begin. It is best to agree on the required filesets and their versions with the other engineers beforehand, to avoid discovering something missing in the middle of the Oracle installation.

AIX cannot be installed in an ordinary virtual machine, so readers who have never touched it may find it unfamiliar. In fact, compared with Linux the installation procedure is much the same; only the preliminary configuration differs somewhat, as the details below will show.

System configuration

Hostname of node 1: cjscora01

Hostname of node 2: cjscora02

IP and virtual IP of node 1: 10.157.140.1 / 10.157.140.3

IP and virtual IP of node 2: 10.157.140.2 / 10.157.140.4

SCAN IP: 10.157.140.5

Database installation configuration

ORACLE_BASE

grid: /oracle/app/grid

oracle: /oracle/app/oracle

ORACLE_HOME

grid: /oracle/app/11.2.0/grid

oracle: /oracle/app/oracle/11.2.0/db

ORACLE_SERVICE_NAME: eicdb

Data file path: +DATA

Oracle administrator account password: oracle

Database components: select all

Standard database features: select all

Initialization parameters:

Memory size: 160G

Database parameters:

db_block_size: 8k

Character set: ZHS16GBK

Run the database in archivelog mode: yes

Archive log location: +ARCH

Installation process

1. Hardware requirements for each node:

1. Memory: at least 4GB. The servers in this project are IBM 780s with 192GB of memory.

# /usr/sbin/lsattr -E -l sys0 -a realmem

MemTotal: 201326592 kB

2. Swap space: 24GB assigned

# /usr/sbin/lsps -a

SwapTotal: 25165824 kB

Note: the CPU frequency and memory size of all nodes should be roughly the same, so that processing capacity does not differ much when services switch between nodes. The servers used in this project are IBM 780s with 192 GB of memory and 32 CPUs, which is quite a good configuration.
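As a cross-check on the swap figure, the 11.2 install guides give a rule of thumb for sizing swap from RAM. A minimal sketch of that rule (the exact thresholds are an assumption from the generic 11.2 documentation, so verify against the AIX-specific guide):

```shell
# Rule of thumb from the Oracle 11.2 install guides (assumed thresholds):
# RAM between 2 GB and 16 GB -> swap equal to RAM; above 16 GB -> 16 GB.
recommended_swap_gb() {
    ram_gb=$1
    if [ "$ram_gb" -le 16 ]; then
        echo "$ram_gb"
    else
        echo 16
    fi
}

recommended_swap_gb 192   # the 192 GB servers in this project
```

For the 192 GB machines here the rule gives a 16 GB minimum, so the 24 GB actually assigned is comfortably above it.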

Second, operating system requirements:

1. System package requirements

AIX 6.1 required packages:

bos.adt.base

bos.adt.lib

bos.adt.libm

bos.perf.libperfstat 6.1.2.1 or later

bos.perf.perfstat

bos.perf.proctools

rsct.basic.rte (for RAC configurations only)

rsct.compat.clients.rte (for RAC configurations only)

xlC.aix61.rte 10.1.0.0 or later

gpfs.base 3.2.1.8 or later (only for RAC)

APARs for AIX 6.1:

IZ41855

IZ51456

IZ52319

IZ97457

IZ89165

Note: to establish SSH user equivalence between the two nodes, the following software needs to be installed on each node:

bash

openssl

openssh

2. /tmp space

[root@db1 /]# df -k

At least 1GB free

3. System version

[root@db1 /]# oslevel -s

6100-05-11-1140

Note: the preferred operating system version is the most stable one, not necessarily the newest. This project adopts the relatively conservative 6100-05-11-1140, a version recommended by the very experienced IBM engineer who installed the system.

4. System kernel parameters

smitty chgsys

Maximum number of PROCESSES allowed per user [16384]

Note: the default value of this parameter is too small. In a high-concurrency production environment the number of sessions and processes is large; if the parameter is not raised in advance, the installer will issue a warning during its prerequisite checks.
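The same attribute can also be changed from the command line instead of the smitty menu. chdev only exists on AIX, so the sketch below is a dry run that just prints the command it would execute:

```shell
# Dry run: print the chdev invocation equivalent to the smitty chgsys change.
MAXUPROC=16384
echo "chdev -l sys0 -a maxuproc=$MAXUPROC"
# On AIX, run the printed command as root and verify with:
#   lsattr -E -l sys0 -a maxuproc
```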

5. Create users and groups, and configure environment variables

mkgroup -'A' id='1000' adms='root' oinstall

mkgroup -'A' id='1020' adms='root' asmadmin

mkgroup -'A' id='1021' adms='root' asmdba

mkgroup -'A' id='1022' adms='root' asmoper

mkgroup -'A' id='1031' adms='root' dba

mkgroup -'A' id='1032' adms='root' oper

mkuser id='1001' pgrp='oinstall' groups='dba,asmdba,asmadmin,oper' home='/home/oracle' oracle

mkuser id='1002' pgrp='oinstall' groups='asmadmin,asmdba,asmoper,oper,dba' home='/home/grid' grid

passwd grid

passwd oracle

Log in once as the grid and oracle users respectively and change the passwords.

Authorize the grid and oracle users.

Check capabilities:

# /usr/bin/lsuser -a capabilities grid

# /usr/bin/lsuser -a capabilities oracle

The authorization commands are as follows:

# /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid

# /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

6. The grid user environment variables are set as follows:

Node 1

umask 022

TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR

ORACLE_SID=+ASM1; export ORACLE_SID

ORACLE_BASE=/oracle/app/grid; export ORACLE_BASE

ORACLE_HOME=/oracle/app/11.2.0/grid; export ORACLE_HOME

PATH=$ORACLE_HOME/bin:$PATH; export PATH

Node 2

umask 022

TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR

ORACLE_SID=+ASM2; export ORACLE_SID

ORACLE_BASE=/oracle/app/grid; export ORACLE_BASE

ORACLE_HOME=/oracle/grid; export ORACLE_HOME

PATH=$ORACLE_HOME/bin:$PATH; export PATH

7. The oracle user environment variables are set as follows:

Node 1

umask 022

TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR

ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/11.2.0/db; export ORACLE_HOME

ORACLE_SID=oradb1; export ORACLE_SID

ORACLE_TERM=xterm; export ORACLE_TERM

PATH=/usr/sbin:$PATH; export PATH

PATH=$ORACLE_HOME/bin:$PATH; export PATH

Node 2

umask 022

TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR

ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/11.2.0/db; export ORACLE_HOME

ORACLE_SID=oradb2; export ORACLE_SID

ORACLE_TERM=xterm; export ORACLE_TERM

PATH=/usr/sbin:$PATH; export PATH

PATH=$ORACLE_HOME/bin:$PATH; export PATH

8. Set the shell limits

vi /etc/security/limits

Add to the file:

default:

fsize = -1

core = 2097151

cpu = -1

data = -1

rss = -1

stack = -1

nofiles = -1

stack_hard = -1

grid:

core = -1

oracle:

core = -1

Note: for the grid and oracle users, the shell limits on resources such as CPU, memory and data segments must be adjusted. For the database to run, these limits need to be lifted (set to -1, i.e. unlimited) or set to the values recommended by Oracle.

9. Check whether core file creation is enabled

Check with the following command:

lsattr -E -l sys0 -a fullcore

fullcore false Enable full CORE dump True

1. Set the ulimit for core dumps to unlimited:

# ulimit -c unlimited

2. Set the ulimit for file size to unlimited:

# ulimit -f unlimited

10. Create the corresponding directories

# mkdir -p /oracle/app/oracle/11.2.0/db

# mkdir -p /oracle/app/grid/

# mkdir -p /oracle/grid

# mkdir -p /oracle/app/oraInventory

# chown -R oracle:oinstall /oracle/app/oracle

# chown -R grid:oinstall /oracle/grid

# chown -R grid:oinstall /oracle/app/grid

# chown -R grid:oinstall /oracle/app/oraInventory

# chmod -R 775 /oracle/

11. Disable the ntp service and use the Oracle cluster time synchronization service

# stopsrc -s xntpd

# mv /etc/ntp.conf /etc/ntp.conf.org

Note: the clocks of all nodes must stay synchronized while the RAC cluster is running. There are two common approaches: the NTP service provided by the operating system, or the cluster time synchronization service provided by Oracle. The second approach is adopted in this project; to avoid conflicts, /etc/ntp.conf is renamed so that NTP is effectively deconfigured.
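Once Grid Infrastructure is installed, it is worth confirming that Oracle's cluster time synchronization service has actually taken over. The sketch below only prints the verification commands (crsctl and cluvfy exist only on the cluster nodes once the grid software is in place); run the printed commands as the grid user:

```shell
# Print the post-install clock-sync checks; GRID_HOME matches node 1 in this document.
GRID_HOME=/oracle/app/11.2.0/grid
for c in "crsctl check ctss" "cluvfy comp clocksync -n all"; do
    echo "$GRID_HOME/bin/$c"
done
```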

12. Adjusting network parameters

Check the parameters:

/usr/sbin/no -a | fgrep ephemeral

tcp_ephemeral_low = 32768

tcp_ephemeral_high = 65535

udp_ephemeral_low = 32768

udp_ephemeral_high = 65535

Modify the parameters as follows:

/usr/sbin/no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500

/usr/sbin/no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500
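A small filter makes it easy to see which ephemeral-port parameters still need changing. A sketch, with sample input reproducing the `no -a | fgrep ephemeral` output above so it can be exercised off-box; on AIX, pipe the real `no -a` output into the function instead:

```shell
# Flag ephemeral-port parameters that differ from the target range 9000-65500.
check_ephemeral() {
    awk '$1 ~ /ephemeral_low$/  && $3 != 9000  { print $1 " should be 9000" }
         $1 ~ /ephemeral_high$/ && $3 != 65500 { print $1 " should be 65500" }'
}

printf '%s\n' \
    "tcp_ephemeral_low = 32768" \
    "tcp_ephemeral_high = 65535" \
    "udp_ephemeral_low = 9000" \
    "udp_ephemeral_high = 65500" | check_ephemeral
```

With the sample input, only the two tcp_ parameters are flagged; the udp_ values already match the target range.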

13. Adjusting other parameters

Check whether the system is running in compatibility mode:

lsattr -E -l sys0 -a pre520tune

If "pre520tune enable Pre-520 tuning compatibility mode True" is returned, the system is running in compatibility mode.

In that case the parameters are modified as follows:

# no -o parameter_name=value

Add to the /etc/rc.net file:

if [ -f /usr/sbin/no ]; then

/usr/sbin/no -o udp_sendspace=65536

/usr/sbin/no -o udp_recvspace=655360

/usr/sbin/no -o tcp_sendspace=65536

/usr/sbin/no -o tcp_recvspace=65536

/usr/sbin/no -o rfc1323=1

/usr/sbin/no -o sb_max=4194304

/usr/sbin/no -o ipqmaxlen=512

fi

If the command instead returns "pre520tune disable Pre-520 tuning compatibility mode True", the system is not running in compatibility mode, and the parameters are modified as follows:

/usr/sbin/no -r -o ipqmaxlen=512

/usr/sbin/no -p -o rfc1323=1

/usr/sbin/no -p -o sb_max=4194304

/usr/sbin/no -p -o tcp_recvspace=65536

/usr/sbin/no -p -o tcp_sendspace=65536

/usr/sbin/no -p -o udp_recvspace=655360

/usr/sbin/no -p -o udp_sendspace=65536
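Since the same values must be set identically on both nodes, the command list above can be generated from a single parameter table and pasted on each node. A sketch (ipqmaxlen takes `-r` in the list above, the rest `-p`):

```shell
# Generate the tuning commands from one table; ipqmaxlen gets -r, the rest -p.
params="ipqmaxlen=512 rfc1323=1 sb_max=4194304 tcp_recvspace=65536 tcp_sendspace=65536 udp_recvspace=655360 udp_sendspace=65536"
for p in $params; do
    case $p in
        ipqmaxlen=*) echo "/usr/sbin/no -r -o $p" ;;
        *)           echo "/usr/sbin/no -p -o $p" ;;
    esac
done
```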

Third, configure the mutual trust between nodes

1. Modify /etc/hosts to add the following:

vi /etc/hosts

127.0.0.1 loopback localhost # loopback (lo0) name/address

10.157.140.1 cjscora01

10.157.140.3 cjscora01-vip

192.168.150.1 cjscora01-priv

10.157.140.2 cjscora02

10.157.140.4 cjscora02-vip

192.168.150.2 cjscora02-priv

10.157.140.5 cjscora-scan

2. Configure user equivalence

This step is the traditional practice. Since Oracle 11g R2 the manual work can be omitted: during the graphical installation of the clusterware, SSH user equivalence can be configured with a single click.

Configure grid user equivalence

Execute the commands on node 1:

$ mkdir ~/.ssh

$ chmod 700 ~/.ssh

$ cd ~/.ssh -- the keys are appended into this directory below

$ /usr/bin/ssh-keygen -t rsa

$ /usr/bin/ssh-keygen -t dsa

$ touch ~/.ssh/authorized_keys

$ ssh cjscora01 cat /home/grid/.ssh/id_rsa.pub >> authorized_keys

$ ssh cjscora01 cat /home/grid/.ssh/id_dsa.pub >> authorized_keys

$ ssh cjscora02 cat /home/grid/.ssh/id_rsa.pub >> authorized_keys

$ ssh cjscora02 cat /home/grid/.ssh/id_dsa.pub >> authorized_keys

$ chmod 600 ~/.ssh/authorized_keys

$ exec /usr/bin/ssh-agent $SHELL

$ /usr/bin/ssh-add

$ scp authorized_keys cjscora02:/home/grid/.ssh -- copy the grid keys to node 2

$ ssh cjscora01 date

$ ssh cjscora02 date

$ ssh cjscora01-priv date

$ ssh cjscora02-priv date

Configure oracle user equivalence

On node 1, execute the following commands to create the oracle keys:

$ mkdir ~/.ssh

$ chmod 700 ~/.ssh

$ cd ~/.ssh -- the keys are appended into this directory below

$ /usr/bin/ssh-keygen -t rsa

$ /usr/bin/ssh-keygen -t dsa

$ touch ~/.ssh/authorized_keys

$ ssh cjscora01 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys

$ ssh cjscora01 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys

$ ssh cjscora02 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys

$ ssh cjscora02 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys

$ chmod 600 ~/.ssh/authorized_keys

$ exec /usr/bin/ssh-agent $SHELL

$ /usr/bin/ssh-add

$ scp authorized_keys cjscora02:/home/oracle/.ssh -- copy the oracle keys to node 2

$ ssh cjscora01 date

$ ssh cjscora02 date

$ ssh cjscora01-priv date

$ ssh cjscora02-priv date
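The four `ssh ... date` checks must succeed for every host alias without a password prompt, for both the grid and oracle users. A small generator for that verification matrix (a sketch; run the printed commands in each user's own session so every host key is accepted before the installer's own check runs):

```shell
# Print the ssh checks each user must run; every one should return the date
# without prompting for a password.
hosts="cjscora01 cjscora02 cjscora01-priv cjscora02-priv"
for h in $hosts; do
    echo "ssh $h date"
done
```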

Fourth, configure the ASM disks

-- verify through the following steps that the disks are visible and consistent; perform them on both nodes

1. Check the number of disks

/usr/sbin/lspv | grep -i none

hdisk4 none None

hdisk5 none None

hdisk6 none None

hdisk7 none None

...

hdisk315 none None

2. Assign a PVID to each disk

The same LUN in external storage may have different device names on different nodes: a disk named hdisk4 on node 1 might appear as hdisk5 on node 2. The ASM instance can still identify such a disk correctly, but the naming can confuse the administrator. One immutable attribute of the disk is its PVID, which is identical on both nodes. Generate the PVIDs with the following commands:

chdev -l hdisk4 -a pv=yes

chdev -l hdisk5 -a pv=yes

chdev -l hdisk6 -a pv=yes

chdev -l hdisk7 -a pv=yes

...

chdev -l hdisk100 -a pv=yes
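With several hundred disks, typing one chdev per disk is tedious; the run of commands abbreviated above can be produced with a loop. A dry-run sketch (drop the echo when running on AIX; plain shell arithmetic is used because seq is not guaranteed there):

```shell
# Print a chdev pv=yes command for each disk in the range (dry run).
first=4
last=7
n=$first
while [ "$n" -le "$last" ]; do
    echo "chdev -l hdisk$n -a pv=yes"
    n=$((n + 1))
done
```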

3. Check that the PVIDs correspond one to one between the two nodes

cjscora01# lspv

hdisk4 00f76fa9f361157b None

hdisk5 00f76fa9f36116a5 None

hdisk6 00f76fa9f36117d0 None

hdisk7 00f76fa9f3611901 None

...

hdisk315 00f76fb4f3612dce None

cjscora02# lspv

hdisk4 00f76fa9f361157b None

hdisk5 00f76fa9f36116a5 None

hdisk6 00f76fa9f36117d0 None

hdisk7 00f76fa9f3611901 None

...

hdisk315 00f76fb4f3612dce None
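Comparing hundreds of PVIDs by eye is error-prone, so the two lspv listings can be compared mechanically. A sketch with embedded sample data standing in for lspv output captured from each node; it compares only the PVID sets, since device names are allowed to differ between nodes:

```shell
# Compare the set of PVIDs seen by each node; identical sets mean both nodes
# see the same shared disks, regardless of hdisk naming.
node1="hdisk4 00f76fa9f361157b None
hdisk5 00f76fa9f36116a5 None"
node2="hdisk4 00f76fa9f361157b None
hdisk5 00f76fa9f36116a5 None"

pvids() { printf '%s\n' "$1" | awk '{print $2}' | sort; }

if [ "$(pvids "$node1")" = "$(pvids "$node2")" ]; then
    echo "PVID sets match"
else
    echo "PVID mismatch - investigate before installing"
fi
```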

4. Modify disk permissions

This project uses ASM, so the disks to be placed in ASM disk groups must be specified, and the grid user must have read and write access to them.

# chown grid:asmadmin /dev/rhdiskn -- example of changing disk ownership

chown grid:asmadmin /dev/rhdisk4

chown grid:asmadmin /dev/rhdisk5

chown grid:asmadmin /dev/rhdisk6

chown grid:asmadmin /dev/rhdisk7

...

chown grid:asmadmin /dev/rhdisk315

# chmod 660 /dev/rhdiskn -- example of changing disk permissions

chmod 660 /dev/rhdisk4

chmod 660 /dev/rhdisk5

chmod 660 /dev/rhdisk6

chmod 660 /dev/rhdisk7

...

chmod 660 /dev/rhdisk315

5. Check disk attributes

Some storage models have a reserve_lock or reserve_policy attribute that prevents multiple nodes from reading and writing the device in parallel, so the attribute must be changed before installing the clusterware.

lsattr -E -l hdisk4 | grep reserve_

lsattr -E -l hdisk5 | grep reserve_

lsattr -E -l hdisk6 | grep reserve_

lsattr -E -l hdisk7 | grep reserve_

...

lsattr -E -l hdisk315 | grep reserve_

6. Set the disk attribute to reserve_lock=no or reserve_policy=no_reserve

chdev -l hdiskn -a [reserve_lock=no | reserve_policy=no_reserve]

chdev -l hdisk4 -a reserve_policy=no_reserve

chdev -l hdisk5 -a reserve_policy=no_reserve

chdev -l hdisk6 -a reserve_policy=no_reserve

chdev -l hdisk7 -a reserve_policy=no_reserve

...

chdev -l hdisk100 -a reserve_policy=no_reserve

7. Clear the disk PVIDs

The PVID makes it possible to confirm that a disk is the same on different nodes, but the PVIDs should be cleared on each node before installing the clusterware, otherwise errors may occur during installation.

chdev -l hdisk4 -a pv=clear

After reading this article, I believe you have some understanding of how to install Oracle 11g 11.2.0.4 RAC on AIX 6.1. Thank you for reading!
